A Different Cosmology—Thoughts from Outside the Box

In this paper, we present a new cosmology based on the idea of a universe dominated by vacuum energy with time-varying curvature. In this model, the universe began with an exponential Planck era inflation before transitioning to a spacetime described by Einstein’s equations. While no explicit model of the Planck era is yet known, we do establish a number of properties that the vacuum of that time must have exhibited. In particular, we show that structures came into existence during that inflation that were later responsible for all cosmic structures. A new solution of Einstein’s equations incorporating time-varying curvature is presented which predicts that the scaling was initially power law with a parameter of γ = 1/2 before transitioning to an exponential acceleration of the present-day scaling. A formula relating the curvature to the vacuum energy density is also a part of the solution. A non-conventional model of nucleosynthesis provides a solution for the matter/antimatter asymmetry problem and a non-standard origin of the CMB. The CMB power spectrum is shown to be a consequence of uncertainties embedded during the initial inflation and the existence of superclusters. Using Einstein’s equations, we show that so-called dark matter is, in fact, vacuum energy. A number of other issues are discussed.


Introduction
The following Figure 1 is a preview of three of the many predictions of the model described in this paper. The first shows the acceleration of the scaling that follows from our solution of Einstein's equations. Despite its problems, the FRW model has such unquestioned acceptance that researchers seem to have forgotten that it is just a model. We are going to assert in this paper that the FRW model and the view of cosmology that follows from it are incorrect.
In place of the FRW model, we are going to advance a new view of cosmology that challenges much of the standard viewpoint. What we will show is that a Planck era inflation of the vacuum leading into the Einstein era can account for all the major features of cosmology provided that the curvature of the vacuum varies with time. This idea results in a consistent story that makes a considerable number of predictions that are in agreement with present-day observations. Before getting into the details of the model in the body of the paper, we will present some background material that will help to establish the ideas. No one will dispute the fact that a truly staggering amount of energy came into existence during the Big Bang. Two questions that immediately arise are where did this energy come from and what form did this energy take? As to the second question, energy is not a substance and so can only exist as a condition or property of something else. In the FRW case, it is assumed that the "something else" was either the field of an exotic meson or radiation or both. In this new model, we will argue that the "something else" is vacuum energy.
We can actually answer a narrow interpretation of that question immediately if, as is generally assumed, existence began with the Big Bang. The energy did not "come from" anywhere because there was no "from". Existence began with the Big Bang and so our universe defines the totality of existence, and any model of the Planck era must reflect that fact. There is no "outside" or "time" beyond our existence, so we cannot talk about a period of "time", large or small, that elapsed before the Big Bang. The Big Bang simply happened. The idea that the universe defines existence also precludes the idea of multiple universes that share some sort of simultaneous existence. The keyword is simultaneous: it would be impossible to say whether separate existences occurred before, during, or after our existence, or at our location or somewhere else, because such distinctions are meaningless without some degree of shared existence.
Subsequent to the Planck era, we are on solid ground because Einstein's equations can be used and here we present a new solution of the equations, based on a metric with time-varying curvature, that describes the evolution from the end of the Planck era onwards. The Planck era, on the other hand, presents a real problem because Einstein's equations are not applicable and we do not as yet have an alternative. Not only do we not have an analytical model to describe this era, we don't even have a convincing framework that can be used to talk about it. Nevertheless, as we will show in this paper, we can say quite a lot about the properties of the vacuum that came into existence during that era. These ideas will be developed as we proceed and a summing up will be given at the end of the paper.
As a starting point, consider for a moment the issue of measuring an interval of time. In order to do so, one must have a clock whose ticks are of shorter duration than the interval to be measured. Carrying this back in time, the ticks must get smaller and smaller until eventually we reach the Planck era. What we are proposing here is that that is as far as one can go. There is an ultimate tick and its value is the Planck time. Similarly, the ultimate length is the Planck length. The consequence is that there is a fuzziness limiting the degree to which spacetime points can be specified, which leads to a concept of uncertainty in which uncertainty is not a condition of some field but of the coordinates. (There is a large literature under the general heading of non-commutative geometry (NCG) which attempts to formalize this concept, but these models fall outside the range of ideas that we are asserting here. We will have more to say about this at the end of the paper.) The curvature of spacetime is presumably continuous, but we cannot say precisely what the curvature is at "some point" because, in part, we cannot say precisely what we mean by "some point". It follows then that existence did not begin at a point but within a Planck-sized volume with a time uncertainty (from our point of view) equal to the Planck time.
In conventional field theories, one typically works in a spacetime that can be described by a differential manifold. A significant difference between such fields and this new model of spacetime is that, while the former has limitations or uncertainties that limit one's knowledge in some way, no limitation is placed on our ability to distinguish between two points arbitrarily close together either in time or distance, a point that is essential if we are to describe spacetime in terms of a differential manifold. One of the tenets of our new model, on the other hand, is that there is such a limitation.
The uncertainty principle requires that the initial vacuum energy was encapsulated within a Planck-sized volume with a magnitude uncertain by an amount given by ΔE Δt ≥ ħ/2. Substituting the Planck time, we find, aside from a factor of 2, that ΔE is the Planck energy, with a corresponding energy density equal to ρ_vac = c⁷/(ħG²). The manifestation of this energy must have been the curvature of spacetime since there was no other existence. Given this fact, we can go one step further to conclude that the Planck energy density is the maximum possible energy density, because a larger energy density would necessarily require a curvature more compact than given by the Planck length, which we have just asserted is the smallest possible dimension. In some way then, the vacuum energy's existence was connected with the uncertainties of time and dimension. Another consequence of this uncertainty is that our normal concept of causality is not applicable and, as we will show, this had a number of important consequences.
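As a quick numerical check on these statements, the Planck-scale quantities above follow directly from ħ, G, and c. The sketch below uses standard CODATA-style SI values for the constants (an assumption; the paper does not list its values) and confirms that c⁷/(ħG²) is the same quantity as the Planck energy divided by a Planck volume:

```python
import math

# CODATA-style constants in SI units -- assumed values, not from the paper
hbar = 1.054571817e-34  # J s
G = 6.67430e-11         # m^3 kg^-1 s^-2
c = 2.99792458e8        # m/s

t_P = math.sqrt(hbar * G / c**5)   # Planck time, ~5.39e-44 s
l_P = math.sqrt(hbar * G / c**3)   # Planck length, ~1.62e-35 m
E_P = math.sqrt(hbar * c**5 / G)   # Planck energy, ~1.96e9 J

# Energy-density limit rho_vac = c^7 / (hbar G^2); equivalently E_P / l_P^3
rho_vac = c**7 / (hbar * G**2)

print(f"t_P       = {t_P:.3e} s")
print(f"l_P       = {l_P:.3e} m")
print(f"rho_vac   = {rho_vac:.3e} J/m^3")
print(f"E_P/l_P^3 = {E_P / l_P**3:.3e} J/m^3")  # same quantity, built differently
```

The limiting density comes out near 4.6 × 10¹¹³ J/m³, which is the figure the argument above depends on.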
Even though we believe that uncertainty was a crucial element during the Planck era, we don't believe that quantization had anything to do with it, which is one of the points of departure from the NCG models. In fact, we don't believe that it makes any sense to talk about the quantization of gravity at all. Quantized fields describing ordinary matter all share a few general characteristics: they can be localized, they have identifying properties such as mass, and they can exert forces on one another. Gravity exhibits none of these characteristics. G, after all, is the proportionality between energy and curvature; only incidentally, when one takes a Newtonian point of view, does it become a proportionality between mass and force.
Moving on from the Planck era, we know that Einstein's equations are non-linear but, in conventional usage, one might say only in a trivial way. If one distorts spacetime at one point, the Ricci tensor components, which happen to be nonlinear, propagate that distortion to the surrounding spacetime in such a way as to maintain continuous derivatives, but spacetime is passive in this process; in other words, spacetime is not acting as its own source. In the new model, we interpret the equations differently to achieve a set of equations that are non-linear in a non-trivial way. If we specify, a priori, some distribution of ordinary mass/energy, it will give rise to some configuration of curvature; more generally, in our new model, instead of ordinary matter being the source of the energy or curvature, the curvature of spacetime itself becomes the source, and we derive an exact expression of this idea. In addition to the equations that relate geometry to energy density and pressure, we will find that conservation of energy-momentum demands that the curvature of spacetime at any point is proportional to the sum of the vacuum energy density, pressure, and any matter at that same point.
We now wish to consider the generally held belief that on the largest scales, the universe is homogeneous and isotropic. One models such a universe in terms of a sequence of hypersurfaces, each of which is homogeneous and isotropic. Expressed in terms of a symmetry of spacetime, this leads directly to the requirement that the spacetime curvature of each hypersurface must be constant. To build a complete model of the universe, however, these hypersurfaces must be strung together in some way and symmetry arguments say nothing about how this is to be done. This brings us to the second and quite independent idea, which is how the universe should appear to fundamental observers. When we speak of appearance, we are speaking about light that reaches us from distant objects and, of necessity, that light will have passed through a sequence of hypersurfaces. In the FRW case, it is assumed that all hypersurfaces have the same constant curvature, with the result that the universe will appear homogeneous and isotropic to fundamental observers. In this new model, on the other hand, we assert that the curvature varies with time, so while the universe will appear isotropic to fundamental observers, it will not appear homogeneous even though each hypersurface on its own is homogeneous and isotropic.
In the first part of this paper, we will consider the inflation during the Planck era. As noted above, we do not have a proper model to describe this period but, nevertheless, we will establish some important facts about the evolution of the universe by examining a simple model. Having done so, we will then consider in some detail the correct form of the metric for a spacetime with time-varying curvature. The resulting equations are then solved. Two important results that follow are a prediction of a present-day exponential expansion of the universe independent of any parameter adjustments, and that the time-varying curvature of spacetime is proportional to the vacuum energy density. Next, we will present a detailed non-conventional model of nucleosynthesis and the origin of the CMB which, incidentally, contains a solution of the so-called Lithium problem. Still later, we show that dark matter is, in fact, vacuum energy. Next, we discuss the origin of cosmic structures and the power spectrum of the CMB, from which we discover that all such structures had a common origin in an imprint that was embedded in the vacuum during the Planck era inflation, thus bringing us back to our starting point.

Planck Era
We will begin with some order-of-magnitude arguments that connect the initial curvature of spacetime with the total energy of the universe. As we proceed, we will need the values of some basic parameters and, while there is some uncertainty about these, there does seem to be a consensus that certain values are reasonable (subscript 0 denotes present-day values), with the age of the universe having the smallest uncertainty. The time-time component of Einstein's equations gives

R₀₀ = (4πG/c⁴)(ρc² + 3p).

Making use of the facts that the scaled Ricci tensor has the dimensions of (length)⁻² and that it embodies the geometry that defines the curvature, we can define a parameter we will call the characteristic radius of curvature, or norm of the Ricci tensor, R_C. Later, once we have proposed a metric, we will show that the characteristic curvature is equivalent to the Ricci scalar. Ignoring the pressure term and equating these two gives us a connection between the radius of curvature and the vacuum energy,

R_C ≈ c/√(4πGρ).

We asserted earlier that no dimension can be smaller than a Planck length, and so we find that packing the total energy into a Planck-sized volume is impossible.
We next ask what volume is necessary to contain the present-day energy of the universe without exceeding the Planck density limit. Again using (2-4), we can solve for the required dimension. The conclusion we draw from this is that by placing a limit on the minimum possible distance, we place an upper limit on the allowed energy density of spacetime, which echoes the arguments we made in the introduction.
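The connection between the density limit and the minimum length can be checked numerically. Assuming the pressure-free relation R_C = c/√(4πGρ) with ρ a mass density (the numerical factor is the paper's; treat this as an order-of-magnitude sketch with assumed CODATA constants), a vacuum at the Planck energy density has a radius of curvature within a factor of a few of the Planck length:

```python
import math

hbar = 1.054571817e-34  # J s   (assumed SI values)
G = 6.67430e-11         # m^3 kg^-1 s^-2
c = 2.99792458e8        # m/s

l_P = math.sqrt(hbar * G / c**3)       # Planck length, m
rho_P = c**7 / (hbar * G**2)           # Planck energy density, J/m^3

def radius_of_curvature(rho_energy):
    """Characteristic radius of curvature for a vacuum of energy density
    rho_energy (J/m^3), from 1/R_C^2 ~ (4 pi G / c^4) * rho_energy."""
    return c**2 / math.sqrt(4 * math.pi * G * rho_energy)

R_C = radius_of_curvature(rho_P)
print(f"R_C at Planck density: {R_C:.3e} m  ({R_C / l_P:.2f} l_P)")
```

A higher energy density would force R_C below the Planck length, which is the contradiction the argument in the text relies on.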

A Simple Model
We now wish to develop a model that will allow us to probe the initial expansion of the universe. The main problem we have is that, as dimensions approach Planck dimensions, our normal notion of differentiation is no longer applicable and, as noted earlier, the concept of causality becomes an issue because of the uncertainties of both time and dimension. In order to build a model, we must first have a mental picture of the process we are trying to understand. One concept that comes immediately to mind is the idea that the universe began as a Planck-sized volume that underwent an exponential inflation and, in fact, this is the model we will develop in what follows. As the development proceeds in this paper, however, we will be forced to recognize that this concept is only a partial solution because, as we will prove, structures developed in the vacuum that were both very smooth and vastly too large to be explained within the constraints of normal causality. If we suppose, on the other hand, that some sort of simultaneous beginning occurred over a volume much larger than a Planck volume, we are faced with the even more intractable problem of explaining the existence of some influence that coordinated all these simultaneous beginnings. It seems most likely that our first idea is closer to being correct, but with causality expressed in terms of an essentially unlimited speed of influence. The word influence is used here to distinguish this idea from radiation, which was definitely not a part of this process. We will build on this idea throughout this paper and finally tie things together at the end of the paper in Section 16.
We will now define a simple model embodying three general constraints that result in an exponential inflation. The significant result that follows from this model is that the total vacuum energy equaled the present-day energy of the universe at the end of the inflation and that the end of the inflation occurred when the uncertainty in time became small relative to the age of the universe. Our contention is that even though the model isn't correct, the physical picture that emerges is valid. Keep in mind that while Einstein's equations had no meaning during the Planck era, the correct theory must approach Einstein's equations asymptotically for times large compared to the Planck time, so the use of an Einstein equation model is not entirely unwarranted.
The first of these constraints is that the acceleration of the scaling is dependent on the energy density and the pressure. For the purposes of this argument any metric that embodies the idea that energy density slows the expansion and a negative pressure accelerates the expansion will work. Since the time coordinate Einstein equation of the FRW metric provides the simplest expression of this idea, that is the equation we will use.
We have introduced a parameter f defined in terms of the familiar perfect-fluid pressure p. Note that we have introduced a minus sign in the definition of f.
The next idea is that the initial expansion was non-adiabatic. First there was nothing and later there was something, so the expansion was definitely non-adiabatic. For a closed, or adiabatic, system, energy-momentum conservation requires that ∇_ν T^{μν} = 0. (3-4) In the previous section, we established that the total energy could not have been simply dumped into a Planck-sized volume, so the only alternative is that the energy was realized over a span of time sufficiently long to allow the energy of the universe to reach its present value without the energy-density limit being exceeded. The simplest modification that incorporates the idea of a non-adiabatic expansion is to add a source term to the right-hand side of (3-4). By construction, the covariant derivative of the left-hand side vanishes but, in order to model the introduction of existence, the covariant derivative of the right-hand side must not vanish. To fix things up, we could add a "source" term to the left-hand side, where the source represents vacuum energy that varies with time but not location. In fact, it could not vary with location because, during the inflation, there was not yet a well-defined concept of location. Such a term implies that the vacuum is, in fact, its own source, which is consistent with the notion that the energy arose within the vacuum as a consequence of uncertainty. Calculating the covariant derivative of both sides and carrying this through, (3-4) becomes a pair of evolution equations with the source on the right-hand side. Since we expect the source to lead to an increase in the energy density, from (3-9b) it is apparent that its time derivative must be negative. This in turn requires that initially the source, or curvature, must have been maximal, which is in accord with the arguments given earlier. This model also provides a built-in cutoff of the source, corresponding to the time at which S(t) = 0.
Finally, we come to the third constraint, which is simply that the energy density of the vacuum cannot exceed the Planck energy density, i.e. ρ ≤ c⁷/(ħG²). It is important to appreciate that even though we have borrowed two of the FRW equations for our model, the interpretation of these equations is very different from the FRW interpretation. The new model asserts that the universe began as a spacetime vacuum with a high degree of curvature that we interpret as energy, and that this energy is not related in any way to ordinary matter.
We now wish to solve this set of equations given some initial conditions. Getting to practical matters, there is some arbitrariness in how one chooses to define the coordinates. In this paper, we will choose the radial coordinate to be dimensionless, so it ranges in value from 0 to 1. Next, because of the huge range of values of the scaling, it is useful to express time and the scaling in terms of Planck dimensions. We define a variable τ = ln(t/t_P), where t_P is the Planck time; this definition of t_P follows from the constants of the field equation, as will be shown below. The scaling is defined similarly in terms of α(t) = a(t)/a_P, (3-11) where a_P is the actual Planck length (1.6 × 10⁻³⁵ m). We also define the function ζ to be the ratio of the energy density to its limiting value.
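Converting between SI units and these logarithmic Planck variables is a one-liner. The sketch below (with assumed CODATA constants) also checks the consistency of the definitions against the inflation-end value quoted later, τ_I ≈ 3.8–3.9 at about 46–49 Planck times:

```python
import math

hbar, G, c = 1.054571817e-34, 6.67430e-11, 2.99792458e8  # assumed SI values
t_P = math.sqrt(hbar * G / c**5)   # Planck time
a_P = math.sqrt(hbar * G / c**3)   # Planck length

def tau(t):
    """Logarithmic time variable tau = ln(t / t_P)."""
    return math.log(t / t_P)

def alpha(a):
    """Scaling expressed in Planck lengths, alpha = a / a_P."""
    return a / a_P

# 47 Planck times should land inside the quoted tau_I range of 3.8-3.9
print(f"tau(47 t_P) = {tau(47 * t_P):.2f}")
```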
This is an initial value problem in which we begin with a Planck-sized volume and the source, which can be specified in several ways. We considered several possibilities but found that the end result is much the same no matter what assumptions are made. In all cases, the evolution divides into three phases. The first, which we will call the inflationary phase, is the period during which the source is non-zero and the energy density is at its maximal value. Said another way, it is during this period that the covariant derivative of the energy tensor is non-zero. The second phase, which we will call the transition phase, is the period during which f decays to zero, and the third is the Einstein era. The inflationary phase can be further subdivided into an initialization period, which lasted for 3 Planck times or less, followed by the actual inflationary period.
We are now in a position to examine some results. The numerical integrations were performed using the standard 4th-order Runge-Kutta method.
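Since the model equations themselves are only sketched above, what can usefully be shown is the integrator. The following is a minimal, generic implementation of the classical 4th-order Runge-Kutta step of the kind used for these integrations, demonstrated on a test equation with a known solution (the cosmological system itself is not reproduced here):

```python
import math

def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, y0, t1, n):
    """Integrate dy/dt = f(t, y) from t0 to t1 in n fixed steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# Check on dy/dt = y, y(0) = 1, whose exact solution at t = 1 is e
y_num = integrate(lambda t, y: y, 0.0, 1.0, 1.0, 100)
print(y_num, math.e)
```

With 100 steps the numerical result agrees with e to better than 10⁻⁸, which is the kind of accuracy that makes the fixed-step method adequate for a toy model of this sort.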

Inflation
What we found was that the model results are, for the most part, insensitive to the details of the initial conditions or the transition pressure decay: either the scaling curve has the shape shown in Figure 2 or there is no solution at all.
The detailed evolution during the initialization period varies considerably depending on the initial conditions chosen, but it never lasts more than 2 or 3 Planck times and, once the inflation begins, the differences cease to matter because the inflation is essentially the same in all cases. The end of the inflation generally occurs at a value of τ_I ≈ 3.8–3.9, or about 46–49 Planck times (Figure 2 shows the typical initial evolution of the universe). Following the inflation is the transition era, which eventually ends at a value of τ somewhere between 6 and 8, or a time between 400 and 3000 Planck times.
No matter how we start things off, at some point the energy density reaches its Planck limit, at which point dζ/dτ = 0. From that point on, the pressure ratio is fixed by (3-17c). If we substitute this into (3-17b) with ζ = 1, we find that β = b e^τ, where b is a constant. The significant point is that β, and hence α, is an exponential in τ, which means that there is an exponential inflation of the scaling. Treating the energy density as an effective constant, ignoring f results in a differential equation for a(t) whose solution is also an exponential. Thus, in either model, we get an initial exponential inflation of the universe.
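The quoted numbers are mutually consistent. Assuming β = ln α, as the logarithmic definitions suggest, the β ≈ 46 e-folds at the end of the inflation correspond to an overall expansion factor of order 10²⁰, close to the factor of 10¹⁹ quoted later in the discussion of radiation:

```python
import math

beta_end = 46.0                 # e-folds of scaling at the end of inflation (from the text)
alpha_end = math.exp(beta_end)  # scaling in Planck lengths, assuming alpha = e^beta
a_P = 1.6e-35                   # Planck length in metres (value used in the text)

print(f"expansion factor       ~ {alpha_end:.1e}")
print(f"scale at end of inflation ~ {alpha_end * a_P:.1e} m")
```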
What we can now see is that the Planck limits fit very nicely with the model's prediction of the evolution. It was only during the Planck era that the non-adiabatic condition existed, which suggests that the nature of the source is connected with the fuzziness limiting the degree to which spacetime points can be specified. As we mentioned earlier, an uncertainty of time equal to the Planck time implies an uncertainty in the energy density equal to the Planck energy density. The end of the inflation occurred at about 46 Planck times, which means that the source cutoff, based on the energy argument, happened at about the time that the overall time scale was beginning to be large compared to the Planck time and the corresponding energy uncertainty would have become small.
We started by specifying the total energy but, turning this around, we now see that the model is really saying that the present-day energy and size of the universe were fixed by the condition that the uncertainty of the vacuum energy had become negligible, and our contention is that this result is a statement of fact rather than a result limited to this particular model. This is the principal result of this portion of the paper. Existence began as a vacuum with Planck uncertainties of time and distance that then became realized when time and distance became large compared to Planck dimensions.
With the ending of the inflation, the evolution entered the transition period, which was not only a transition from an exponential expansion to perhaps a power-law expansion but also a transition from the Planck era to the Einstein era.
The model predicts that the end of the transition occurred at precisely the point in time at which Einstein's equations would be expected to have become valid.
The "creation" of vacuum energy had long since ended and the granularity of the coordinates was by then a very small fraction of the current age so one would then expect a differential manifold description to be a reasonable approximation.
Initially, during the transition phase, the energy density and pressure dropped very rapidly and the constraint ζ = 1 was no longer valid. Eventually, f vanishes, but we cannot just set f = 0 at the end of the inflation because the solution fails if we do so. It is necessary then to postulate a decay model for f and, while the equations do not give us an explicit expression for the decay, they do impose a constraint on the decay rate. Various decay models were tried (linear, exponential, Gaussian) and it turned out that the results are not sensitive to the particular choice made. The exponential model seemed the most reasonable since the other quantities are exponential, so that is the model used to obtain the results shown below. The formula we used was f(τ) = f_I exp[−(τ − τ_I)/τ₂], where the subscript "I" denotes values at the time of the source cutoff and τ₂ is an adjustable parameter.
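A minimal sketch of this exponential decay model follows. The functional form and the parameter names come from the text; the numerical values of f_I, τ_I, and τ₂ used below are illustrative assumptions (τ_I from the quoted inflation end, τ₂ at its quoted minimum of about 4):

```python
import math

def f_decay(tau, f_I, tau_I, tau_2):
    """Exponential decay of the pressure ratio after the source cutoff:
    f(tau) = f_I * exp(-(tau - tau_I) / tau_2) for tau >= tau_I."""
    return f_I * math.exp(-(tau - tau_I) / tau_2)

# Illustrative values: f_I = 1 at tau_I = 3.85, minimum decay parameter tau_2 = 4
f_I, tau_I, tau_2 = 1.0, 3.85, 4.0
for tau in (3.85, 6.0, 8.0):
    print(f"tau = {tau}: f = {f_decay(tau, f_I, tau_I, tau_2):.3f}")
```

Larger τ₂ values decay more slowly, which matches the observation below that increasing τ₂ postpones the end of the transition.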
After the source cutoff, (3-5) can be rewritten in a form which shows that the pressure does act as a source during the transition period.
After running a number of simulations, we determined that the minimum value of the decay parameter τ₂ is about 4 and that the effect of increasing τ₂ is to postpone the end of the transition to some degree. Each time we adjusted the decay parameter, we also had to adjust the source cutoff so that the final energy matched the present-day energy of the universe.
Eventually, the pressure ratio vanishes, the total energy reaches its final, constant value, and we enter the Einstein era. Since the total energy is then constant, the result agrees with the previous equation with f = 0. We can now easily obtain an exact solution. We define a scaling parameter γ by a(t) = a_T (t/t_T)^γ, where t_T is the time at the end of the transition period. Substituting into (4-6) gives the energy density and, substituting both into (3-1) and then back into (4-8), the evolution is determined. The point at which γ = 2/3 is the point at which the total energy ceased to change.
In order to test the sensitivity of the model, we tried a number of different scenarios to determine if the initial conditions were important. What we found was that the solution is insensitive to the initial conditions. The arguments based on uncertainty suggest that it makes the most sense to assume that the universe began as a single Planck-sized volume, and that assumption works. Note that this value is actually quite small given that the value of β at the end of the inflation is about 46.
We also examined the sensitivity to the source strength. We won't show the results, but it turns out that similar results are obtained for any σ in the range 0.01 < σ < 1.5. For larger values of σ the solution becomes erratic and, by σ = 2.0, the pressure is no longer large enough to prevent the collapse induced by the energy density. For smaller values of σ, a solution still exists but with a decreasing source cutoff time and an increasingly long transition period.
After running many simulations, we found a set of parameters for which the total energy is in reasonable agreement with the value of (2-1); we also see that the value of a₀ is only a little larger than the value given earlier. Looking back at (3-4) and (3-5), we see that they are linear in the energy density. This is a consequence of the fact that the equations do not include self-interactions of the field. With self-interactions, on the other hand, the equations would not be linear in the energy density and, further, these self-interactions would tend to slow the expansion, with the result that the end of the transition period would occur at a somewhat later time than given by the γ = 2/3 criterion which, after all, is simply the asymptotic limit of the particular simple model we used. In fact, as we will show below, observations and the requirements imposed by the existence of the CMB require that the scaling parameter during the post-Planck era had a value of γ ≈ 0.5, a value which characterized the expansion up until about the time of galaxy formation. Subsequently, the exact solution of the scaling which we have found shows that the scaling began an exponential acceleration.
Evidence in support of this view follows from observations of the Hubble parameter. By definition, H = ȧ/a, which for power-law scaling has the value H = γ/t. The actual value of the Hubble constant is still a matter of debate; for the purposes of this paper, we will adopt a nominal value, with the understanding that an adjustment will be required later. It is important to note that these results follow directly from our model of the vacuum and have nothing to do with ordinary matter, which didn't appear on the scene for another 10³⁸ ticks of our Planck clock. The expansion of the universe was and is controlled by the vacuum energy from start to finish.
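The relation H = γ/t for power-law scaling a ∝ t^γ is easy to verify numerically. The sketch below uses the paper's post-Planck value γ = 0.5 and a central finite difference for ȧ (the time value 10.0 is an arbitrary test point, not a cosmological quantity):

```python
gamma = 0.5            # post-Planck-era scaling exponent from the text

def a(t):
    return t ** gamma  # power-law scaling; overall normalization cancels in H

def hubble(t, dt=1e-6):
    """H = (da/dt)/a, with da/dt taken as a central finite difference."""
    adot = (a(t + dt) - a(t - dt)) / (2 * dt)
    return adot / a(t)

t = 10.0
print(hubble(t), gamma / t)  # the two agree to finite-difference accuracy
```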

Curvature
One of the principal tenets of the new model is that the curvature must vary with time and, consequently, that the FRW field equations do not correctly describe the evolution of the universe. By dimensional arguments, the FRW curvature K must be related to the curvature parameter k by K = k/a(t)². With the assumption that k is proportional to a linear combination of the energy density and pressure, and using the fact that the coupling between energy and curvature must include a factor of G, we find that only one combination of the variables has the correct units and, with this result, the radius of curvature varies linearly with time. Later we will find that the linear dependence is, in fact, an exact result independent of the scaling. At the transition time, R_C was three orders of magnitude larger than the Planck distance. This value is not unreasonable from the point of view of just having exited the Planck era, but it is widely different from the curvature of the essentially flat universe of the present day. This result suggests that the curvature varies as t^(−2/3) but, as we will show in Section 8, this is a special case of the exact result that k(t) ∝ a(t)/t.

Radiation
In some versions of the FRW model of the Big Bang, it is posited that energy was dumped into the nascent universe in the form of radiation. We have already demonstrated that the evolution of the universe can be understood without reference to radiation but we now will go even further and argue that radiation during the initial evolution of the universe was not even possible.
The initial inflation ended after a few tens of Planck times, and thus the maximum distance any radiation could possibly have traveled in that time was on the order of 1.3 × 10⁻³³ m. But that isn't the whole story, because any radiation would have been restricted to the geodesics of the metric (assuming that such a concept had any meaning during the Planck era). Since the radius of curvature at that time was given by the Planck length, any extant radiation would be turned back onto itself in a volume also given by the Planck length. This being the case, instead of going somewhere, the radiation would be confined to the minimal possible physical dimension. At the same time that the radiation wasn't going anywhere, the scaling was increasing by a factor of 10¹⁹ to a value on the order of 10⁻¹⁶ m, so even if there had been some form of radiation present, it would have been impossible for a signal to propagate from any one point in the universe to any other point. We will give a more formal statement of this result in Sec. 8, where we show that what we will call the horizon distance is, in fact, equal to the radius of curvature, and during the inflation the radius of curvature was fixed at one Planck length.
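The light-travel bound is simple arithmetic. Assuming the inflation lasted the 46–49 Planck times quoted earlier (and assumed CODATA constants), the farthest any signal moving at c could travel comes out just under 10⁻³³ m, the same order as the figure quoted above:

```python
import math

hbar, G, c = 1.054571817e-34, 6.67430e-11, 2.99792458e8  # assumed SI values
t_P = math.sqrt(hbar * G / c**5)     # Planck time, ~5.39e-44 s

for n in (46, 49):                   # duration of the inflation in Planck times
    d = c * n * t_P                  # maximum light-travel distance
    print(f"{n} t_P: d = {d:.2e} m")
```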
We also know that any extant radiation must have been deposited by the initial energy source and so by the end of the inflation, all the radiation that would have existed in the early universe must have, by then, been there. But since radiation could not have existed during the inflation, it cannot have been around shortly afterwards either.
In the standard model, the radiation is supposed to have somehow come into existence at the termination of a period of inflation although the model does not actually explain how that happened. The time at which this was supposed to happen would have been long after the end of the Plank era on a logarithmic scale. In this new model, on the other hand, it is asserted that the radiation came into existence at a still much later time during nucleosynthesis and there was never a period during which the expansion was dominated by radiation.

Homogeneity and Isotropy
In the standard model, the curvature is assumed to be fixed, and a consequence of that is that at an early enough time, it would have been possible for any point in the universe to be within the horizon of any other. That being the case, it is assumed that any initial anisotropies would have been smoothed out. That is, in itself, a big "if", since smoothing requires sufficient time for mixing to occur and the time scale involved is limited to only about 10⁻³⁵ s. The next problem was how to propagate that uniformity to the present day without large inhomogeneities developing via the interactions of different regions of spacetime. The solution was to imagine an inflation in which the spatial dimensions outran the signaling distance, thus preserving the initial uniformity.
Another assumption made is that the conventional inflation was adiabatic. A consequence of this is that the entropy at the end of the inflation would have been the same as at the beginning, when it is assumed to have been of O(1). The present-day entropy, on the other hand, is thought to be larger by many orders of magnitude, and in the conventional model, this huge increase is assumed to have happened during a period immediately after the end of inflation when the energy of the inflation mesons was converted into the radiation plasma. The conventional model assumes that the inflation was driven by the action of an exotic meson but makes no attempt to explain the origin of the exotic meson field, which itself would have been a non-adiabatic event. In the new model, the situation is quite different. First, the horizon distance was fixed at the Planck length during the inflation, so there was never a time during the Planck era when all points, or even a few points, in the universe could communicate in a conventional manner. As the inflation progressed, more and more Planck-sized regions came into existence, but each was isolated from all the others. The entropy is quite sensitive to the value of τ at the end of the inflation, however, so this value will require some adjustment. Since the actual initial expansion of the universe would have been slower than γ = 2/3, the inflation would of necessity have extended for a slightly longer period of time in order to match the present size of the universe. We will return to this point in Sec. 8.
At the end of the inflation, with the source cut off, the radius of curvature began to increase faster than the scaling. Remember that the defining condition of the inflation was that the energy density would be at its maximum possible value. The consequence of this would have been a universe that was homogeneous to a high degree, because any departure from homogeneity would imply that the density at one point was different from the density somewhere else. Since the energy density is directly related to the curvature and the scaling, these too would have been homogeneous. Subsequent to the inflation, the universe would have remained homogeneous on large scales because there was no mechanism by which the homogeneity could have been disrupted. Each small region of the universe evolved without communication with any other region, and since the physics was the same everywhere, the regions evolved in lockstep. The universe remained homogeneous precisely because of the lack of communication.
Almost, that is, because we must allow for differences in the energy that would have resulted from the uncertainties at the time the inflation ended. Initially, the energy of each Planck-sized region was nominally the Planck energy, with an uncertainty equal to that same value. By the end of the inflation, however, this was no longer the case. In order to determine the spatial characteristics, we examine the expectation value of the density, or any other parameter, at two different points. Doing so, we see that the distribution is scale-invariant for points further apart than one Planck length. What that means is that the fluctuations of the universe as a whole do not get smoothed out as the communication regions expand, even though the internal fluctuations within each such region will, to some extent, get smoothed. In fact, we will see in a later section of this paper that it is this variance that is responsible for the large-angle CMB power spectrum.
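The two-point statement can be illustrated with a toy calculation (our own sketch, not the paper's formalism): model each Planck-sized region as carrying an independent random energy. The covariance between distinct regions is then near zero at every separation, so the fluctuation statistics do not depend on distance, while the single-region variance persists:

```python
import random

random.seed(1)  # deterministic toy run

N = 20000
# Toy model: each region's energy = nominal value 1 plus an independent
# fluctuation (uniform in [-0.5, 0.5], an illustrative choice).
energies = [1.0 + random.uniform(-0.5, 0.5) for _ in range(N)]

mean = sum(energies) / N
var = sum((e - mean) ** 2 for e in energies) / N

def covariance(sep):
    """Sample covariance between regions separated by `sep` indices."""
    pairs = [(energies[i] - mean) * (energies[i + sep] - mean)
             for i in range(N - sep)]
    return sum(pairs) / len(pairs)

# Covariance is ~0 at any separation >= 1: the statistics are
# separation-independent, i.e. scale-invariant in the sense of the text.
for sep in (1, 10, 1000):
    print(sep, covariance(sep))
print("variance:", var)  # stays near 1/12 ≈ 0.083 regardless of separation
```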
You will note that we have not said anything about the smooth structures that we have been insisting must also have existed. In fact, as we will see, these were of very small amplitude and so their existence does not alter the general picture presented here.
The next significant step in the evolution was the creation of ordinary matter but before we get to that, we will present the full metric along with its solution and examine more carefully how a homogeneous and isotropic universe can be reconciled with time-varying curvature.

Time-Varying Curvature
This section is devoted to the problem of understanding the evolution of a universe in which the curvature varies with time. One of the significant consequences of time-varying curvature is that we must distinguish between the universe as it actually is and as it is perceived by an observer.
When we speak of the universe as it actually is, we are speaking about such characteristics as the curvature and scaling of a sequence of spacelike hypersurfaces which are described by the 3-space portion of the metric and which exist outside the context of Einstein's equations. When we speak of the perceptions of an observer, on the other hand, we are speaking about the capture of signals that originated at some point in spacetime and then passed through a sequence of such hypersurfaces to reach an observer at some later point in time. It is these signals that are described by Einstein's equations. In an FRW universe where the curvature is time-invariant, one can, for the most part, ignore this distinction.
With time-varying curvature, however, this distinction is important and has numerous consequences that must be considered.
We will begin with a review of the formalism defining homogeneous and isotropic hypersurfaces. This will be mostly familiar ground, but since everything that follows is dependent on these ideas, it will be useful to make sure we have a common starting point. Referring to e.g. [1], chapter 5, we find that on any such hypersurface, symmetry arguments require that the spatial portion of the Riemann tensor must have the form R_ijkl = K(g_ik g_jl − g_il g_jk), where the curvature K is a constant (on that hypersurface). Given this fact, it then follows that the spatial portion of the metric must have the form dσ² = a²[dr²/(1 − kr²) + r²(dθ² + sin²θ dφ²)], with K = k/a² (see e.g. [2], chapter 14). We emphasize that this is a statement about each spacelike hypersurface, that it follows from symmetry arguments alone, and that it has nothing to do with Einstein's equations. It is also important that even though this expression involves the radial coordinate, r, there is no notion of a preferred origin. All points in the hypersurface are equivalent.
The scaling, like the curvature, is a property of the 3-space so by our assumption of homogeneity and isotropy, it, like the curvature, must also depend only on the time.
Without providing the proof, it follows from the equations that to avoid singularities at r = 0, h̃ must be proportional to r. Next, the work of [3] allows us to replace q(ct, r) with the form q(ct, r) = −k(ct) r h(ct, r)/a(ct), where we have defined h(ct, r) = r h̃(ct, r). To avoid another singularity, it happens that the radial derivative of h must also vanish at r = 0.
We now note that redefining the radial coordinate as minus itself leaves the metric unchanged, from which we can conclude that h is an even function of r, or in other words, a function of r² rather than just r. The first derivative then automatically vanishes at r = 0, thus satisfying the various conditions. The same argument applied to the energy-momentum tensor shows that the energy density and pressure are also functions of r², so the first derivative of the pressure also automatically vanishes at r = 0. The final metric is then built from a(ct), k(ct), h(ct, r), and S(ct, r). The expansions of the Ricci tensor components are long, so we won't write them out here. (By long, we mean that some of the expansions contain well over 300 terms.)
In addition to these equations, we have the two equations that follow from the conservation condition, T^{μν}_{;ν} = 0. Like the Ricci tensor components, these are also rather long, so we will not write them out either. As we noted earlier, there is a difference between the "real" universe as a sequence of hyperspaces and the universe as perceived by any observer. At any moment of cosmic time, the universe consists of one single hyperspace, which is characterized by its curvature and scaling. There is no notion of time or location on this hyperspace because time is the same everywhere and all points are equivalent, i.e. there is no preferred origin. Also, because a hyperspace exists at a single moment of time, signals within a hyperspace are impossible, and thus an observer placed on such a hyperspace would not be able to say anything about it; his or her own hyperspace is unobservable.
Observers do receive signals, of course, but what they are observing are signals arriving from previous hyperspaces. It is these signals that constitute the observer's perception of the universe, and it is these signals that Einstein's equations describe. In other words, Einstein's equations describe any observer's perception of the universe in terms of his or her time and radial coordinates. A different observer would have a different perception, even though they exist in a single universe, and the relationship between these is also fixed by the equations.
We have then a "real" universe consisting of a sequence of homogeneous hypersurfaces, which is overlaid by non-homogeneous observer perceptions that are unique to each observer. Since the equations are dependent on a metric, it follows that any conclusions drawn from observations about the expansion of the universe, for example, are totally dependent on the choice of metric, since any observer's perception is dependent on all the intervening spacetime between the observer and the observed object.
A key point is that with time-varying curvature, these field equations are functions not just of time but also of the radial coordinate, i.e. G_μν = G_μν(ct, r), etc. The question, then, is how do we interpret these field equations, which certainly do have reference to an origin (r = 0) and which do describe signals, in such a way that they describe hyperspaces which have neither. The resolution of this dichotomy comes when we realize that each hypersurface is just the set of all possible observer origins, and since all such locations are equivalent, any one observer's field equations will comprise the field equations of the hyperspace as a whole when evaluated at that observer's origin. Thus, the field equations that replace the FRW field equations follow not from equations which are free of the radial coordinate, as is the case with the FRW metric, but from the r = 0 limit of the more general field equations which are dependent on the radial coordinate. We can conclude then that Einstein's equations, which are concerned with signals, make contact with an observer's hyperspace only in the limit of signals of zero extent.
The curvature is a property of the hypersurface and so must relate to the energy density and pressure of that hypersurface, so (5-3) now becomes k(ct) = (G/c⁴) a²(ct)(k₁ρ + k₂p), where the quantities on the RHS are evaluated at r = 0 and k₁, k₂ are constants yet to be determined. As we noted earlier, this equation is, in a sense, a replacement for the "equation of state" of the FRW model, but with the difference that, in this case, it falls out as part of the solution rather than being introduced as an ad hoc assumption.
Starting with the above equations and then taking the limit r → 0, we obtain the following equations. (Note that we are switching to the Mathematica notation, which is more compact than the standard notation for partial derivatives.) Our original six equations have thus been reduced to four. An important simplification has occurred because none of these contain spatial derivatives.
We will now set about solving these equations. We first subtract the 2nd equation from the 1st, and we next solve (8-18). This constitutes the proof that the relationship between the curvature and the energy density and pressure that we have been hinting at is an exact consequence of time-varying curvature. Comparing with (8-11), we find that k₁ = k₂ = γ_h/2, which means that the curvature is dependent only on the sum of the vacuum energy and the pressure. In fact, we will later discover that all physical quantities depend only on that sum. (We note that no such relationship exists in the standard model because it does not contain the necessary off-diagonal component in the metric; i.e. in the FRW model, γ_h = 0.) As a matter of terminology, instead of always writing out "vacuum energy density plus the pressure", we will often use the shorter "vacuum energy density", which we intend to mean the same thing.
Based on our original contention that k₁, k₂ were constants, it follows that γ_h will also be a constant. That, however, is still an assumption. Nevertheless, proceeding with that assumption, we will find a complete solution of the equations which demands that γ_h is indeed a constant.

That being the case, h(ct, 0) is also a constant, and it follows that the RHS of (8-14) vanishes. We now assert that a substitution reduces the equations to a single non-linear differential equation. What we find is that with time-varying curvature, there must be an acceleration of the scaling. We emphasize that this is a prediction of the model. In contrast, the standard model does not actually make any prediction at all. Instead, the so-called prediction of an accelerated scaling results from curve fitting rather than from any fundamental constraint imposed by the structure of the model. Put another way, the standard model claims an accelerated scaling after the fact of the luminosity distance observations, whereas the new model predicts an accelerated scaling without any reference to luminosity distance or any other observation.
We now wish to fix the unknown constants, and it will be useful to make two additional definitions. First, we define the constants k₀ and γ*. In terms of these, we have the solution for the scaling, and we see that the scaling is power law at early times. For the value we will explain shortly, we have k₀ = 1.414. We note that this result supports our earlier assumption that γ_h is a constant.
The resulting curves for the scaling parameter and the scaling are shown in the accompanying figure. The effective scaling parameter is essentially constant up until about 1% of the present age of the universe and then gradually approaches an exponential with increasing time. The middle chart shows the actual scaling, and the lower shows the last two decades in more detail. We see that even though the effective scaling parameter is increasing rapidly, the actual scaling does not differ greatly from 2/3rds scaling over that time range.
Next, we show the Hubble parameter in Figure 4. We see first that the Hubble parameter increases with increased look-back time (or redshift). It is a constant power-law curve for times earlier than ct < 0.1 ct₀ but is very non-linear for more recent times.
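For the power-law portion of the curve, the behavior follows directly from a(t) ∝ t^γ, which gives H = ȧ/a = γ/t, so H grows without bound at earlier times. A minimal sketch (assuming, for illustration only, γ = 2/3 and normalized units with t₀ = 1):

```python
# Hubble parameter for pure power-law scaling a(t) = t**gamma:
# H(t) = a'(t)/a(t) = gamma / t, so H increases with look-back time.

gamma = 2.0 / 3.0   # illustrative early-time scaling exponent

def hubble(t):
    return gamma / t

t0 = 1.0                         # present age in normalized units
for t in (t0, 0.1 * t0, 0.01 * t0):
    print(f"t = {t:5.2f}  H = {hubble(t):7.2f}")
# H at t = 0.01*t0 is 100x its value at t0 in this normalization.
```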
We now turn to the energy density and pressure. The first thing we note is that there is a constant contribution to both, reminiscent of a cosmological constant, with a value of 6.8 × 10⁻¹⁰ J·m⁻³. This contribution, however, has no physical significance and just amounts to a redefinition of what we mean by zero energy and pressure. A constant energy or pressure everywhere is the same as no energy or pressure at all. In any case, we could simply eliminate it by adding a compensating constant. Ignoring this constant value and using the scaling parameters just determined along with a value of k₀ = 1.41, we calculate the energy density and pressure shown in Figure 5. The time-varying curvature thus predicts that there is pressure and that both it and the energy density vary as t⁻² up until shortly before the time of galaxy formation, at which point the pressure begins a rapid decline and eventually becomes negative.
Returning to the point about physical quantities, we now want to consider the motion of a test particle with 4-velocity u^μ. The particle's geodesic equations are built from the connection coefficients. The significant point here is that the connection coefficients are dependent only on the metric functions, and because h(ct, r) is the solution of a differential equation that does not contain either the vacuum energy or the pressure separately, it, like the curvature, is a function only of the sum of the energy and the pressure. The result is that the motion of the test particle is, in turn, dependent only on that sum. Thus, while we can talk about the vacuum energy and pressure as distinct quantities, only their sum is of physical significance. If we actually work out the geodesic equations, we find that the ct and r equations are rather long. The angle equations, however, are short. The θ equation shows that the derivative of u^θ is proportional to u^θ itself, with a similar equation holding for u^φ. We see now that if at any point on the test particle's trajectory u^θ = u^φ = 0, the corresponding velocity derivatives vanish, so on any such trajectory the angles are constant, which is a reflection of the lack of off-diagonal metric components connecting the angle and time coordinates.
At this point, we will pause to compare two predictions with currently accepted values. First, we note that the predicted present-day value of the sum differs from the currently accepted dark energy density (6.3 × 10⁻¹⁰ J·m⁻³) by no more than a factor of 3. We can also compute the total energy to find E_total = 7.5 × 10⁷⁰ J, which is smaller than the value in (2-1) by a factor of about 4.
We thus find that the vacuum energy as determined by the exact solution of Einstein's equations can account for two of the properties of spacetime that are considered mysteries in the standard model.
The radius of curvature, defined earlier in (5-1), is C_R = a(ct)/√(k(ct)), which varies linearly with time as we determined earlier.
At this point, we need to raise an issue concerning ordinary matter. The solution presented is correct and, in particular, the predicted scaling is correct. What is missing, however, is an understanding of the contribution from ordinary matter. This potentially becomes significant during the latter stages of the evolution of the universe, but we need further development before we can address this issue.
We will now establish an upper limit on k₀. Referring back to (8-26a) and (8-27), we see that for any γ*, there is a maximum value of k₀ above which there is no value of γ_h that can realize that value of γ*. This condition is expressed by the requirement that (8-29) must yield a positive, real number, and thus the limiting value of k₀ is given by the vanishing of the radical, which, with γ* = 0.5, explains the value we have been using. Later, in fact, we will give observational evidence that k₀ always has this maximal value and thus is not an adjustable parameter but instead is a prediction.
We next wish to make contact with our inflation model. Earlier, we asserted that the self-interaction of the curvature would result in a slower than 2/3rds expansion, which we have now found to be the case. We can now extrapolate backwards to the inflation to determine the change in the cutoff necessary to account for the different scaling. Without going through the details, by re-running the inflation simulation, we determine that the end of the inflation occurred at a value of τ_I ≈ 4.2. Also, by comparing the energy density at the end of the transition with (8-24c), we obtain a value of k₀ ≈ 6.9. Even though this is larger than the allowed limit, this is really a remarkable result because we are tying together the two ends of the evolution of the universe. At the end of the inflation, the energy density was equal to the Planck energy density, and during the transition phase, the density dropped by a factor of about 10⁻⁶, and yet this simple model of the transition yields a curvature that exceeds the upper limit by less than a factor of 5. The adjusted inflation is shown in Figure 6.
This result also suggests that as a general principle, the curvature always has its maximum possible value and we will later find evidence for this when we examine the luminosity distance data in the next section.
The radial coordinate of a received signal satisfies the integral form of a nonlinear differential equation for r. In this case, r is defined with respect to the source, so r(ct_e, ct_e) = 0. The present-day redshift is then determined from this solution. Note that these reduce to the FRW formulas when h = 0 and k = constant.
You will recall that according to the convention we adopted, the radial coordinate is dimensionless and specifies any location as a fraction of the scaling. It thus has the limits 0 ≤ r ≤ 1. We also see what appears to be a singularity in the metric at r = 1/√(k(ct)) whenever k ≥ 1, which with the maximal curvature will always be the case. We can now understand the nature of this apparent singularity. From (8-35a), it appears that the time interval corresponding to an infinitesimal increase in the radial coordinate becomes infinite at that value of r. In other words, for sources at or beyond that coordinate limit, photons would require an infinite amount of time to reach the observer, which means that they are not visible. Thus, although any observer would know that there must be a universe lying beyond this horizon, the field equations describing the observer's perception of the universe only retain validity out to the limiting value of r = 1/√(k(ct)). The corresponding actual proper distance would be a(ct)/√(k(ct)), since the appropriate scaling is that of the hyperspace at time coordinate ct.
We will refer to this as the horizon distance to avoid confusion with other definitions of related concepts. This brings us full circle back to the radius of curvature of (8-33); the horizon distance and the radius of curvature are the same thing. The meaning of this distance is that it is the proper distance between a source and observer (at time t) that are just beyond the limit of being able to influence each other, assuming that each emitted a signal at time t = 0. As we will see shortly, however, this result is an oversimplification, and the actual limit on our ability to detect distant sources is slightly less. The horizon distance is a different concept from the limit on communication, since the latter requires an exchange of multiple signals within a meaningful period of time and so is much smaller.
We noted earlier that because the metric components are functions of both t and r, the universe will not appear homogeneous to an observer even though each hyperspace is homogeneous. It is natural then to ask to what degree and in what manner the universe will not appear homogeneous. To answer this question, we will calculate the radial coordinate and redshift using the above equations. Rearranging (8-35a) and introducing a dimensionless time variable gives an equation which can be solved using the 4th-order Runge-Kutta method. Working from the point of view of the source, the initial condition involves h(ct, r), so for the moment, we will assume that it has the constant value given by (8-17).
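To illustrate the numerical approach, here is a generic 4th-order Runge-Kutta sketch. It does not reproduce the paper's equation (which is not written out above); instead, as a stand-in, it integrates the constant-curvature FRW photon equation dr/dt = 1/a(t) with a(t) = (t/t₀)²ᐟ³, chosen because the result can be checked against a closed form:

```python
# Classic 4th-order Runge-Kutta applied to an illustrative photon equation
# dr/dt = 1/a(t) with a(t) = (t/t0)**(2/3). This is a stand-in for the
# paper's own equation, which is different and not reproduced here.

def rk4(f, t, r, h):
    """One RK4 step for dr/dt = f(t, r)."""
    k1 = f(t, r)
    k2 = f(t + h / 2, r + h * k1 / 2)
    k3 = f(t + h / 2, r + h * k2 / 2)
    k4 = f(t + h, r + h * k3)
    return r + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

t0 = 1.0
f = lambda t, r: (t / t0) ** (-2.0 / 3.0)   # 1/a(t)

# Integrate from emission time te forward to the present time t0.
te, r, n = 0.1, 0.0, 1000
h = (t0 - te) / n
t = te
for _ in range(n):
    r = rk4(f, t, r, h)
    t += h

exact = 3.0 * t0 * (1.0 - (te / t0) ** (1.0 / 3.0))  # closed-form integral
print(r, exact)   # the two values agree closely
```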
In the following figures, we wish to compare with standard model results. Because in the latter case a redefinition of the radial coordinate is usually done, we cannot easily compare with results in that formulation. Instead, we simply compute the curves with the curvature set to a constant value of k = 1. The time-varying curvature solutions are shown in red and the constant curvature solutions in black.
In Figure 7, the curves are the locus of the radial coordinates of sources that emitted signals at the indicated time that were later received by an observer at the present time.
In Figure 8, we show the computed redshifts for the same set of parameters.
Also shown in blue are the redshifts calculated using the FRW lookback time.
For comparison, we also show the FRW formula for two parameter values. What we find is that there is a considerable difference between the exact and FRW results.
In Figure 10, we compare the scaled angular distance from the two models. In the FRW case, the angular distance is given by the standard formula; in the exact case, it is computed using the scaling given in (8-27) and the coordinate distance and redshift shown in Figure 7 and Figure 8. We again see that there are differences, which very likely have a bearing on the current difficulties in trying to fix the Hubble constant.
We now turn to small values of r_s. We noted earlier that the radial derivative of h must vanish at r = 0 in order to avoid a singularity. We will now examine the equation for small r_s to show that this is indeed the case.
To get a sense of when these effects will begin to be important: the curvature has been decreasing since the initial inflation, but at the time that the effective scaling equals unity, the scaling will begin to outrun the influence of the Big Bang and the curvature will start to increase. The point at which that will happen is, on cosmic time scales, just around the corner.
In summary, we have presented a new model of the expansion of the universe that provides a good match to observations. The only parameters that appear suspect are those that were obtained using an invalid "ruler".

Luminosity Distance
We will turn to the issue of the ongoing observations of the luminosity "distance". In 1998, Riess et al. [4] reported observations of type 1a supernovae that, when interpreted in the context of the FRW model, suggest that there was an observable acceleration of the scaling for values of z ≈ 1, which in turn suggested the existence of a cosmological constant. Later, in 2016, Nielsen et al. [5] published a new analysis of a much larger data set that cast doubt on the original conclusions. In this section, we will review the data and its FRW interpretation and then consider the situation in light of the new model.
We begin with the definition of the luminosity "distance" of some source. We put "distance" in quotes because the luminosity "distance" is not a distance at all but instead is a model-dependent construct that happens to have the dimension of length. Its usefulness is that it can be both measured and calculated, thus allowing theory and observation to be compared. The definition is d_L = √(L/(4πF)), where L is the absolute luminosity of the source and F is the energy flux arriving at the Earth. Observationally, this quantity is determined by measuring the flux received from a multitude of sources at different distances that are known to have the same absolute luminosity. To calculate this quantity, we start with the formula for the arriving flux, which is F = L/(A(1 + z)²), where A is the area of the sphere centered at the source. In this formula, there are two factors of (1 + z). One of these is the result of the photons being redshifted because of the expansion of the universe, and the other is a consequence of the fact that the arrival rate of the photons is also reduced by the expansion.
Substituting into (9-1) gives d_L = (1 + z)√(A/(4π)). Note that the absolute luminosity cancels when calculating the distance. So far, this formula is valid for any metric. It is when computing the area that the metric becomes involved; for the metric of (8-6), the area depends on the coordinate distance and the scaling. (The curves in Figure 15 must not be considered as accurate representations of the data.)
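The algebra above can be sketched directly (our own illustration; a Euclidean sphere area A = 4πd² is assumed purely to make the check concrete):

```python
import math

# Luminosity "distance" from the definitions in the text:
#   F   = L / (A * (1+z)**2)      arriving flux, with two redshift factors
#   d_L = sqrt(L / (4*pi*F))      definition (9-1)
# => d_L = (1+z) * sqrt(A / (4*pi)), independent of the luminosity L.

def flux(L, A, z):
    return L / (A * (1.0 + z) ** 2)

def luminosity_distance(L, F):
    return math.sqrt(L / (4.0 * math.pi * F))

# Concrete check with an assumed Euclidean area A = 4*pi*d**2:
d, z, L = 2.0, 0.5, 123.0     # arbitrary illustrative numbers
A = 4.0 * math.pi * d ** 2
d_L = luminosity_distance(L, flux(L, A, z))
print(d_L)   # equals (1+z)*d = 3.0; the absolute luminosity L cancels
```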
What we see is that the last four data points indicate that the upward curvature of the data at large redshifts is less pronounced than indicated by the single Riess data point, thus casting doubt on the conclusion concerning a cosmological constant. It is also apparent from this graph and the previous one that the deviation of the single Riess data point from the "no acceleration" line is not unlike the scatter in some of the other data points at smaller redshifts; for example, at z = 0.43. Keep in mind too that a larger distance modulus just indicates that the radiation is dimmer than expected and so could be the result of some unidentified mechanism that absorbed or scattered the light along the line of sight to that particular source. Comparing the two model results, we see that the larger k₀ curve gives a noticeably closer fit to the data, and from this we conclude that the curvature is as large as it can be, so from this point on, we will take it as a general principle of the model that the curvature always takes on its maximum possible value. We will now turn to the problem of the origin of ordinary matter and the CMB. Observations, however, only reach back so far. What this means is that, at least with respect to nucleosynthesis, everything leading up to that initial population of neutrons and protons is conjecture. In other words, there is no observational evidence that the standard inflation/quantum field theory model of the pre-nucleosynthesis period actually happened.

Asymmetry, Ordinary Matter, and the CMB
In this and the next section, we will present an alternative model that leads to the same nucleosynthesis starting point and in addition, accounts for the matter/ antimatter asymmetry of the universe.
We will first establish some basic parameters that will give us a framework for the arguments that follow. The matter in the universe, as is well known, exists in long, relatively thin filaments of galaxies which contain about 94% of the mass of the universe and which surround voids that make up about 80% of the volume of the universe and contain the remaining 6% of the total mass. Simulation results are shown in Figure 18. As an aside, we will show in Sec. 16 that this filament structure is a consequence of vacuum energy structures that originated during the initial inflation.
First, we must separate the 20% of the volume that contains 94% of the matter from the voids, where the density is far lower. (Figure 18. Cosmic web, NASA [6].) In order to create these particles, the energy density of the source must have been at least as great as their rest-mass energy density which, using the neutron as the archetype, is 1.35 × 10⁴⁴ J·m⁻³. Equating this to the vacuum energy density (8-24) will allow us to fix the point in time, denoted by t_n, at which the primary particle creation must have ceased; that is, provided we know the scaling parameter. This, however, we can determine from the energy density of the CMB. We can now calculate the various quantities of interest assuming a present-day particle density of 2 m⁻³ in those regions where nucleosynthesis was significant.
The results are shown in Table 1. Of course, we haven't created the particles or radiation yet but these will be their densities when we do.
Looking at these numbers, we see that the radiation energy is about 0.1% of the vacuum energy density and that the particle energy density is vastly smaller even when the rest mass is included. This clearly reinforces the idea presented earlier that the scaling of the universe is entirely a consequence of the time-varying vacuum energy density. We also see that the temperature is about a factor of 10 less than the standard model temperature and that the ratio of particles to photons is very small.
We now want to characterize the possible scenarios leading up to the starting point of nucleosynthesis. The problem is not only to account for the values just discussed, but to account for the matter/antimatter asymmetry of the universe. Since we start with a vacuum and end up with both particles and radiation, there are three possibilities, as shown in Figure 19 (Possible nucleosynthesis scenarios). What we will show is that a scenario of type (a) cannot explain the matter/antimatter asymmetry.
Scenarios of type (b) could explain the asymmetry but suffer from a number of problems that render them very unlikely. This leaves the last type as the one most likely to be correct. We want to emphasize that the big jump is to go from vacuum to matter or, in other words, from nothing to something.
Whether the something is radiation or particles is really a secondary issue since we have no idea of how the vacuum could accomplish either. We can only say with certainty that it did happen.

Scenario (a)
The standard model is an example of this type in which it is assumed that vacuum energy underwent a transition into radiation that eventually transitioned into the mix of particles and radiation via processes described by quantum field theory.
The main point in this case is that photons are matter/antimatter neutral so even if the vacuum had an asymmetry, such an asymmetry could not have been imprinted on the radiation. Likewise, quantum field theory is also matter/antimatter neutral, at least at a level that can be detected via experiments, so it follows that there is no mechanism by which an asymmetry with a single "sign" could have been created on a large scale.
We might imagine, however, that locally some asymmetry could have been introduced via random fluctuations. Here now is an essential point: because of the finite speed of light, there was no communication over distances larger than 10^4 m at time t_n and thus correlations of matter vs antimatter could not have extended over any region larger than that dimension. Further, the state of each such cell would have been random, not just with respect to its "sign" but also with respect to its percentage of asymmetry.
We can consider two limiting cases. In the first, let us assume that each entire cell was randomly either all matter or all antimatter. If we assume that the initial energy density of the particles was the same as that of the radiation, we would have started with a particle density of roughly 10^41 m^−3, so the final density would have been no greater than 10^25 m^−3, which is vastly smaller than the value of 10^33 m^−3 indicated by the present-day particle density.
The other limiting case is that in which matter and antimatter particles were created at random. In that case, annihilation would immediately have reduced the density within each cell to a vastly smaller value. The conclusion is that no scenario such as the standard model that begins with radiation will be able to explain the asymmetry.
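In the symmetric random case, the excess surviving annihilation in a cell of N creation events is of order √N, so the surviving fraction is of order 1/√N. This scaling is easy to illustrate with a short Monte Carlo sketch (purely illustrative; the particle numbers here are tiny compared with the cosmological ones):

```python
import numpy as np

rng = np.random.default_rng(1)

def surviving_fraction(n_particles, trials=1000):
    """Fraction of particles left after pairwise annihilation when each
    of n_particles is independently matter or antimatter with
    probability 1/2 (only the net excess |matter - antimatter| survives)."""
    matter = rng.binomial(n_particles, 0.5, size=trials)
    excess = np.abs(2 * matter - n_particles)
    return excess.mean() / n_particles

# The surviving fraction falls off as roughly 0.8/sqrt(N): a symmetric
# random process leaves almost nothing behind.
for n in (10**4, 10**6, 10**8):
    print(f"N = {n:.0e}: fraction {surviving_fraction(n):.1e} "
          f"vs 1/sqrt(N) = {1/np.sqrt(n):.1e}")
```

Extrapolated to cosmological particle numbers, the residual fraction becomes utterly negligible, which is the point of the argument above.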
We also feel that the field theory model has additional problems. For one, it is just too complicated. It is supposed that the requisite neutrons and protons were the result of a scenario in which radiation evolved into quarks and gluons and then into baryons and leptons, all in a time period of less than 10^−5 s. It wasn't until a time of 10^−24 s, for example, that information could have traveled across the dimension of a neutron, which places severe limitations on any sort of cooperative interaction. Another problem with the quark plasma idea is that such a process would require three-body reactions, which are notoriously slow. The strong force is short range, so in this case 3 relativistic quarks of the correct type would have had to simultaneously occupy a volume no larger than a neutron and with relative velocities small enough that a reaction could take place. With random distributions and velocities, such a condition is extremely unlikely, so the rate of binding into hadrons would have been extremely small. There is also the problem of explaining how the required numbers of each quark type could have randomly formed out of the radiation with no quarks left over.

Scenario (b)
The second scenario assumes that the particles and radiation coalesced simultaneously and directly out of the vacuum at a time at or near t = t_n. The asymmetry problem can be solved in this case, but it suffers from the lack of a mechanism that could account for the particular mix of protons, neutrons, and photons necessary for the subsequent nucleosynthesis. In other words, it is too complicated to be correct.

Scenario (c)
In this case, it is assumed that particles coalesced out of the vacuum without any initial accompanying radiation. It further simplifies matters considerably if only a single particle type was created with the obvious candidates being neutrons and/or antineutrons.
Suppose for the moment that spacetime had the property that it could only form neutrons or antineutrons but not both. The asymmetry problem is then solved by fiat, and it is also possible to account for the radiation as the result of subsequent reactions.
A second option is that both neutrons and antineutrons were created in nearly equal numbers. In this case, the source of the CMB radiation was annihilation.
Initially, each such photon would have had an energy equal to 939 MeV, but these would have evolved into a thermal spectrum as a result of scattering off the charged particles that soon came into existence. In order to account for the radiation energy density, the initial number of original particles must have been much larger than the number that survived.
What we learn from all this is that no symmetric random process can account for the present-day particle density of matter, so we conclude that the process that initiated the existence of matter must have been a biased random process, and the only agent that could have been responsible for that is the vacuum.
Going further, the action of this bias must have manifested during the creation process of the primary particles because all the subsequent reactions are matter/antimatter neutral.
Let us assume that in the creation process, the probability of creating a neutron is p and that of an antineutron is q = 1 − p. The theory of the biased random walk then tells us that the expected excess of neutrons over antineutrons in a cell of N creation events is N(p − q), with a standard deviation of 2(Npq)^1/2. To be clear about this, the bias must have been the same, or nearly the same, everywhere in order for the end result to have been either all matter or all antimatter rather than a mix. We also see from the very small size of the variance relative to the number of particles that all the cells would have finished up with essentially the same number of particles. This is significant because nucleosynthesis proper is sensitive to the initial particle densities. What we find is that a very small asymmetry in the "fabric" of the vacuum can account for the necessary matter/antimatter asymmetry and further, there does not appear to be any other mechanism that can account for it. This is the first indication that the structure of the vacuum is far more complex than is generally thought.
Going back to the standard model, now that we know the magnitude of the bias, we can ask whether there could be such a bias in the quantum field theory of scenario (a). The answer to that is no because, although the bias is small, it is not so small that it would have escaped notice in present-day experiments. Further, in order for such a bias to manifest itself, the scenario would have to follow along the lines of the "only neutrons" model. A small bias in the field theory cannot directly account for the very small particle/radiation ratio since it would require a huge bias to create nothing but matter. Thus, the initial radiation would have had to have first converted almost entirely into matter and antimatter with populations reflecting the small bias, followed by subsequent annihilations that would have rebuilt the radiation, and further, this small bias would have to have been the same everywhere.
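The biased-random-walk statistics invoked here (mean excess N(p − q) and standard deviation 2√(Npq) for N creation events) can be checked numerically. In this sketch the bias value is hypothetical, chosen only so that N(p − q) comfortably exceeds the √N fluctuation scale:

```python
import numpy as np

rng = np.random.default_rng(2)

N = 10**8            # creation events per cell (illustrative)
bias = 1e-3          # hypothetical bias, p - q
p = 0.5 + bias / 2   # probability of a neutron; q = 1 - p

# Simulate 100 independent cells; after annihilation only the
# neutron/antineutron excess survives in each cell.
matter = rng.binomial(N, p, size=100)
excess = 2 * matter - N

print("predicted mean excess:", N * bias)
print("simulated mean excess:", excess.mean())
print("predicted std dev    :", 2 * np.sqrt(N * p * (1 - p)))
print("simulated std dev    :", excess.std(ddof=1))
```

Because the bias (10^−3) is well above the fluctuation scale 1/√N = 10^−4, every cell ends up with a matter excess of nearly the same size, which is the uniformity argument made in the text.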
Having proposed that a slightly biased spacetime can account for both the present-day density of particles and the CMB, we next need to demonstrate that an all-neutron/antineutron beginning can account for the subsequent formation of the light elements.

Neutron Nucleosynthesis
In this section, we will examine a model of nucleosynthesis based on the idea that neutrons and antineutrons formed directly out of the vacuum energy of spacetime. Surprisingly, there is actually a hint that this idea has merit from the results of experiments conducted over the last 25 years that are attempting to nail down the lifetime of free neutrons. The article by Greene and Geltenbort, [10], provides a concise review of the situation. These experiments are of two types. One is known as the "Bottle" approach and the other as the "Beam" approach. The "Bottle" approach measures the lifetime by counting the number of neutrons remaining in a "Bottle" as a function of time. This approach makes no attempt to identify the decay products or even the mechanism of the decay. The "Beam" approach, on the other hand, counts the protons that result from the expected β decay of the neutrons. What is known as the neutron enigma is the fact that neutron lifetime measured by the "Bottle" approach (878.5 s) is a bit shorter than that measured by the "Beam" approach (887.7 s) which indicates that there is some as yet unknown decay path that allows roughly 1% of the neutrons to simply disappear without leaving behind a proton. Since a neutron cannot decay into any other baryon, it would seem that the decay violates the conservation of baryon number along with a few other conservation laws. But a violation of the conservation of baryon number is exactly what is needed to account for the bias that is needed to explain the matter/antimatter asymmetry.
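The roughly 1% figure quoted above follows directly from the two lifetimes: if the bottle experiments measure the total disappearance rate and the beam experiments measure only the β-decay (proton-producing) rate, the unexplained branch is 1 − τ_bottle/τ_beam. A quick arithmetic check:

```python
# Lifetimes quoted in the text (seconds)
tau_bottle = 878.5   # total disappearance lifetime ("Bottle")
tau_beam = 887.7     # beta-decay lifetime ("Beam")

# With Gamma_total = 1/tau_bottle and Gamma_beta = 1/tau_beam, the
# fraction of neutrons vanishing without leaving a proton is
missing_branch = 1.0 - tau_bottle / tau_beam
print(f"unexplained branch: {missing_branch:.2%}")   # about 1%
```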
In this new model, the particles and nucleosynthesis reactions are, of course, the same as those of the standard model but the initialization process was quite different. We also must recognize that the standard model seems to give a reasonable account of the final particle distributions. This means that the new model must account for a similar distribution of particles and radiation going into nucleosynthesis proper.
We start with neither radiation nor protons, so the first problem is to account for their existence. We have already asserted that the radiation was the result of annihilation, but we must also account for a significant number of protons. The rate equations are straightforward to write down. The basic rate equation for any particle "i" can be written as

dn_i/dt = Σ_r R_r^(+) − Σ_r R_r^(−)

where the terms on the right are sums over the reaction rates that increase and decrease the count of particle "i" respectively. One of the nice features in the thermal case is that the reaction rate formula provides a clean separation between the lab and CM reference frames. For non-thermal particles, things are not quite so tidy. Our starting point is the usual reaction formulation with a Maxwell-Boltzmann distribution for the thermal particle and an unknown distribution function for the fast particle,

R = n_t n_f ∫ f_t(v_t) f_f(v_f) σ(v) v d^3v_t d^3v_f.

Here, v is the magnitude of the relative velocity of the reactants. The first simplification we make is to ignore reactions between two fast particles. The population of fast particles will generally be smaller than the density of thermal particles, so this is a reasonable approximation, especially considering the fact that very few of the reactions could involve two fast reactants. With this restriction, since the fast particle velocity will always be much larger than the thermal velocity, we can drop the thermal contribution to the relative velocity, which allows the integrations to be separated. With a change of variable, the rate becomes a product of the particle densities and an average of σv over the fast-particle distribution. The next step is to establish some sort of model for the fast particles. Obviously, we can't track the actual velocities of the particles, so instead we divide the energy range of each type of particle into a number of bins and consider each bin to represent a single particle type that has a single fixed energy. This is equivalent to assigning to each type of fast particle a distribution function which is constant within its bin and zero everywhere else.
As nucleosynthesis proceeds, the numbers of each type of fast particle will change but their energies will not.
With this approximation, the rate for a reaction between a thermal particle and fast particle i becomes

R = n_t n_i ⟨σv⟩_i

where n_i is the number density of fast particle i and ⟨σv⟩_i is the bin-averaged cross section times velocity. For photons, a similar argument yields an analogous expression with v replaced by c. The principal difficulty with this model is that there is no clear separation between the lab and CM energies as there is when both particles have Maxwell-Boltzmann distributions. We will generally think of the nominal bin energies as CM energies and try to adjust to lab energies when possible, but this cannot easily be done with any rigor because such a transformation for the energies of the outgoing particles would be angle dependent, which in turn would mean that we could not assign the outgoing particles to a single bin. The consequence of ignoring this issue is that the cross sections will be evaluated at energies that might differ by as much as a factor of 2 from the "correct" energy, but since the cross sections vary slowly on a logarithmic scale, and also since the definition of each particle is no better than the width of its bin, such an energy shift will not have a significant effect on the results. When a fast particle is one of the inputs to a reaction, we calculate the input CM energy of the reaction by assuming that the bin energy is the lab energy and using the normal kinematics based on the particle masses. For reactions with two output particles, we determine the output energies using the normal two-particle CM kinematics and then allocate each particle to the bin corresponding to its energy. With three output particles, we calculate the maximum possible energy that each particle could have and then allocate that particle to the bins assuming that each particle has a uniform spread of energy from its maximum value down to zero. Of course, when we speak of a particle, we are actually talking about a huge number of particles of any given type.
As a practical matter, considering the Q values of the reactions and then allowing for the input kinetic energies of the fast particles and energetic photons, we found that fast neutron and proton energies would reach 18 MeV, that alpha particle energies would reach 10 MeV, and that photon energies would reach 35 MeV. The number of bins for each type is somewhat arbitrary. Enough are needed to give reasonable distributions but not so many as to create excessive numerical work or place too great a strain on our lab/CM energy blurring. We found after a few trials that 12 bins each for the neutrons, protons, and alpha particles and 16 bins for the photons seemed to be a reasonable compromise.
We set the low end of the fast particle energy range to be 0.3 MeV and decreed that any particle whose energy dropped below that value was henceforth a thermal particle. The results are not sensitive to the exact value as long as it is not zero. Finally, we tried two models for the bin widths. In one case we used a linear scale so the bins had equal energy widths and in the other case, we used a log scale so the bins had equal widths when plotted on any of the log-log cross section plots. Trials showed that the results were not particularly sensitive to the choice but since a logarithmic pattern better matches the cross section data, that was the option we chose to use.
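The log-scale binning described above can be sketched in a few lines (the helper names are ours; the bin counts and the 0.3-18 MeV nucleon range follow the text):

```python
import numpy as np

E_MIN = 0.3            # MeV; below this a particle is declared thermal
E_MAX = 18.0           # MeV; maximum fast neutron/proton energy
N_BINS = 12            # bins for neutrons, protons, alphas (16 for photons)

# Log-spaced edges give equal bin widths on the log-log cross section
# plots; each bin is then treated as a single particle species with one
# fixed energy (taken here as the geometric midpoint).
edges = np.geomspace(E_MIN, E_MAX, N_BINS + 1)
bin_energy = np.sqrt(edges[:-1] * edges[1:])

def allocate(E):
    """Bin index for a particle of energy E (MeV); -1 means the
    particle has dropped into the thermal population."""
    if E < E_MIN:
        return -1
    return min(int(np.searchsorted(edges, E, side="right")) - 1, N_BINS - 1)

print([round(e, 2) for e in bin_energy[:3]])   # lowest few bin energies
print(allocate(0.1), allocate(5.0), allocate(18.0))
```

A particle losing energy in a reaction is simply reassigned with `allocate`, and anything returning −1 joins the thermal population, mirroring the 0.3 MeV cutoff rule above.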
Having dealt with the model, we will now turn to the cross section data. In Table 2 that follows, we list the reactions that were included in this model along with their Q values. Note that we have not included any particles with mass numbers greater than 7. Although we attempted to locate as many cross sections as possible, in many cases it was necessary to use reaction rate formulas (replacements for (11-4b)) directly. This was not a restriction for the thermal simulations but was a serious hindrance for the "fast" particle simulations because those require knowledge of the cross sections. References to the original sources of the rate formulas are generally given, although in 3 cases we were not able to access the original source so instead took the formula directly from the BBN code.
The ID in the first column is just a reference number that will allow us to refer to any particular reaction. The "Refs" column lists the references to the cross section and rate formula data. The CS and RF columns indicate whether or not we had cross section and/or rate formula data and the last column indicates whether or not the reaction is included in the standard BBN simulation. The reactions in which we had both cross section and rate formula data allowed us to verify our calculations of the reaction rates. The results are generally in good agreement although in some cases, we did find some differences in detail.
Because of the large reaction rates and the fact that the number densities of the different particle types vary by many orders of magnitude, the equations are stiff and cannot be solved using standard Runge-Kutta methods. Instead, we used a predictor-corrector solver known as "Lsoda." This solver was developed at the Lawrence Livermore Laboratory several decades ago. It was originally written in Fortran but was later ported to the "C" language, and both of these versions can be found on the internet. For our purposes, we ported it again to the Microsoft VB.Net platform. This solver has a number of essential features. It automatically switches between Adams-Bashforth and Gear stiff-equation methods and automatically adjusts the step size and method order at each step. Each type of particle requires an equation, so with the bin choices discussed earlier, we end up with 60 simultaneous equations when the fast particles are included. There are no equations reflecting a dependence of the scaling on the radiation or particle densities because, unlike the standard model, the scaling is entirely determined by the vacuum energy density. The critical reactions that regulate the initiation of nucleosynthesis proper are reactions 2 and 20, and the process could not begin until the reaction rates of the two became approximately equal. The cutoff for the deuteron breakup reaction is at an energy of 2.2 MeV. Equating this energy to kT fixes the earliest possible time, but because of the very small particle/photon ratio, the actual beginning of nucleosynthesis occurs somewhat later. Once the thermal photons dropped below this cutoff, they ceased to have any effect on nucleosynthesis.
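The same LSODA algorithm is available in SciPy, and the usage pattern looks like this. The network below is a toy stiff system with hypothetical rate constants, not the actual 60-equation network:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy stiff network: fast reversible exchange A <-> B plus a slow
# drain B -> C.  The (hypothetical) rate constants differ by six
# orders of magnitude, which is what makes the system stiff.
k_f, k_r, k_d = 1.0e6, 1.0e6, 1.0

def rhs(t, y):
    a, b, c = y
    return [-k_f * a + k_r * b,
             k_f * a - k_r * b - k_d * b,
             k_d * b]

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 0.0],
                method="LSODA", rtol=1e-8, atol=1e-12)

# Total particle number is conserved here, a useful sanity check that
# the stiff solver is not losing accuracy.
total = sol.y.sum(axis=0)
print("success:", sol.success, " steps:", sol.t.size)
print("max conservation error:", np.abs(total - 1.0).max())
```

An explicit method such as `method="RK45"` would be stability-limited to step sizes of order 1/k_f on this problem and would take millions of steps, which is the stiffness issue described above.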
In Figure 21, we show the results obtained with thermal particles only and with the present-day particle density given earlier. The resulting lithium ratio is 2.8, which agrees with the known disparity between the BBN results and observation. This is a strong indication that the so-called lithium problem is simply a matter of not including a number of known lithium reactions in the simulation.
We ran simulations for a range of values of the present-day particle density, and the best results seem to be obtained with densities near the value quoted earlier. In Figure 24, we show the results for nucleosynthesis in the voids. The average particle density is much lower than in the matter regions, but it is not zero.
Using a present-day density of 0.016 m^−3, we find that in the voids the protons make up essentially all of the total, with the percentage of 4He less by a factor of about 10 relative to the higher density results. The fractions of the other particle types are generally somewhat larger, although still very small.

J. C. Botke
We will now turn to the problem of the "fast" particles. We originally developed the "fast" particle model to study an "Only neutron" model (no antineutrons). Starting with only neutrons, it is not possible to get anything like reasonable results with just thermal particles. By including the "fast" particles, on the other hand, it is possible to get final particle densities something like the observed values. The problem with this model is that it is impossible to account for the CMB without at the same time ending up with a final total particle density vastly too large.
There is no doubt about the creation of such particles so the primary question is how fast do the "fast" particles thermalize? We won't be able to get anything like definitive results in large part because we are lacking the necessary cross section data. For many reactions, we don't have such data and for others, large extrapolations were necessary. Also, with "fast" particles, the inverse of many of the forward reactions become significant.
Nevertheless, we show the outcome in Figure 25, first with no "fast" particle attenuation and second with 90% attenuation; in the latter case, the results are closer to the thermal-only results. What these results do indicate is that the existence of the "fast" particles could have a significant effect on nucleosynthesis. The fact that these results don't agree with the standard model seems to indicate that thermalization is, in fact, very rapid, but why that is true is not so obvious. First, a high percentage of the protons, neutrons, and 4He are "fast", so scattering between these would not lead to rapid thermalization, and second, there is a significant density of energetic photons, also retarding the thermalization process. Our purpose in showing these results is to indicate that "fast" particles should be considered and that their importance should not be dismissed out of hand. A more careful study of the thermalization process would be needed to settle the question.
A final point concerning nucleosynthesis is that the initial particle density was not the same everywhere so the observed mass ratios are the result of an ensemble average over a spectrum of initial densities which should be incorporated into the model.

Solution Revisited
We made a point of saying during the development of Sec. 8 that the solution was correct but incomplete. The issue is the mass density of ordinary matter. We established that the present-day particle density is about 2 m^−3. The corresponding particle energy density is apparently larger than the vacuum energy density, which indicates that it cannot simply be ignored.
To include this contribution in the equations, we need to add the particle density to (8-7). The next step would be to solve the resulting equations, but if we think about the solution given earlier, the physical quantities such as the scaling, the curvature, and the motion of test particles are functions of just the sum of the energy density and pressure; hence, adding the particle contribution to the sum does not change the solution for the physical quantities since the sum is fixed by Einstein's equations.
It is only when we set about separating the contributions of the energies and pressure that the contribution of the particle energy becomes apparent. The pressure remains unchanged, but what we previously called the vacuum energy density at any point is, in reality, the sum of the actual vacuum energy density (plus the pressure) and the particle mass energy density.
Calculating this separation is easy because we know that the particle number density varies as a(t)^−3. The result is shown in Figure 26, which now replaces Figure 5. The "Total" and pressure curves are unchanged, but we see that, in order to accommodate the particle mass density, the vacuum energy density must be reduced accordingly.
We are now in a position to refute the idea that the formation of galaxies began with small particle density fluctuations, random or otherwise, in an otherwise uniform distribution that is generally assumed to have existed subsequent to nucleosynthesis. The facts are that, as just mentioned, the motion of particles depends only on the sum of the energy densities, and that sum is fixed by Einstein's equations and is independent of the scaling. Thus, any small variation in the particle density in some region will result in an immediate change in the vacuum energy density sufficient to keep the total constant. The result is that the particles in any region will each experience a uniform gravitational field regardless of any particle density variations and hence will not undergo any sort of accumulation. Thus, the accretion model of galaxy formation initiated by small matter density fluctuations is impossible.
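Since the total is fixed by Einstein's equations, separating the vacuum and particle contributions is pure bookkeeping once a(t) is known. A minimal sketch, assuming for illustration the early power-law scaling a ∝ t^{1/2} and arbitrary-unit densities (none of these numbers are the paper's fitted values):

```python
def a(t, t0=1.0):
    # assumed power-law scaling with exponent y = 1/2 (illustration only)
    return (t / t0) ** 0.5

def rho_particle(t, rho_p0=1.0e-3):
    # particle mass-energy density dilutes as a(t)^-3
    return rho_p0 * a(t) ** -3

def rho_vacuum(t, rho_total=1.0):
    # the total is fixed by Einstein's equations, so the vacuum part is
    # whatever the particles leave over
    return rho_total - rho_particle(t)

for t in (0.05, 0.2, 1.0):
    print(f"t={t}: particle={rho_particle(t):.3e}  vacuum={rho_vacuum(t):.3e}")
```

At early times the particle term eats a visible fraction of the fixed total, and the vacuum term is reduced accordingly, which is the adjustment shown in Figure 26.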
Nevertheless, at some level accretion must have taken place but not nearly to the extent that is generally supposed. The accretion involved not just the particles, but the vacuum energy as well and the focal points of the accretion were the result of large-scale variances in the vacuum energy. We will have more to say about this later in Sec. 16 after an examination of the CMB spectrum.

Summary of Parameters
For the remainder of this development, it will be useful to have a summary of the various quantities we have been discussing. The scaling is given by (8-24)-(8-27) with the parameters given there; the scaling curves are shown in Figure 3. A convenient unit of length is the size of a cell, defined by the horizon distance at the time of neutron formation scaled by the subsequent expansion of the universe. These cells, which we will call "t_n" cells, provide a convenient unit for handling various calculations involving sizes and masses.
The next topic we will consider is the nature of so-called dark matter.

Dark Matter
Dark matter was originally proposed to explain the motions of stars and galaxies, which cannot be understood solely on the basis of the gravitational field induced by the visible matter. Since that time, dark matter has become something of a catch-all for any cosmic phenomenon that can't be otherwise explained. In this section, we will show that the vacuum energy we have been discussing can account for these motions, thus obviating the need for dark matter as a separate material entity. In another guise, the belief that dark matter is responsible for the filament structure of the cosmos has become popular. Later, in Sec. 16, we will show that again it is vacuum energy that is responsible. We can sum things up with the following statement: dark matter ↔ vacuum energy.
To make a beginning, we will consider the dynamics of spiral galaxies. In this manifestation of dark matter, the problem to be solved is the disparity between the observed velocity distribution of the stars (see, e.g. [25]) making up the galaxy and the motions calculated on the basis of the distribution of those stars. We get a hint towards the solution to this problem if we subtract the two curves yielding the curve C shown in Figure 29. This suggests that the observed velocity distribution can be understood in terms of normal gravitational motion being carried along by a rotating spacetime.
With this idea in mind, we will now turn to Einstein's equations. Given the distribution of matter and the motion of a spiral galaxy, it is reasonable to model such a galaxy with a stationary axisymmetric metric in its most general form, together with an energy-momentum tensor containing both matter and vacuum contributions. Rather than treating the matter terms as unknowns, we will assume that the matter distribution is known, leaving the unknowns to include the metric functions and the vacuum quantities.
For numbers, we will use the Milky Way as our example. There are really two issues to be addressed. The first is to explain the rotation and the second is to account for the stability of the particle distribution given that rotation. Taking the rotation problem first, any small volume of vacuum energy will respond to the curvature of spacetime in the same way as does a material particle. In the geodesic equations for such a volume, all the connection coefficients vanish in the first two equations, so these equations just state that the velocity components are constant, which they must be given that we assumed a stationary metric.
We find then that the vacuum energy is rotating as a result of inertial frame dragging. Actually, it would be more accurate to say the curvature is rotating but since all physical processes are a consequence of the curvature, it amounts to the same thing.
The geodesic equations for the particles will be exactly the same, so the result will be the same, and putting these results together, we find that the curvature is differentially rotating and that the particles (stars, or galaxies in the case of clusters) are at rest in that rotating curvature. The original motivation for dark matter was to supply the mass thought to be needed to prevent the orbiting stars and galaxies from flying away from their hosts. From this new point of view, there is no issue of them flying away because the stars and galaxies are at rest.
We next calculate the norm of the 4-velocity with (14-7) included. With the metric and energy-momentum tensor so specified, the matter terms apply everywhere that matter exists. Away from the dense regions, the total energy density will be given by just the vacuum, but this won't be the asymptotic vacuum because the equations will prevent the total energy density from dropping immediately to its asymptotic value. The fact that there is a halo of stars outside the galaxy proper will also contribute to the total energy density and help to prevent a rapid drop with increasing distance.
At this point, we would normally solve the equations with the necessary boundary conditions to determine, among other things, the vacuum energy density profile. Unfortunately, we have not been able to accomplish this task with the tools we have at hand. We are up against the same problem we ran into in Sec. 8, namely that although Mathematica does have the finite-element functionality needed to solve non-linear PDE boundary value problems, it can only do so for a certain quasi-linear class of equations, and these equations do not fall into that category. In this case, we are also limited by the huge amount of computer memory needed for the finite-element mesh. This being the case, in order to proceed, we were forced into the use of a more limited analysis to establish the stability.
To achieve this, we will examine the problem from the point of view of Newtonian forces in which the galaxy is assumed to be surrounded by a torus of vacuum energy as illustrated in Figure 30 with the whole thing rotating with the curvature.
As shown in [27], to a good approximation the gravitational potential at a point on the galactic plane due to a circular torus can be written in closed form, and from it follows the force on a test particle at a distance r from the center due to the total equivalent mass of the torus. Introducing two dimensionless parameters simplifies the result. The parameter ζ is not a measure of distance but instead defines the geometry: as ζ gets larger, so does R_v and hence so does the area of the torus.
Finally, the equivalent mass of the torus is given by (14-16). Turning now to the galaxy, the disk accounts for most of the mass, so for simplicity we will ignore the central bulge. From [28], the potential on the equatorial plane of a thin disk is known, and from it we can calculate the force on a test particle. Starting with a density corresponding to a present-day density of 2 m^−3 and leaving behind a residue equivalent to 1 m^−3, a spherical volume with a radius on the order of 57 R_G would have had to have been swept up; compared to this value, a torus radius of 2-5 R_G is quite small. We will show later, however, that accretion was not the primary mechanism by which galaxies were created.
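The torus force used in this estimate can be cross-checked by replacing the torus with a thin circular ring of the same equivalent mass and integrating directly. This is our own numerical sketch in arbitrary units with G = 1, not the closed-form expression from [27]:

```python
import numpy as np

def ring_force(r, M_ring=1.0, R_ring=2.0, n=20000):
    """Radial force per unit test mass on the plane of a circular ring
    of mass M_ring and radius R_ring, by direct summation over ring
    elements (G = 1).  Positive values point away from the center."""
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    dm = M_ring / n
    dx = R_ring * np.cos(phi) - r    # vector from test point at (r, 0)
    dy = R_ring * np.sin(phi)
    d3 = (dx * dx + dy * dy) ** 1.5
    return float(np.sum(dm * dx / d3))

# Inside the ring the in-plane pull is outward, toward the ring, which
# is what lets a surrounding torus counteract the inward pull of the
# disk; far outside, the ring looks like a point mass, force ~ -M/r^2.
print(ring_force(1.0))                      # positive: outward
print(ring_force(100.0), -1.0 / 100.0**2)   # ~ point-mass value
```

Summing this ring force with the disk force gives the net field used in the stability argument above.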
What is most important, however, is that the required vacuum energy density is only about 1% of the equivalent energy density of the galactic matter.
To help clarify this picture, in Figure 32, we show a hypothetical radial distribution of the total energy density for a torus radius of 2R G . This curve would be part of the solution of the Einstein equations were we able to solve them. The horizontal axis in this case is the actual distance from the center of the galaxy.
The blue line represents the equivalent torus mass energy density used in the calculation, and the curved red line is a hypothetical profile illustrating the idea of a smooth decay.
What we have shown is that the necessary stability can easily be obtained and thus a rotating curvature can readily account for the velocity distribution of spiral galaxies. This solution also explains why so-called dark matter always hovers just outside regions containing matter. Vacuum energy exists everywhere but its density is not uniform as we have explained because it is subject to accretion just as is ordinary matter.
Turning now to galaxy clusters, where the idea of dark matter actually originated, to the extent that such a rotating cluster can be treated as a rotating disk, we can apply the same formalism. The only parameter in the model is the mass ratio, and from Table 3 we find that for galaxy clusters this ratio is O(10⁻⁹), so the required vacuum energy density is much smaller than in the spiral galaxy case and, in fact, is not significantly different from its asymptotic value. The fact that the required energy density is very small also leaves abundant room for an adjustment of the geometric factor away from the thin disk model without the conclusion being affected.
We find then that the vacuum energy density can easily account for the observed rotation of galaxies and their contained stars and of galaxy clusters and their contained galaxies.
Dark matter is vacuum energy. Dark matter as a separate material entity does not exist.

CMB Spectrum
We have already discussed the origin of the CMB but did not touch on its spectrum. In this section, we will show that the prominent features of the spectrum for angular sizes greater than 0.1˚ are a consequence of both the existence of superclusters, voids, and even larger structures on the one hand, and the energy uncertainty of the original Planck-sized regions at the end of the initial inflation on the other.
In Figure 33, we show the angular distribution of the CMB anisotropies from [29]. In the lower portion of the figure, we have enlarged a section of the distribution and added a 2˚ circle that gives a reference for the size of physical structures contributing to the spectrum.
For angular dimensions of 2˚ or less, the apparent features are consequences of physical structures. In the range between 2˚ and 45˚, the spectrum does not appear to be associated with any structure but is instead the consequence of the random, scale-invariant variance of the vacuum energy density which was set at the time of the initial inflation. We will refer to this as the Planck variance. The features with sizes of 45˚ and larger appear again to be related to actual structures.
In Figure 34 from [30], we see that the power spectrum consists of a flat region for angles between 6˚ and 45˚ of arc, a large peak centered at about 1˚ of arc, and then a series of lower peaks extending to smaller angles. There is also a hint of a low peak beginning at 45˚ and extending to larger angles, but the error bars are large. The magnitude of the spectrum sets the relative temperature variance at about 10⁻⁵ all across the spectrum. In fact, because the spectrum is proportional to the square of the temperature variance, the difference between the variance at the peak and that of the large-angle portion of the spectrum is less than a factor of 2.5.
The peaks are strongly suggestive of physical structures, so in order to understand them it will be necessary to establish the connection between the size of such structures and the angular size of the resultant anisotropies. Recombination took place everywhere and the CMB radiation fills all space, so it might not be immediately obvious what the interpretation of the angular distribution of the CMB might be. The answer comes from simple geometry and is much simpler than is sometimes suggested in the literature; a discussion of the rather overcomplicated FRW viewpoint is given in [31]. The fact is that we are observing light today that was emitted at time t_rec by a spherical shell of spacetime centered at our location. If we could travel back in time to t_rec, the universe would get progressively smaller but the angular position of all sources would remain unchanged. To fix the angular size of any particular structure at t = t_rec, then, we only need to know our distance to the shell of sources and their size. For the first, we use the results shown in Figure 7, which give the radial coordinate of a source whose light we are receiving at the present time. As we move forward again to the present, the sources get further and further away because of the expansion while their light travels towards us along paths of constant angle until, eventually, we and the light arrive at our present location at the same moment.
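The geometry just described reduces to elementary trigonometry: the angular size of a structure is fixed by its physical size and our coordinate distance to the shell at t = t_rec. A minimal sketch (the numbers are placeholders, not values from Figure 7 or Table 3):

```python
import math

def angular_size_deg(size, shell_distance):
    """Angle subtended by a structure of the given size sitting on the
    spherical shell of sources at t = t_rec (same units for both lengths)."""
    return math.degrees(2.0 * math.atan(0.5 * size / shell_distance))

# a structure 1/60 the shell distance subtends about one degree,
# independent of the units used for the two lengths
theta = angular_size_deg(1.0, 60.0)
```

Because expansion carries sources outward along lines of constant angle, the same ratio, and hence the same angle, applies whether it is evaluated at t_rec or today.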
We will now consider the present-day size of actual structures. Table 3 lists typical dimensions and in Table 4, the corresponding angular sizes are given.
Groups and clusters are roughly spherical in shape so their angular size will be representative of their influence on the CMB spectrum. Superclusters, on the other hand, are not spherical so the effective angular size of any particular structure will depend on its orientation relative to the line of sight to the earth.
There are, however, a great many superclusters, so the orientations should tend to average out.
From the table, we see that galaxies and even clusters are far too small to have any impact on the spectrum within the displayed range of angles. Superclusters and voids, on the other hand, are large enough to account for the peaks and in fact, these are the only known structures that are large enough. We also see from the expanded portion of Figure 33 that the individual, well-defined structures are comparable in size to the largest superclusters which reinforces the same idea.
Of course, not even stars existed at the point in time that the spectrum was fixed so the structures we are speaking of are not their present-day manifestations. Instead, what we are detecting are precursor imprints in the vacuum that later developed into the present-day structures. In the next section, we will develop this idea further.
We will soon show that superclusters and voids do indeed provide a convincing explanation for the peaks in the spectrum but we should mention that there exists a commonly believed alternative which supposes that the peaks are the result of acoustic oscillations of the densities of photons and protons. In order for this to have happened, however, regions of space as large as superclusters would have had to repeatedly pass signals back and forth. A review of Table 3, on the other hand, shows that even the smallest supercluster was 5 times larger than any possible signal distance at that time so the largest angular-sized anisotropy that such a mechanism could account for would be no larger than a cluster and probably considerably smaller. The conclusion is that acoustic oscillations on the scale required to explain the first peaks were not possible.
The 2nd and 3rd peaks have a roughly harmonic distribution relative to the first peak, which suggests that they are reflections of multipole distributions of temperature variances within the superclusters and voids, since even the 3rd peak represents a size still much larger than the largest cluster. These peaks provide evidence that the temperature is nearly uniform over the expanse of the superclusters, since if it were not, these secondary peaks would be much larger. We can also see this in the expanded portion of Figure 33, where a significant fraction of the 1˚-2˚ sized structures appear to have a single temperature.
The same dimensional arguments apply to the voids, with the only difference being that they are cooler than the average rather than warmer. They contribute in the same way to the anisotropy, however, because the spectrum is proportional to the square of the temperature variance (15-5). The latter method seems less likely to be in error, so the difference between these values suggests that the total number of superclusters/voids is closer to 10⁶ than to 10⁷.
Using this number, we ran a number of simulations to determine how the CMB would appear if the temperatures of the superclusters were random. The results are shown in Figure 35. Each rectangle contains 10⁴ superclusters. In the first, the temperatures were selected at random with no spacing between the superclusters. In the second, the temperatures were heavily biased towards the blue and green, again with no spacing, and in the third, a random spacing between superclusters was introduced equal to one quarter of the size of the supercluster, with the resulting voids filled with black. Of course, each rectangle is a particular sample, but because the variance is on the order of 4.1 × 10⁻³, successive samples will appear much the same.
What we find is that none of these looks much like Figure 33. The second rectangle seems to give a reasonable representation of the proportions of temperatures, but the distribution is clearly wrong. None of these shows any tendency towards the very large-scale clustering we see in the CMB. The conclusion is that the clustering of superclusters with a common temperature is not random, which implies that there must exist structure on scales much larger than the size of a supercluster.
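A toy version of the Figure 35 experiment (our own reconstruction; the grid size and temperature spreads are assumptions, not the authors' code) shows why successive random samples look alike: with 10⁴ independent superclusters per rectangle, the scatter of the rectangle-average temperature is only 1/√N of the per-cell spread.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                          # 100 x 100 = 10^4 superclusters per rectangle
spread = 1.0                     # per-supercluster temperature spread (arb. units)

unbiased = rng.normal(0.0, spread, size=(n, n))   # case 1: random temperatures
biased = unbiased - 0.5 * spread                  # case 2: shifted toward cool

# scatter of the rectangle-average temperature across repeated samples
means = np.array([rng.normal(0.0, spread, size=(n, n)).mean()
                  for _ in range(200)])
predicted = spread / n           # 1/sqrt(N) = 1/100 of the per-cell spread
```

The measured scatter of the means tracks the 1/√N prediction, which is why any one rectangle is representative of all of them; what no purely random draw reproduces is the large-scale clustering seen in the actual map.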
We will now turn to the details of the statistical analysis that leads to a description of the CMB spectrum. We begin by working out the spectrum of an ensemble of sources of some fixed size. Our starting point is the Fourier transform representation of the temperature spectrum of some source, and from it the 2-point expectation value, (15-16)-(15-17). Note that this result depends only on the ratio S/R. We now want to apply this result to superclusters. Using values from Table 3, we find that S/R must fall somewhere within the range of 30 to 257. Our procedure was to try various values until we found the one for which the peak of the calculated spectrum best matched the position of the 1st peak of the actual spectrum. As the ratio is changed, both the position and, to some extent, the shape of the peak change. After a few trials, we found that a value of about S/R = 120, which falls near the middle of the range, seemed to provide the best fit.
We next want to plot the predicted curve, but we must first take into account the flat, large-angle background. It will become apparent later that the source of this background, with a value of about 830, extends to all angles, so the peak is actually sitting on top of it. Table 5 gives the numerical values, and Figure 36 shows the curve normalized to the peak value. We see that the resulting curve matches the shape of the observed peak reasonably well. The calculated curve is slightly broader than the actual peak, which is probably a consequence of assuming a spherical distribution for the superclusters. A more detailed model would replace (15-9) with a non-spherical distribution and include integrals over the orientations in the various expectations.
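A quick sanity check of the single-size fit: sources of angular diameter 2R/S contribute power peaking near l ≈ π/θ. The π/θ rule is a standard rough conversion, our stand-in rather than the paper's detailed expectation-value formulas:

```python
import math

def first_peak_multipole(S_over_R):
    """Multipole where sources of angular diameter 2R/S peak (l ~ pi/theta)."""
    theta = 2.0 / S_over_R               # angular diameter in radians
    return math.pi / theta

l_best = first_peak_multipole(120.0)     # best-fit ratio from the text
l_lo = first_peak_multipole(30.0)        # extremes of the Table 3 range
l_hi = first_peak_multipole(257.0)
```

With S/R = 120 this gives l ≈ 190, i.e. an angular scale of about 1˚, consistent with the position of the observed first peak, and the full Table 3 range of S/R brackets it.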
At this point, we recognize that since this result is the spectrum of an ensemble of structures with a single, fixed size, the agreement with the observed spectrum is perhaps fortuitous because superclusters and voids exist with a range of sizes. That being the case, we need to calculate the spectrum for a distribution of sizes. In Figure 37, we display the size distribution of the compilation of 35 superclusters and 36 voids listed in [32]. This list, of course, is not definitive, both because of ongoing observations that add new structures and because of the difficulties involved with measuring the dimensions of structures which are only hazily defined, but it will be sufficient for our purposes. We have included only those structures from the list that consist of collections of galaxies. A few tentative larger structures are also listed which we have not included. Earlier we established that there are around 10⁶ superclusters, of which about 10⁴ contributed to the CMB, so 35 is an extremely small sample. It is also worth noting that only those superclusters and voids with the correct redshift would have contributed to our observed CMB, and that these particular observed superclusters would not be among those that did.
The indicated position of the first peak was taken from the spectrum and as can be seen, that value corresponds very closely with the center of the sample.
The voids appear to have a somewhat narrower range of sizes than the superclusters, but that could easily be just a consequence of the small sample size. We also see that the 2nd peak does not correlate with the size of any structure, which we already determined by reviewing Table 3.
Because the sources are independent, we obtain the ensemble expectation value by combining the intensities rather than the field values of the photons. In this case, the position of the peak is fixed by the distribution of Figure 37, so the agreement with the spectrum is now elevated to a prediction rather than the result of curve fitting. The conclusion is that the ensemble of superclusters and voids is responsible for the primary peak of the spectrum. It would make things easy if we could now apply these same equations to the 2nd and 3rd peaks with an adjusted distribution function, but that is not the case. The issue is that, because these secondary peaks result from comparisons between different regions in a single supercluster, the regions are not uncorrelated, and so the assumptions leading to (15-16)-(15-17) are not valid. These must be extended to encompass a multipole expansion of the temperature distribution within a supercluster.
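The ensemble step can be sketched as a count-weighted sum of intensities. The top-hat disk window below is our stand-in for the paper's single-source spectrum, and the three-bin size distribution is a hypothetical discretization of Figure 37, not data from it:

```python
import numpy as np
from scipy.special import j1

def single_size_power(l, theta):
    """Top-hat stand-in: power from sources of angular radius theta (rad)."""
    x = np.maximum(l * theta, 1e-12)     # avoid division by zero at l*theta = 0
    return (2.0 * j1(x) / x) ** 2

l = np.arange(2, 1500, dtype=float)
radii = np.radians([0.4, 0.5, 0.6])      # hypothetical angular radii
weights = np.array([0.25, 0.5, 0.25])    # hypothetical relative counts

# independent sources: intensities (not fields) add
C_l = sum(w * single_size_power(l, t) for w, t in zip(weights, radii))
```

Summing intensities over the size distribution broadens the single-size peak, which is the sense in which the Figure 37 distribution, rather than a fitted S/R, fixes the predicted curve.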
Finally, we come to the flat spectrum for all angles larger than about 6˚. This flat region is explained in the FRW model by a process involving quantum fluctuations of exotic meson fields during the FRW inflation. But according to this new model, none of that actually happened.
To proceed, we need to derive the spectrum that results from a scale-invariant source in the absence of any structure. If we focus on just a single originally Planck-sized region, we would calculate a peak similar to the peak in the previous section, so finally we have (15-25), which is independent of the size of the Planck regions. We now invoke the essential fact of the scale-invariance, which means that each multipole region acts like a random variable with an expectation value that is independent of the size of the region and so can be described by (15-16)-(15-17) with a single value for δT/T. Table 7 presents the calculated spectrum, and Figure 39 shows both the scale-invariant spectrum and the sum of that with the peak spectrum. We see that the curve drops off for small l at a value l ≈ 4, which corresponds to an angle of 45˚, as indicated in Figure 33. Referring back to that figure, we see that 45˚ is also representative of the largest features in the CMB. While the presence of these is obvious from the figure, their contribution to the spectrum is much less obvious, but they could account for the hint of a peak at the point where the flat spectrum drops off. The error bars are large, however, and because of their size, there are relatively few such structures compared to the superclusters, so the statistically based formalism just developed is perhaps not applicable for calculating their contribution to the spectrum. The flat spectrum between 6˚ and 45˚ indicates that there are no significant structures with sizes lying between those of the extreme structures and the superclusters.
There now remain the issues of explaining the temperature variances implied by the spectrum and the more significant problem of accounting for the very existence of the superclusters and larger-sized structures at a time many orders of magnitude earlier than the time of star formation. This will be the subject of the next section. Here, we will conclude with the problem of accounting for the temperature variance of the flat region. At the end of the initial inflation, the Planck variance was much larger than the observed variance, so some process must have intervened between the inflation and recombination to reduce it. Referring again to Table 3, we see that the vacuum energy completely dominated spacetime during that period, so the reduction must have been a consequence of the vacuum itself.
The energy variance of the new regions would subsequently be reduced by the square root of this number, so we have (15-27). While this is larger than the actual value, it is close, and a small change in the size of the merged region or the time at which the inflation ended, or both, could account for the difference.
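The √N reduction invoked here is easy to verify numerically (the 10% spread and the region count below are arbitrary stand-ins, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000                       # regions merged into one cell (assumed)
samples = 500                    # merged cells to simulate

# per-region fractional energies with a 10% spread
regions = rng.normal(1.0, 0.1, size=(samples, N))
merged = regions.mean(axis=1)    # energy of each merged cell

# fractional spread drops by ~sqrt(N) = 100
reduction = regions.std() / merged.std()
```

Averaging 10⁴ independent regions cuts the fractional variance by a factor of about 100, which is the arithmetic behind reducing the Planck variance toward the observed value.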

Tying Things Together
We will now summarize the picture of cosmology we have developed and then present data that ties these ideas together. The generally held view of the development of the universe is one of accretion initiated by small inhomogeneities in an otherwise uniform distribution of particles at the end of nucleosynthesis.
Once the process started, it is supposed that particles coalesced via gravitational interaction into larger and larger structures. We have already shown in Sec. 12 that such an idea won't work, but the really insurmountable problem with this concept is that it cannot explain the existence of superclusters, much less the much larger structures evidenced by the CMB. As we noted earlier, at the time of recombination, even the smallest superclusters were 5 times larger than the signal distance, so their existence cannot be explained by any process involving accretion, and the problem only gets worse as one goes back in time because, as one does so, structures get larger and larger relative to the signal distance. Accretion won't work, and we have also shown in the previous section that no random process can account for the structures either.
The conclusion we reached was that the existence of all large structures was imprinted on spacetime during the initial inflation and it was this imprint that regulated the creation of neutrons and antineutrons at the time, t n , in such a manner that the resulting distribution eventually developed into the structures we now see.
From this perspective, all large structures were born with more or less their final sizes and masses with accretion playing only a subsidiary role. In fact, we will show that this process was responsible for all cosmic structures and not just the very large.
The three quantities that regulated the distribution of matter were the total vacuum energy, the fraction of that energy that was converted into neutrons and antineutrons, and the fraction of that which determined the ratio by which the number of neutrons exceeded the number of antineutrons. On average across the entire universe, the total energy is given by (8-24), the creation fraction was on the order of 10⁻³, and the asymmetry fraction was on the order of 10⁻⁸.
Another measure of the magnitude of the variances follows from the CMB spectrum. The observed temperature variance is on the order of 10⁻⁵ and differs by less than a factor of 2.5 all across the universe, which is another argument for an origin in which length scales were not constrained by the speed of light.
The variance in the total energy density necessary to explain the spectrum is very small. We determined that the matter/antimatter asymmetry factor always had the same "sign" but that it too varied in magnitude from one place to another. In the regions with the greatest particle density, its value was around 2.4 × 10⁻⁸, whereas in the voids, the factor was on the order of 1.9 × 10⁻¹⁰. We found, then, that the total number of neutrons and antineutrons initially created was much the same everywhere, with a variance no larger than one part in 10⁷, and that the differences between the high-density regions and the voids are almost entirely a result of differences in the asymmetry factor. From observations, we know that the high-density regions tend to be warmer and vice-versa, so these factors appear to be correlated. Referring again to Table 3, we see that, on a logarithmic scale, the Milky Way is actually much closer in size to clusters and even superclusters than it is to the size of a t_n cell. This means that the dimensions that characterize these very small imprint variances are vastly larger than the dimensions that characterized particle creation and nucleosynthesis.
If this were all there was to it, the universe would have ended up with a more or less uniform distribution of matter with no structure; a result that follows from the fact that small variances in matter density alone could not have initiated accretion. That being the case, it follows that the controlling factor must have been largely or wholly a matter of extremely small variances in the properties of the vacuum, and the fact that these variances were smooth on length scales vastly in excess of Lorentz limitations implies that they must have originated during the initial Planck era inflation.
Having reached this conclusion, we will now consider observational data that supports these ideas. What we will show is that the distribution of cosmic structures places significant constraints on possible structure creation models. In Figure 40 and Figure 41, we show the count of cosmic structures as a function of their size and mass respectively. Combining these gives the size as a function of mass, with the result shown in Figure 42. To make these plots, we needed to know the sizes, masses, and counts of all types of structures. The sizes and masses of each type are reasonably well known.
The counts are less reliable and in some cases are just estimates based on local densities. For example, to estimate the total number of dwarf galaxies, we used the fact that there are roughly 50 associated with the Milky Way, while some other large galaxies are thought to have counts as high as 10⁵. Allowing for a range of values and multiplying by the number of large galaxies gave us an estimate of the total. The extreme structures are representative of the apparent 45˚ structures visible in the CMB anisotropy map.
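The dwarf-galaxy bookkeeping is a one-line order-of-magnitude product; the total number of large galaxies used below is our assumption for illustration, not a figure from the text:

```python
# per-large-galaxy dwarf counts: Milky Way low end vs. high estimates
per_galaxy_low, per_galaxy_high = 50, 1e5
n_large = 2e11                   # assumed number of large galaxies (hypothetical)

total_low = per_galaxy_low * n_large     # lower bound on total dwarfs
total_high = per_galaxy_high * n_large   # upper bound on total dwarfs
```

The three-orders-of-magnitude spread in the per-galaxy count dominates the uncertainty, which is why the resulting totals are usable only as order-of-magnitude estimates.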
Referring first to Figure 40, what is remarkable is that, with the exception of the extreme structures, all these structures with their vast range of sizes lie on a power law curve. The extreme structures lie below the curve but this is just a consequence of the finite size of the universe since the maximum count of any structure cannot exceed the number that would fill the universe.
Similarly, in Figure 41, with the exception of stars and again the extreme structures, the mass distribution also follows a power law curve. The extreme structures lie below the curve because of the finite mass of the universe. Stars are the exception because they are obviously far more massive relative to their size than any of the other structures, and certainly in their case, accretion was, and is, a significant factor.
The following formulas for the curves give the corresponding power law coefficients. These are not model predictions but rather parameterized curves adjusted to match the data. We chose to use superclusters as the reference. What we are going to argue now is that these results not only support the notion of a Planck era imprint being responsible for the distribution of structures but also that the imprint is correctly described as a fractal geometry. Concentrating now on Figure 40, there are two model curves shown. The dashed blue line, which is given by (16-4), gives the count of structures necessary to fill the entire volume of space as a function of their size. The extreme structures lie on this curve by definition, but what is more interesting is that superclusters also lie on it, which implies that, in an order-of-magnitude sense, they fill all space. The model line of (16-1), however, is where it gets interesting.
We now want to introduce the idea of fractal dimension. Equation (16-4) is a simple formula that gives the count of objects of a given size needed to fill a 3-dimensional space. Similarly, the number needed to fill a 2-dimensional surface would be (a/s)². We can write this generally as C = r^d.
Here r is the magnification factor and d is the dimension of the space, which in common usage would be an integer. The idea of a fractal geometry is one in which the same general formula holds but the dimension can have any value, not just an integer. But this is exactly the form of (16-1), and from it we learn that the initial imprint that defined all the structures we observe, from stars on up to superclusters, was a fractal geometry with a (box) dimension only slightly greater than one. There are a few consequences that follow immediately. First, not only are fractal geometries non-differentiable, but it has been proven that all non-differentiable geometries are fractal (see e.g. [33]), so this model uniquely satisfies our earlier contention that the Planck era must be described in terms of a non-differentiable manifold.
The second point is that, with a dimension only slightly larger than one, the basis of cosmic structures must be in the form of filaments. Thus, we find that two seemingly unrelated facts, namely the count distribution and the filament structure of space, have a single origin. It is unavoidable that a universe with the counts we observe must also have a filament structure and vice-versa. Either way, it is fractal.
The third point follows from the fact that the scatter away from the line is not large. Remember that the fractal imprint cannot be responsible for more than the initial size and mass of the structures and that these structures would be subject to subsequent gravitational influences from that point on. What the small scatter tells us is that, with the exception of stars, the subsequent interactions had little effect on their sizes and masses or, in other words, that accretion was not the overriding factor in their development. Another fact in support of these ideas is that the volume of background space (1 m⁻³) necessary to form a single star solely by accretion is roughly the volume of a globular cluster.
A fourth point hinting at a common origin of the structures is that they have distinct sizes with no overlap. Put another way, if the structures were purely the result of accretion, one would expect to find a continuum of sizes instead of, in some vague way, the multipole distribution that we observe.
A fifth point is that we again find an equivalence between vacuum energy and dark matter but with a far more detailed understanding of how the filament structure came to exist.
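The box-dimension reading of Figure 40 can be sketched numerically: generate count-vs-size points following C = r^d, add scatter, and recover d as minus the log-log slope. The value d = 1.2 and the size range below are our assumptions for illustration; the paper's fitted dimension is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
d_true = 1.2                                  # assumed box dimension, > 1
size = np.logspace(21, 25, 9)                 # hypothetical structure sizes (m)

# counts follow a power law in (reference size / size), with lognormal scatter
count = (size[-1] / size) ** d_true * rng.lognormal(0.0, 0.1, size.size)

# fit log(count) = -d * log(size) + const; the box dimension is minus the slope
slope, _ = np.polyfit(np.log(size), np.log(count), 1)
d_est = -slope
```

The small scatter of the synthetic points about the line mirrors the small scatter argued from Figure 40: a clean power law over many decades pins down the dimension tightly.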
We will now return to the issue of causality. The expansion of the scaling occurs at every point independent of any influence from any other point. In order to form structures, on the other hand, coordination between different locations is necessary which in a normal situation would imply an exchange of signals.
Given the results of (4-12), the speed of any such signals would be scaled by a_I/t_I = 1 × 10²⁶ m·s⁻¹, which in practical terms is approaching infinity, so the whole concept of normal exchange is probably wrong. What seems more likely is that, because of the uncertainties of time and dimension, different regions had, in some way, effectively zero separation, so a change in one location was a change over a region. This, however, is total speculation at this point, and given our lack of even a framework to work with, we really can't say what process accounted for the formation of structures.
So, at the end, we are back where we started. We need a new understanding of the Planck era to make further progress because it was during that era that the "DNA" that defined the universe originated. What we have learned is that there must have been a Planck era during which an exponential inflation occurred. Not the least of the arguments for that inflation is the fact that without it, the present-day size of the universe would be measured in fractions of a meter. We have seen that during that era, the normal ideas of causality did not apply and that structure developed in the vacuum energy that exhibits a fractal geometry.
We have thus defined a number of constraints that must be satisfied but do not yet have a model of how this all happened.
The fact of a Planck era, however, leaves us with another problem. It has been noted by many people that an expression such as l_P = (ħG/c³)^(1/2) is just a combination of constants, so one is faced with the problem of explaining how these combinations of constants just happen to match up with the reality of the Planck era. It is beyond imagining that the agreement could be just a coincidence, so we are led to the idea that l_P, t_P, etc. are, in fact, the fundamental entities, and that physical constants such as c are properties of the vacuum that derive from these entities, e.g. c = l_P/t_P.
In other words, we are reading the Planck relations the wrong way around. This notion also hints at a solution of the causality problem because, according to our thinking, the Planck quantities were initially subject to uncertainty, so it follows that the value of c, for example, was also uncertain and did not obtain its final (certain) value until after the end of the inflation, when the uncertainties became negligible compared to the age of the universe.
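The reading c = l_P/t_P is easy to check numerically with the CODATA values of the Planck units:

```python
l_P = 1.616255e-35    # Planck length, m (CODATA 2018)
t_P = 5.391247e-44    # Planck time, s (CODATA 2018)
c = 2.99792458e8      # speed of light, m/s (exact by definition)

ratio = l_P / t_P     # should reproduce c
```

The ratio agrees with c to the precision of the tabulated Planck values, which is the arithmetic content of the claim that the Planck quantities can equally well be taken as the fundamental entities.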
We mentioned in the introduction that much effort has gone into the study of non-commutative geometry with the aim of formalizing the notions of coordinate uncertainty and non-differentiable manifolds. While this shows that people have been thinking about the Planck era problem for a considerable period of time, the results so far have been nil as far as any application to Planck era physics is concerned and do not even begin to approach the problem of explaining the existence of the very large (relative to Lorentz limits) and also very smooth structures that must have existed.

Alternate Theories
Over the years, a number of extensions of the original theory of gravitation have been directed towards solving a range of shortcomings of the standard model. As pointed out in [34], these extensions can be grouped into those in which the left-hand side of the equation is modified, for example by the addition of higher order powers of the Riemann tensor, and those in which additional contributions to the right-hand side are included.
Left-hand extensions have, for example, been applied by those seeking to achieve a unification of gravity with the other fields. None of these efforts, however, has achieved any success, which, in our opinion, isn't surprising because we believe that gravitation is fundamentally different from the other fields and that a unified theory is just wishful thinking. There is certainly no observational evidence that any such unification exists.
Right-hand extensions, on the other hand, have led to the development of theories incorporating ad hoc entities such as dark energy (the cosmological constant) and dark matter. These entities are considered to be unrelated, with dark energy distributed uniformly and dark matter distributed in clusters. The models do not explain what these entities are, but calculations incorporating them can be made to match observations by adjusting various parameters. There are a number of problems with such models, however. For example, it is considered a mystery that the magnitude of the cosmological constant is so small. The models also do not explain why dark matter is always found in close association with ordinary matter. As just noted, all the results obtained depend on curve fitting, and it is a serious defect of these models that they do not actually predict anything solely on the basis of the metric and Einstein's equations: by choosing appropriate parameters, any sort of evolution can be obtained.
We will now compare these models with the new one. Leaving aside the problem of the Planck era, what we have shown is that by formulating a model that incorporates time-varying curvature, a significant number of the outstanding problems are solved. For example, the acceleration of the scaling is a parameter-independent prediction of the model that has nothing to do with a cosmological constant or, equivalently, dark energy. In fact, the concept of dark energy in the standard-model sense simply does not exist. What does exist is time-varying vacuum energy whose present-day energy density is predicted to be close in magnitude to that of so-called dark energy, so the smallness of the magnitude is no mystery at all.
We have also shown that so-called dark matter is, in fact, just another manifestation of the same vacuum energy and that its association with ordinary matter is easily explained. Finally, in contrast to the ad hoc models in which any sort of evolution is possible, the new model is fully constrained by Einstein's equations; there are no adjustable parameters, and only one evolution is possible.
This model stands as an alternative to the extended models of gravitation, one that solves many of the outstanding problems while, at the same time, bringing us back to the original concept of gravitation.
So, is this model the final answer? It certainly appears to be closer to the truth than any of the other models thus far proposed, but this paper represents only the starting point of a new direction in cosmology. In particular, as shown in the above reference, gravitational-wave astronomy has the potential to detect small model deficiencies, and with this in mind we plan, in a subsequent paper, to examine gravitational waves within the context of our new model.

Conclusions
In this paper, we present a new model of cosmology based on very few assumptions that completely avoids any type of exotic particle, field theory, or cosmological constant. A considerable number of predictions have been made that are in agreement with observations. Among the highlights, the new model:
1) proposes that the Big Bang began with a Planck era period of exponential inflation driven by uncertainty-principle effects and time-varying spacetime curvature. It is shown that the time variation of the curvature is a decisive factor driving the evolution of the universe and that the present-day structure of the universe had its origin in variances, very small in amplitude but exceedingly large in dimension, that came into existence during the inflation.
2) presents an exact solution of Einstein's equations that predicts an acceleration of the present-day expansion of the universe. The model has no adjustable parameters. The solution reconciles the homogeneity and isotropy of spacelike hypersurfaces with time-varying curvature and produces a number of exact results, including the predictions that the curvature is proportional to the sum of the vacuum energy density and pressure and that the curvature always has its maximum possible value. The model also makes a prediction of the luminosity distance that matches the data and points to a solution of the difficulty researchers are having in determining the Hubble constant.
3) shows that all physical quantities, such as the scaling, the curvature of spacetime, and the motion of particles, depend only on the sum of the vacuum energy, pressure, and particle mass-energy equivalent at any point in spacetime, and that this sum varies with time as t^{-2}, independent of the scaling.
4) proposes an origin of ordinary matter that is in no way connected with conventional field theory. A detailed model of nucleosynthesis is presented that accounts for both the CMB and the matter-antimatter asymmetry. Although it is a minor point, we also show that the so-called lithium problem is actually nothing more than a procedural issue.

5) shows that the phenomena that dark matter was proposed to explain can be readily understood as consequences of the vacuum energy, thereby establishing that dark matter is vacuum energy.
6) proposes a new explanation for the CMB spectrum. We show that the large peaks are a consequence of superclusters and voids and that the large angle flat spectrum is a consequence of energy uncertainties embedded in spacetime at the termination of the initial inflation.
7) shows that the basis for all cosmic structure was a fractal geometry imprint that originated during the initial Plank era inflation.
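The quantitative claims in items 2) and 3) can be collected in schematic form as follows. The notation is ours, not the paper's: a denotes the scaling, R the curvature, ρ_vac the vacuum energy density, p the pressure, and ρ_m the matter mass-energy density; these are summary proportionalities, not the full solution.

```latex
\begin{align*}
a(t) &\propto t^{1/2} && \text{early power-law scaling (parameter } 1/2\text{)}\\
R &\propto \rho_{\mathrm{vac}} + p && \text{curvature tracks vacuum energy density plus pressure}\\
\rho_{\mathrm{vac}} + p + \rho_{m} &\propto t^{-2} && \text{total source term, independent of the scaling}
\end{align*}
```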