Do Physical Laws/Physics Parameter Constants Remain Invariant from a Prior Universe to the Present Universe?
1. Introduction
The author, Beckwith, is aware of how data compression and organized information transfer from a prior universe to a present universe are often mistakenly conflated with intelligent design. In order to avoid such specious logic, the present paper's inquiry is restricted to the essentials of finding what minimum amount of information transfer from a prior to a present universe is necessary to preserve the minimum structure and character of physical law from the prior to the present universe; i.e., Beckwith has no interest in following in the footsteps of Dr. Tipler. Secondly, the author is fully aware that photons in our present day have no mass. A speculation as to a tiny effective minimum photon mass is presented along the lines of Honig's (1974) [1] document, for times before redshift values of Z = 1100. Before 380 thousand years after the big bang, photon-related cosmological evolution was still proceeding as defined by J. A. S. Lima (1996) [2], which can be summarized, for temperature-related behavior, as photons having number and energy densities specified as
, so that for an instantaneous comoving number of photons, Lima writes
, where
denotes the background temperature, and Lima states that this value
must be a constant. Lima quotes the researcher Steigman in saying that "Unless the number of comoving photons in a comoving volume is constant, a blackbody distribution (of photons) is destroyed as the universe evolves". In addition, Lima's [2] key result can be summarized as follows: even if
has a changing time component, there exists an entropy associated with photons,
so that the following relationship holds for
any Friedmann-style cosmology, namely
,
where the dot denotes a time derivative. This has been tied into nonlinear electrodynamics, as will be remarked upon at the end of our document [3].
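As a numerical aside (not a reconstruction of Lima's equations, which are not reproduced above), the standard blackbody scaling n_γ ∝ T^3 already illustrates the constraint quoted from Steigman: if T ∝ 1/a, the comoving photon number N = n_γ a^3 stays fixed, so a blackbody spectrum survives the expansion.

```python
import math

# Minimal numerical sketch (not taken from Lima's paper): for a blackbody photon gas,
# the number density scales as n_gamma ∝ T^3.  If the temperature redshifts as T ∝ 1/a,
# the comoving number N = n_gamma * a^3 stays constant, which is the condition Lima
# (quoting Steigman) identifies for the blackbody spectrum to survive cosmic expansion.
# SI constants and the 2.725 K normalization are standard values, used as assumptions.

k_B, hbar, c = 1.380649e-23, 1.054571817e-34, 2.99792458e8
zeta3 = 1.2020569

def n_gamma(T):
    """Blackbody photon number density [m^-3] at temperature T [K]."""
    return (2.0 * zeta3 / math.pi**2) * (k_B * T / (hbar * c))**3

for a, T in [(1.0, 2.725), (0.5, 5.45), (0.25, 10.9)]:   # T ∝ 1/a
    print(f"a = {a:5.2f}  T = {T:6.2f} K  n = {n_gamma(T):.3e} m^-3  "
          f"N ∝ n*a^3 = {n_gamma(T)*a**3:.3e}")
```

The comoving combination n_γ a^3 comes out the same for each scale factor, which is the sense in which the blackbody photon count is conserved.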
If what is suggested by Beckwith [4] (2009) with respect to his revision of Y. Ng's counting algorithm [5] is correct, so that with respect to early universe conditions
is correct, i.e.
is also equal to a ratio
of the time derivative of the number of gravitons over the number of gravitons, and this in turn is equal to the time derivative of the entropy of graviton production over the entropy of graviton production at the onset of the universe, then one is, de facto, looking at, for initial conditions,
. (1)
This should be a starting point for the analysis which follows in this paper, i.e. Equation (1), as compared with or larger values at the origins of the big bang, will be a starting point in information/data comparison. Note that if Equation (1) holds, and
grams, then photons may have a tiny mass. All of this can be compared with the reasoning leading to the tiny graviton mass given by Beckwith in his Hindawi publication on the mass of a graviton in terms of space-time dynamics [6], as well as its consequences.
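A short consistency check, under the assumption (following the use of Ng's counting algorithm by Beckwith [4] [5]) that the graviton entropy is simply proportional to the graviton count, S ≈ kN with k a constant:

```latex
% Assumed form of Ng's counting: entropy proportional to particle number, S(t) = k N(t)
S(t) \simeq k\,N(t)
\;\Longrightarrow\;
\frac{\dot S}{S} \;=\; \frac{k\,\dot N}{k\,N} \;=\; \frac{\dot N}{N},
```

so the entropy-production ratio and the graviton-number ratio invoked around Equation (1) coincide whenever the proportionality constant is time independent.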
2. How to Compare Equation (1) with Photon Entropy "Information" and Graviton Entropy Information
We will now begin to try to make an equivalence between
, and Equation (1) above.
This after a time led Beckwith to assign a tiny mass to the graviton, in line with what Honig's paper [1] did for the photon. Note that this present paper, written by Beckwith, is meant to evaluate the minimum amount of INFORMATION transferred from a prior universe to our present one which would permit the same sort of physical laws in the prior universe to hold in our present universe. If the basic physical constants remain the same from a prior to our present universe, then the basic character of physical law will remain invariant. Otherwise, different universe cycles will have different physics. For our own universe, experimental evidence places an upper limit on how much the "constants" could have changed. Broadly, the answer is: at most one percent over the lifetime of the universe, in our present cycle of creation. One nice piece of evidence comes from Supernova 1987a, which was special because it was not very far away. Theory predicts that such a supernova would create about 0.1 solar masses of nickel-56, which is radioactive. Nickel-56 decays with a half-life of 6.1 days into cobalt-56, which in turn decays with a half-life of 77.1 days. Both kinds of decay give off very distinctive gamma rays. Analysis of the gamma rays from SN1987a showed mostly cobalt-56, exactly as predicted, and the amount of those gamma rays died away with exactly the half-life of cobalt-56. For more details, see Neil Gehrels et al. (1993) [7] and Whitelock et al. (1991) [8].
There are three possibilities. The first is that from a prior to a present universe there is essentially the same range of physical constants. The second is that from a prior to our present universe the values of the physical constants varied significantly. The third possibility is that multiple universes existed, i.e. the typical "baby" universes, with a brute "Darwinian selection" criterion as to which universe may or may not have survived, leading to the present cosmos as one of the few lucky survivors of emergence from a prior cycle. If this third possibility is the case, then there would be no need for any data compression to preserve continuity of physical laws. In the article "Quantum entanglement of baby universes", Mina Aganagic, Takuya Okuda, and Hirosi Ooguri [9] elucidate the possibility that the parent (prior) universe generates baby universes by brane/anti-brane pair creation, and that baby universes are correlated by conservation of non-normalizable D-brane charges under the process. This leaves unsaid whether or not there is a selection process favoring the existence of a favored "baby universe" which survived to become our universe, but it offers a mechanism as to how a family of universes could arise.
The author, Beckwith, gave his version of such a hypothesis (2009) [4] in one of his earlier "entropy" articles, as an extension of Penrose's (2007) [10] supposition of a variant of a cyclic universe hypothesis which does not explicitly use branes and anti-branes. This seems to assume that the physical constants are the same. How would we know that? The answer is that we do not know it. Part two of this paper by necessity breaks down the possible outcomes into three cases. The first case would by necessity mandate some form of data compression, for which a methodology is then proposed as to how to conserve the minimum amount of information needed for a 1-1 mapping of physical constants from a prior universe to our present one. The second and third cases may be in sync with the hypothesis of causal discontinuity, as stated by A. W.
Beckwith’s (2008, 2009) [11] where he turned Fay Dowkers [12] hypothesis of causal ordering on its head. And, the issue of how entropy, and its generation from a point of causal break down will be part of a resolution which the author, Beckwith, will present as relevant to determining if or not there is a way to distinguish between LQG and String/Brane theory.
3. Minimum Amount of Information Needed to Initiate Placing Values of Fundamental Cosmological Parameters, as Opposed to the Baby Universe/Darwinian Selection
A. K. Avessian’s article (2009) [13] about alleged time variation of Planck’s constant from the early universe depends heavily upon initial starting points for
, as given below, where we pick our own values for the time parameters, for reasons we will justify in this manuscript:
. (2)
The idea is that we are assuming a granular, discrete nature of space-time. Furthermore, after a time which we will state as t ~ t_Planck, there is a transition to a present value of space-time, which is then probably going to be held constant. It is easy, in this situation, to obtain an interrelationship of what
is with respect to the other physical parameters, i.e. having the values of
written as
, as well as note how little the fine structure constant actually varies. Note that if we assume an unchanging Planck’s mass
, this means that G has a time variance, too. This leads to us asking what can be done to get a starting value of
recycled from a prior universe to our present universe value. What is the initial value, and how does one ensure its existence? We obtain a minimum value as far as "information" is concerned via appealing to Hogan's [14] (2002) argument, where we have a maximum entropy as
(3)
and this can be compared with A. K. Avessian’s article (2009) [13] value of, where we pick
(4)
i.e. a choice as to how
has an initial value, and entropy as scale valued by
gives us a ball park estimate as to compressed values of
which would be transferred from a prior universe, to today’s universe. If
, this would mean an incredibly small value for the INITIAL H parameter, i.e. in pre-inflation conditions we would have practically NO increase in expansion just before the introduction of vacuum energy, or emergent field energy, from a prior universe to our present universe. Typically, though, the value of the Hubble parameter during inflation itself is HUGE, i.e. H is many times larger than 1, leading to initially very small entropy values. This means that we have to assume, initially, for a minimum transfer of entropy/information from a prior universe, that H is negligible. If we look at Hogan's [14] holographic model, this is consistent with a non-finite event horizon
. (5)
This is tied in with a temperature as given by
. (6)
Nearly infinite temperatures are associated with tiny event horizon values, which in turn are linked to huge Hubble parameters of expansion, whereas initially nearly zero values of temperature can arguably be linked to nearly nonexistent H values, which in turn would be consistent with
as a starting point for entropy. We must next consider how the values of initial entropy are linkable to other physical models, i.e. whether there can be a transfer of entropy/information from a pre-inflation state to the present universe. Doing this will require that we keep in mind, as Hogan [14] writes, that the number of distinguishable states is writable as
. (7)
If, in this situation, N is proportional to entropy, i.e. N ~ the number of entropy states to consider, then as H drops in size, as would happen in pre-inflation conditions, we will have opportunities for N ~ 10^5.
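Because Equations (3)-(7) are not reproduced above, the following sketch uses the standard de Sitter horizon relations as stand-ins (an assumption, not Hogan's exact expressions) to make the qualitative point numerically: a huge inflationary H gives a tiny horizon, an enormous temperature, and a small holographic entropy bound, while a negligible pre-inflation H gives an effectively non-finite horizon and a near-zero temperature.

```python
import math

# Stand-in relations (assumed, since Equations (3)-(7) are not reproduced in this text):
#   horizon radius        r_H   ~ c / H
#   Gibbons-Hawking temp  T     ~ hbar * H / (2 * pi * k_B)
#   holographic bound     S_max ~ pi * (c / (H * l_Planck))^2   [in units of k_B]

hbar, c, G, k_B = 1.054571817e-34, 2.99792458e8, 6.674e-11, 1.380649e-23
l_P = math.sqrt(hbar * G / c**3)          # Planck length ~ 1.6e-35 m

def de_sitter(H):
    r_H   = c / H                          # event-horizon radius [m]
    T     = hbar * H / (2 * math.pi * k_B) # horizon temperature [K]
    S_max = math.pi * (c / (H * l_P))**2   # holographic entropy bound, in units of k_B
    return r_H, T, S_max

for label, H in [("inflation-scale H (assumed ~1e37 s^-1)", 1e37),
                 ("tiny pre-inflation H (assumed ~1e-18 s^-1)", 1e-18)]:
    r_H, T, S = de_sitter(H)
    print(f"{label}: r_H = {r_H:.2e} m, T = {T:.2e} K, S_max/k_B = {S:.2e}")
# Large H -> tiny horizon, huge temperature, small entropy bound;
# negligible H -> effectively non-finite horizon, near-zero temperature, huge bound.
```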
4. Is Data Compression a Way to Distinguish What Information Is Transferred to the Present Universe?
The peak temperature as recorded by Weinberg (1972) [15] is of the order of 10^32 Kelvin, and that would imply using the expansion parameter H as given by Equation (6) above. Likely before the onset of inflation, on dimensional grounds, it is safe to call the pre-inflation temperature T very low, i.e. there was a buildup of temperature T at the instant before inflation, which peaked shortly afterwards. Such an eventuality would be consistent with the use of a wormhole bridge from a prior to a present universe. Beckwith (2008) [16] at STAIF used such a model for the transfer of energy to the present universe, using formalism from Lawrence Crowell's book (2005) [17].
A useful model as far as rapid transfer of energy is concerned would likely be a quantum flux, as provided for in deformation quantization. We will adopt the following convention for initiating quantization, i.e. the reported idea of Weyl quantization, which is as follows: for a classical
, a corresponding quantum observable is definable via
. (8)
Here, C is the inverse Fourier transform, w( , ) is a weight function, and p and q are canonical variables fitting into
, and the integral is taken over the weak topology. For a quantized procedure as far as refinement of Poisson brackets is concerned, the above Weyl quantization is, as noted by S. Gutt and S. Waldmann (2006) [18], equivalent to finding an operation
for which we can write
. (9)
As well as for Poisson brackets,
, obeying
, and
. (10)
For very small regimes of spatial integration, we can approximate Equation (8) as a finite sum, with
. (11)
What we are doing is to give, de facto, the following approximate numerical value, as follows
(12)
and then we can state that the inverse transform is a form of data compression of information. Here, we will state that
~
~ {information bits for}
as far as initial values of Planck's constant are concerned. Please see Appendix IV as to how, for thin shell geometries, the Weyl quantization condition reduces to the Wheeler-De Witt equation, i.e. a wave functional approximately presentable as
(13)
where R refers to a spatial distance from the center of a spherical universe. Appendix IV is an accounting of what is known as a pseudo-time-dependent solution to the Wheeler-De Witt equation involving a wormhole bridge between two universes. The metric assumed in Appendix I is a typical maximally symmetric metric, whereas Appendix II uses the Reissner-Nordstrom metric. We assume, to first order, that if the value of R in
is nearly
centimeters, i.e. close to singularity conditions, then the issue of how much information passes from a prior universe to our own may be addressed, and that the solution
is consistent with regard to Weyl geometry. So let us consider what information is transferred. We claim that it centers on enough information to preserve
from universe cycle to cycle.
To begin this inquiry, it is appropriate to note that we are assuming that there is a variation in the value of
with a minimum value of
centimeters to work with. Note that Honig's (1974) [1] article specified a general value of about
grams per photon, and that each
photon has an energy of
. If one photon is, in energy
equivalent to 10^12 gravitons, then, if
= Planck’s length, gives us a flux value as to how many gravitons/entropy units are transmitted. The key point is that we wish to determine what is a minimum amount of information bits/attendant entropy values needed for transmission of
In order to do this, note the article "A minimum photon 'rest mass'―Using Planck's constant and discontinuous electromagnetic waves", written in September 1974 by William Honig [1], which specifies a photon rest mass of the order of 3.68 × 10^−48 grams per photon. If we specify a mass of about 10^−60 grams per graviton, then, to get at least one photon, and if we use photons as a way of "encapsulating"
, then to first order we need about 10^12 gravitons/entropy units (each graviton, in the beginning, being designated as one "carrier container" of information) for one unit of
. If, as an example, as calculated by Beckwith (2008) [16], there were about 10^21 gravitons introduced during the onset of inflation, this means a minimum copy of about one billion
information packets being introduced from a prior universe to our present universe, i.e. more than enough to ensure introducing enough copies of
to ensure continuity of physical processes. For those who doubt that 10^−60 grams per graviton can be reconciled with observational tests of the Equivalence Principle and all classical weak-field tests, we refer readers to Matt Visser's (1998) [19] article about "Mass for the graviton". The heart of Matt Visser's [19] calculation for a nonzero graviton mass involves placing appropriate small off-diagonal terms within the usual stress tensor T(u,v) calculation, a development which in certain ways complements what was done by C. S. Unnikrishnan (2009) [20] in his revision of special relativity, in ways which will be described in this document.
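The counting in this section reduces to simple arithmetic on the quoted masses; the sketch below only re-derives the ~10^12 gravitons-per-photon figure and the ~10^9 information packets from the numbers already stated in the text.

```python
# Back-of-envelope check of the counting used in the text (all inputs are the
# values quoted above, so this is an arithmetic sketch, not an independent result).

m_photon   = 3.68e-48   # grams per photon (Honig 1974, as quoted in the text)
m_graviton = 1e-60      # grams per graviton (assumed in the text)

gravitons_per_photon = m_photon / m_graviton
print(f"gravitons per photon ~ {gravitons_per_photon:.2e}")      # ~3.7e12, i.e. ~10^12

n_gravitons_inflation = 1e21          # relic gravitons at the onset of inflation (Beckwith 2008 [16])
info_packets = n_gravitons_inflation / 1e12    # one "packet" per 10^12 gravitons
print(f"information packets ~ {info_packets:.0e}")               # ~10^9, about one billion
```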
5. Entropy, Comparing Values from T(u,v) Stress Energy, Black Holes, and General Entropy Values Obtainable for the Universe
We start off by looking at vacuum energy and entropy. This suggests that entropy scaling is proportional to a power of the vacuum energy, i.e., entropy ~ vacuum energy, if it is interpreted as a total net energy proportional to vacuum energy, i.e. see Equation (10) above. What will be done, hopefully, with proper analysis of
at the onset of creation, is to distinguish between, say, the entropy of what Mathur [21] wrote, as
, and see how it compares with the entropy of the center of the galaxy, i.e. Equation (14), as opposed to the entropy of the universe, as given by Equation (15) below. The entropy which will be part of the resulting vacuum energy will be writable as either black hole entropy or the Universe's entropy, i.e. for black hole entropy, from Sean Carroll (2005) [22], the entropy of a huge black hole of mass M at the center of the Milky Way galaxy. Note that there are at least a BILLION GALAXIES, and M is ENORMOUS
. (14)
This needs to be compared with the entropy of the universe, as given by Sean Carroll, as stated by
. (15)
The claim made here is that if one knew how to evaluate
properly, then the up to 10^9 difference between Equations (14) and (15) would be understandable and could be dealt with directly. So, how does one do this? The candidate picked, which may obtain some commonality in the different entropy formalisms, is to confront what is both right and wrong in Seth Lloyd's entropy treatment in terms of operations, as given below. Furthermore, what is done should avoid the catastrophe inherent in the problem which Mithras [23] gave the author in Kochi, India, that of dS/dt = ∞ at S = 0, as a fault of classical GR which should be avoided. One of the main ways to perhaps solve this will be to pay attention to what C. S. Unnikrishnan [20] put up in 2009, i.e. his article about the purported one-way speed of light, and its impact upon perhaps a restatement of
. A restatement of how to evaluate
may permit a proper frame of reference to close the gap between entropy values as given in Equations (14) and (15) above.
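As a point of comparison for this section (not the paper's Equation (14), which is not reproduced above), the Bekenstein-Hawking formula S/k_B = 4πGM²/(ħc), applied to a supermassive black hole of an assumed 4 × 10^6 solar masses, a commonly quoted figure for the Milky Way's central black hole, gives an entropy of order 10^90-10^91 in units of k_B.

```python
import math

# Minimal sketch, not the paper's Equation (14): the Bekenstein-Hawking entropy
# S/k_B = 4*pi*G*M^2/(hbar*c) for a Schwarzschild black hole of mass M.  The 4e6
# solar-mass figure for the Milky Way's central black hole is an assumption for illustration.

G, hbar, c, M_sun = 6.674e-11, 1.054571817e-34, 2.99792458e8, 1.989e30

def S_BH_over_kB(M_kg):
    """Bekenstein-Hawking entropy of a Schwarzschild black hole, in units of k_B."""
    return 4 * math.pi * G * M_kg**2 / (hbar * c)

M = 4e6 * M_sun
print(f"S_BH / k_B ~ {S_BH_over_kB(M):.1e}")   # of order 10^90 - 10^91
# Compare with the entropy of the universe as given by the paper's Equation (15); the point
# is only to exhibit the kind of order-of-magnitude gap Section 5 is discussing.
```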
6. Simple Relationships to Consider (with Regard to Equivalence Relationships Used to Evaluate T(u,v))
What needs to be understood and evaluated is whether there is a restructuring of an appropriate frame of reference for
and what its resultant effects would be upon how to reconcile black hole entropy. A good place to start would be to obtain
values which are consistent with the slides on the two-way versus one-way light speed presentation of the ISEG 2009 conference [20]. We wish to obtain
values properly analyzed with respect to early universe metrics, and PROPERLY extrapolated to today so that ZPE energy extraction, as pursued by many, will be the model for an emergent field development of entropy. Note the easiest version of
as presented by Wald [24] . If metric
is for curved space-time, the simplest matter-energy stress tensor is (Klein-Gordon)
. (16)
What is affected by Unnikrishnan's (2009) [20] hypothesis is how to keep
properly linked observationally to a Machian universe frame of reference, not the discredited aether, via CMBR spectra behavior. If the above equation is held to be appropriate, and then elaborated upon, the developed
expression should adhere to Wald’s unitary equivalence principle. The structure of unitary equivalence is foundational to space time maps, and Wald [24] states it as being
. (17)
While stating this, it is important to keep in mind that Wald defines [24]
. (18)
We define the operation, where A is a bounded operator and
an inner product via use of
. (19)
Data compression, continuity, and Dowker's space-time sorting algorithm [12].
This is closely tied in with data compression and how much "information" material from a prior universe is transferred to our present universe. In order to do such an analysis of data compression and of what is sent to our present universe from a prior universe, it is useful to consider how there would be an eventual increase in information/entropy terms from 10^21 to 10^88. Too rapid an increase would lead to the same problem ZPE researchers have, i.e. if entropy is maximized too quickly, we have no chance of extracting ZPE energy from a vacuum state, i.e. no emergent phenomena are possible. What to avoid is akin to avoiding [25]
. (20)
Equation (20) is from Giovannini [25], and it states that all entropy in the universe is solely due to graviton production. This absurd conclusion would be akin, in present-day parlance, to having 10^88 entropy "units" created right at the onset of the big bang. This does NOT happen.
What will eventually need to be explained is whether or not 10^7 entropy units, as information transferred from a prior big bang to our present universe, would be enough to preserve
, G, and other physical values from a prior universe to today's cosmology. Inevitably, if 10^7 entropy/information units are exchanged via data compression from a prior to our present universe, Equation (20), and the resultant increases in entropy up to 10^88 entropy "units", will involve the singularity theorems of cosmology, as well as explanations as to how
could take place, say, right at the end of the inflationary era. The author claims that in order to do so, Equation (20), and a mechanism for the assembly of gravitons from a kink-anti-kink structure, is a de rigueur development. We need to find a way to experimentally verify this tally of results, and to find conditions under which the abrupt reformulation of a near-constant cosmological constant, i.e. more stable vacuum energy conditions right after the big bang itself, would allow for reformulation of SO(4) gauge-theory conditions. This is the opposite of what Dowker was presenting [12], which we argue would be the case.
7. What Is the Bridge between Low Entropy of the Early Universe and Its Rapid Buildup Later?
Penrose, in a contribution to a conference (2006) [10], refers on page two of that document to the necessity of reconciling a tiny initial starting entropy of the beginnings of the universe with a much larger increased value of entropy later. As can be read from the article by Penrose (2006) [10]: "A seeming paradox arises from the fact that our best evidence for the existence of the big bang arises from observations of the microwave background radiation…", "This corresponds to maximum entropy so we reasonably ask: how can this be consistent with the Second law, according to which the universe started with a tiny amount of entropy". Penrose [10] then goes on to state that "The answer lies in the fact that the high entropy of the microwave background only refers to the matter content of the universe, and not the gravitational field, as would be enclosed by its space-time background in accordance to Einstein's theory of general relativity". Penrose then goes on to state that the initial pre-redshift-1100 background would be remarkably homogeneous, i.e. for redshift values far greater than 1100 the more homogeneous the universe would become, according to the dictum that "gravitational degrees of freedom would not be excited at all". Beckwith (2008) [16] then asks the question of how much of a contribution the baryonic matter would be expected to make to entropy production. The question should be asked in terms of the timeline of how the universe evolved, as specified by Steinhardt and Turok (2007) on pages 20-21 [26] of their book. A way to start this would be to delineate further the amplitude versus frequency GW plot as given below. It is asserted that the presence of the peak in gravity wave frequency at about 10^10 Hertz has significant consequences for observational cosmology. Finding an appropriate phase transition argument for the onset of entropy creation and graviton production while using the results of Kolb and Turner [27]
(21)
is akin to explaining how and why changes in temperature T lead, if the temperature increases, to an emergent field description of how gravitons arose. We claim that obtaining a physically consistent description of entropy density would be akin to a study, with increasing and then decreasing temperatures, of how the kink-anti-kink structure of gravitons developed. This would entail developing a consistent picture, via SO(4) theory, of gravitons being assembled from a vacuum energy background, and giving definition to Seth Lloyd's [27] computational operation description of entropy. Having said this, it is now appropriate to raise the question of what gravitons/HFGWs may tell us about structural evolution issues in today's cosmology. Here are several issues the author is aware of which may be answered by judicious use of HFGWs. As summarized by Thanu Padmanabhan [28] (IUCAA) in the recent 25th IAGRG presentation he made, "Gravity: The Inside Story", entropy can be thought of as due to "ignored" degrees of freedom, classically, and is generalized in general relativity by appealing to extremizing entropy for all the null surfaces of space-time. Padmanabhan [28] claims the process of extremizing entropy then leads to equations for the background metric of the space-time, i.e. that the process of putting entropy in an entropy-extremized form leads to the Einsteinian equations of motion. What is done in this present work is more modest, i.e. entropy is thought of in terms of being increased by relic graviton production, and the discussion then examines the consequences of doing that in terms of GR space-time metric evolution. How entropy production is tied in with graviton production is via recent work by Jack Ng. It would be exciting if we learn enough about entropy to determine whether or not we can identify null surfaces, as Padmanabhan [28] brought up in his Calcutta (2009) presentation. The avenue of research brought up here is, we think, a step in just that direction. Furthermore, let us now look at large scale structural issues which may necessitate the use of HFGWs to resolve. Job one will be to explain what may be the origin of the enormous energy spike in Figure 1, by paying attention to relic gravitational waves, allowing us to make direct inferences about the early universe Hubble parameter and scale factor ("birth" of the Universe and its early dynamical evolution). According to Grishchuk [29]: the energy density requires that the GW frequency be on the order of 10 GHz, with a sensitivity required for that frequency on the order of 10^−30 δm/m. Once this is obtained, the evolution of cosmological structure can be investigated properly, with the following as targets of opportunity for smart applications of HFGW detectors.
8. How the CMBR Permits, via Maximum Frequency and Maximum Wave Amplitude Values, an Upper Bound for the Massive Graviton Mass m_g
Camp and Cornish (2004) [30], as does Fangyu Li (2008) [31], use the typical transverse gravitational gauge
with a typically traceless value summed as
and off-diagonal elements of
on each side of the diagonal to mix with a value of
. (22)
Figure 1. Self-explanatory. From Subir Sarkar's Bad Honnef 07 talk. Reproduced here with permission of Dr. Sarkar in email communication [36].
This assumes r is the distance to the source of gravitational radiation, with the
retarded designation on Equation (22) denoting
replaced by a retarded time derivative
, while TT means take the transverse projections
and subtract the trace. Here, we call the quadrupole moment, with
a density measurement. Now, the following value of the
as given yields a luminosity function L, where R is the "characteristic size" of a gravitational wave source. Note that if M is the mass of the gravitating system
(23)
. (24)
After certain considerations reported by Camp and Cornish (2004) [30] , one can recover a net GW amplitude
. (25)
This last equation requires that
gravitational radius of a
system, with a black hole resulting if one sets
. Note that when
we are at an indeterminate boundary where one may pick our
system as having black hole properties.
Now for stars, Camp and Cornish (2004) [30] give us that
(26)
. (27)
As well as a mean time
for half of gravitational wave potential energy to be radiated away as
. (28)
The assumption we make is that if we model
, for a sufficiently
well-posed net mass M, that the star formulas roughly hold for early universe conditions, provided that we can have a temperature T for which we can
use the approximation
that we also have
or higher, so that at a minimum we recover Grishchuk's (2007)
[29] value of
. (29)
Equation (29) places, for a specified value of R, which can be set experimentally, an upper bound as far as what the mass M would be. Can this be exploited to answer the question of whether or not there is a minimum value for the graviton mass?
The key to the following discussion will be that
, or larger. (30)
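Since Equations (22)-(30) are not reproduced above, the following order-of-magnitude sketch uses the standard quadrupole estimate h ~ (GM/c²R)(GM/c²r) for a self-gravitating source of mass M and size R at distance r (an assumption, not Camp and Cornish's exact expression); it shows the sense in which a measured strain h and size R bound the mass M, which is how Equation (29) is being used.

```python
# Order-of-magnitude sketch only (standard quadrupole estimate, used as a stand-in because
# Equations (22)-(30) are not reproduced in this text): for a self-gravitating source of
# mass M and size R at distance r, with v^2 ~ G*M/R,
#   h ~ (G*M / (c^2 * R)) * (G*M / (c^2 * r)).
# The source parameters below are illustrative assumptions, not values from the paper.

G, c, M_sun = 6.674e-11, 2.99792458e8, 1.989e30
Mpc = 3.086e22   # metres

def h_estimate(M_kg, R_m, r_m):
    """Characteristic strain from the quadrupole order-of-magnitude formula."""
    compactness = G * M_kg / (c**2 * R_m)          # ~ v^2/c^2 for the source
    return compactness * (G * M_kg / (c**2 * r_m))

# Example: a ~3 solar-mass system of ~100 km extent at 100 Mpc
print(f"h ~ {h_estimate(3 * M_sun, 1e5, 100 * Mpc):.1e}")
# Since h scales as M^2/(R*r), a measured h at known r, for a given R, bounds the mass M,
# which is the sense in which Equation (29) gives an upper bound on M.
```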
9. Inter Relationship between Graviton Mass
and the Problem of a Sufficient Number of Bits of
from a Prior Universe, to Preserve Continuity between Fundamental Constants from a Prior to the Present Universe
P. Tinyakov (2006) [32] gives that there is, with regard to the halo of substructures in the local Milky Way galaxy, an amplitude factor for gravitational waves of
. (31)
If we use LISA values for the pulsar gravitational wave frequencies, this may mean that the massive graviton is ruled out. On the other hand,
leads to looking at, if
. (32)
If the radius is of the order of
billion light-years ~ 4300 Mpc or much greater, then we have, as an example,
. (33)
This Equation (33) is in units where
.
If there are 10^−60 grams per graviton, and 1 electron volt in rest mass is
. Then
. (34)
Then, there exist
. (35)
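For orientation (pure arithmetic on the masses quoted in the text, not a reconstruction of Equations (33)-(35)), the assumed 10^−60 gram graviton and Honig's 3.68 × 10^−48 gram photon correspond to the following rest energies.

```python
# Arithmetic check only, using the masses quoted in the text: converting the assumed
# graviton mass of 10^-60 grams (and Honig's photon mass of 3.68e-48 grams) into
# electron-volt rest energies via E = m * c^2.

c, eV = 2.99792458e8, 1.602176634e-19   # m/s, joules per eV

def grams_to_eV(m_grams):
    return (m_grams * 1e-3) * c**2 / eV

print(f"graviton: {grams_to_eV(1e-60):.2e} eV")    # ~5.6e-28 eV
print(f"photon  : {grams_to_eV(3.68e-48):.2e} eV") # ~2.1e-15 eV
```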
If each photon, as stated above, is
grams per photon, then
initially transmitted photons. (36)
Furthermore, if there are today, for a background CMBR temperature of 2.7 degrees Kelvin,
, with a wavelength specified as
. This is for a numerical density of photons per cubic meter given by
. (37)
As a rough rule of thumb, if, as given by Weinberg (1973) [32], early quantum effects for quantum gravity take place at a temperature
Kelvin, then, if there were that temperature for a cubic meter of space, the numerical density would be roughly 10^132 times greater than what it is today. Forget it. So what we have to do is to consider a much smaller volume. If the radius of the volume is
, then we have to work with a de facto initial volume
, i.e. the numerical value for the number of photons at
, if we use a per-unit volume based upon the Planck length, instead of meters, cubed, is
photons for a cubic area with sides
at
Kelvin. However,
initially transmitted photons! Either the minimum distance, i.e. the grid, is larger, or
Kelvin. We tie in with string theory to resolve the 10^19 difference in the number of photons transmitted from a prior universe to our present one.
Typically, for the minimum length as stated by string theory, we have
. (38)
Here, we either have
, or
Kelvin. (39)
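The switch in this section from photon counts per cubic meter to counts per Planck-length-cubed cell is only a change of volume units; the sketch below quantifies that conversion factor (it is not a reconstruction of Equations (37)-(39)).

```python
import math

# Unit-bookkeeping sketch only: the text switches from photon number densities per cubic
# meter to densities per Planck-length-cubed cell.  This quantifies that conversion; it is
# not a reconstruction of Equations (37)-(39).

hbar, G, c = 1.054571817e-34, 6.674e-11, 2.99792458e8
l_P = math.sqrt(hbar * G / c**3)                     # ~1.6e-35 m

cells_per_m3 = (1.0 / l_P) ** 3
print(f"Planck-length-cubed cells per cubic meter ~ {cells_per_m3:.1e}")   # ~2.4e104

n_gamma_today = 4.1e8       # CMB photons per m^3 at 2.725 K (standard present-day value)
print(f"photons per Planck cell today ~ {n_gamma_today / cells_per_m3:.1e}")
# i.e. enormously fewer than one photon per Planck cell today; the early-universe counts in
# the text rest on the far higher temperatures quoted there.
```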
Another issue as to the tensor/scalar ratio is whether there is a simple consistency relation from the running of the tensor-to-scalar ratio. As noted by Jinn-Ouk Gong (2007) [33], this new relation is first order in the slow-roll approximation. While for single field models we can obtain what can be found by using other observables, multi-field cases in general give non-trivial contributions dependent on the geometry of the field space and the inflationary dynamics, which can be probed observationally from this relation. Gong asserts that laser interferometry will allow one to determine whether inflaton theories should be of the single-field variety or the multiple-field variety, and this is, if confirmed, not that different from determining the nature of emergent gravity, i.e. examining whether or not M. Yu. Kuchiev's [34] supposition, appearing in Classical and Quantum Gravity, that the polarization of instantons affects/controls how gravity appears at the onset of inflation, holds. If multiple fields are confirmed, this may necessitate looking at inhomogeneities in the CMBR, as postulated by Hunt and Sarkar (2008) [35]. In any case, the basic physics of how to interpret scalar and tensor contributions to the CMBR is briefly alluded to in Appendix I and Appendix II of this paper. The Hunt-Sarkar (2008) [35] case of multiple fields may, by necessity, lead to analyzing multiple racetrack inflation, as alluded to in Appendix III.
10. Conclusions
Let us first reference what can be done with further developments in deformation quantization and its applications to gravitational physics. The most noteworthy centers upon Grassmann algebras and the deformation quantization of fermionic fields, i.e. Galaviz (2007) [37] showed that one can obtain a Dirac propagator from classical versions of fermionic fields, and this was a way to obtain minimum quantization conditions for initially classical versions of fermionic fields due to alterations of algebraic structures, in suitable ways. One of the aspects of early universe topology we need to consider is how to introduce quantization in curved space-time geometries, and this is a problem which would, among other things, permit a curved space treatment of
, i.e. as R gets of the order of
, say, that the spatial geometry of early-universe expansion is within a few orders of magnitude of the Planck length, then how can we recover a field-theory quantization condition for
in terms of path integrals. We claim that deformation quantization, if applied successfully, will eventually lead to a great refinement of the above Wheeler-De Witt wave functional value, as well as allow a more thorough match-up of a time-independent solution of the Wheeler-De Witt equation, as given in Appendix IV, with the more subtle pseudo-time-dependent evolution of the wave functional as given in Beckwith (2009) [4], in the third companion piece to this series of articles, as well as Beckwith's (2008, 2009) adaptation of L. Crowell's (2005) book [17]; i.e. the linkage between time-independent treatments of the wave functional of the universe and what Lawrence Crowell wrote up in 2005 will be made more explicit. This will, in addition, allow us to understand better how graviton production in relic conditions may add to entropy, as well as how to link the number of gravitons, say 10^12 gravitons per photon, as information, as a way to preserve the continuity of
values from a prior universe to the present universe. The author claims that in order to do this rigorously, use of the material in Gutt and Waldmann [18] ("Deformation of the Poisson bracket on a symplectic manifold"), as of 2006, will be necessary, especially to recover quantization of severely curved space-time conditions which add more detail to
. Having said this, it is now important to consider what can be said about how relic gravitons/information can pass through minimum values of
.
We shall reference what A. W. Beckwith presented at STAIF 2008 [16], which we think still has current validity for reasons we will elucidate upon in this document. We use a power law relationship first presented by Fontana (2005) [38], who used Park's earlier (1955) [39] derivation: when
. (40)
This expression of power should be compared with the one presented by Massimo Giovannini (2008) [39], on averaging the energy-momentum pseudo-tensor to get his version of a gravitational power energy density expression, namely
. (41)
Giovannini [39] states that should the mass scale be picked such that
, there are doubts that we could even have inflation. However, it is clear that the gravitational wave density is faint, even if we make the
approximation that
as stated by Linde (2008) [40] , where we are
following
in evolution, so we have to use different procedures to come up with relic gravitational wave detection schemes to obtain quantifiable experimental measurements, so that we can start predicting relic gravitational waves. This is especially true if we make use of the following formula for gravitational radiation, as given by L. Kofman et al. (2009) [41], with
as the energy scale, with a stated initial inflationary potential V. This leads to an initial approximation of the emission frequency, using present-day gravitational wave detectors.
. (42)
What we would like to do for the future development of entropy would be to consider a way to ascertain whether or not the following is really true, and to quantify it by an improvement of a supposition advanced by Kiefer, Polarski, and Starobinsky (2000) [42], i.e. the author, Beckwith, has in this document presented the general question of how to avoid having dS/dt = ∞ at S = 0, a condition which:
1) removes any chance that early universe nucleation is a quantum-based emergent field phenomenon;
2) implies that Goldstone gravitons would arise in the beginning due to a violation of Lorentz invariance, i.e. we have a causal break, and merely having the above condition does not qualify as a Lorentz invariance breakdown.
Kiefer, Polarski, and Starobinsky (2000) [42] presented the idea of describing the evolution of relic entropy via the evolution of phase spaces, with Γ/Γ0 being the ratio of "final (future)" to "initial" phase space volume, for k modes of the secondary GW background.
. (43)
If the phase spaces can be quantified, as a starting point of say
, with
being part of how to form the “dimensions” of
, and
part of how to form the dimensions of
, and
being, for a given
, and in certain cases
, then avoiding having dS/dt = ∞ at S = 0 will be straightforward. We hope to come up with an emergent structure for gravitational fields which is congruent with obtaining
naturally, so this sort of procedure is non-controversial, and linked to falsifiable experimental measurement protocol, so quantum gravity becomes a de facto experimental science. We refer the readers to Appendix IV which highlights some of what we think would contribute to experimental gravitational astronomy as we see it.
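To see, as a purely mathematical aside (not a result from [42]), why dS/dt = ∞ at S = 0 arises so easily in classical treatments, note that any entropy history which vanishes at t = 0 like a power law with exponent below one has a divergent rate there:

```latex
% Illustrative power-law entropy history, 0 < p < 1 (an assumed form, for illustration only)
S(t) = S_{0}\left(\frac{t}{t_{0}}\right)^{p}
\;\Longrightarrow\;
\frac{dS}{dt} = \frac{p\,S_{0}}{t_{0}}\left(\frac{t}{t_{0}}\right)^{p-1}
\;\xrightarrow[\;t\to 0^{+}\;]{}\; \infty ,
\qquad \text{while } S \to 0 .
```

Avoiding this behaviour therefore requires an entropy history that switches on smoothly, which is the role the emergent-field and phase-space arguments above are meant to play.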
Acknowledgements
This work is supported in part by National Natural Science Foundation of China grant No. 1137527.
Appendix I. Basic Physics of Achieving Minimum
Precision in CMBR Power Spectra Measurements
Begin first of all by looking at
. (1)
This leads us to consider what to do with
. (2)
Samtleben et al. (2007) [43] then consider what the experimental variance in this power spectrum would be, with an achievable precision given by
. (3)
is the fraction of the sky covered in the measurement, and
is a measurement of the total experimental sensitivity of the apparatus used. Also
is the width of a beam, while we have a minimum value of
which is one over the fluctuation of the angular extent of the experimental survey, i.e. contributions to
uncertainty from sample variance are equal to contributions to
uncertainty from noise. The end result is
. (4)
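Because the appendix equations are not reproduced above, the following sketch assumes the standard Knox-type band-power variance, taken here to be of the same general form as the Samtleben et al. expression referred to in the text; the parameter names (f_sky, w, sigma_beam) are illustrative.

```python
import math

# Hedged sketch of a standard (Knox-type) band-power variance estimate, used as a stand-in
# because the appendix equations are not reproduced here.  Assumed form:
#   Delta C_l / C_l = sqrt(2 / ((2l+1) * f_sky)) * (1 + 1/(w * C_l * B_l^2)),
# with f_sky the observed sky fraction, w the raw sensitivity (weight per solid angle),
# and B_l the beam window function.

def delta_Cl_over_Cl(ell, f_sky, C_l, w, sigma_beam):
    B_l2   = math.exp(-ell * (ell + 1) * sigma_beam**2)   # Gaussian beam window, squared
    sample = math.sqrt(2.0 / ((2 * ell + 1) * f_sky))     # cosmic/sample variance factor
    noise  = 1.0 + 1.0 / (w * C_l * B_l2)                 # instrument-noise correction
    return sample * noise

# Cosmic-variance limit (noise term -> 1) for an illustrative full-sky case:
print(f"l = 10  : {delta_Cl_over_Cl(10, 1.0, 1.0, 1e12, 0.0):.3f}")
print(f"l = 1000: {delta_Cl_over_Cl(1000, 1.0, 1.0, 1e12, 0.0):.4f}")
# At low l, sample variance alone limits the achievable precision, which is the point made
# in the appendix about contributions from sample variance versus noise.
```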
Appendix II. Cosmological Perturbation Theory and Tensor Fluctuations (Gravity Waves)
Durrer (2004) [44] reviews how to interpret
in the region where we have
, roughly in the region of the Sachs-Wolfe contributions due to gravity waves. We begin first of all by looking at an initial perturbation, using a scalar field treatment of the "Bardeen potential".
This can lead us to put up, if
is the initial value of the Hubble expansion parameter
. (1)
And
. (2)
Here we are interpreting A = amplitude of metric perturbations at horizon scale, and we set
, where
is the conformal time, according to
= physical time, where a is the scale factor. Then for
, and
, and a pure power law given by
. (3)
We get for tensor fluctuation, i.e. gravity waves, and a scale invariant spectrum with
. (4)
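As a minimal illustration of the "pure power law" and "scale invariant spectrum" referred to in Equations (3) and (4) of this appendix (the normalization A, tilt n_T, and pivot k_star below are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Minimal sketch: a power-law tensor (gravity-wave) spectrum P_T(k) ∝ k^{n_T}, with
# n_T = 0 corresponding to the scale-invariant case mentioned in the text.

def P_T(k, A=1.0, n_T=0.0, k_star=0.05):
    """Power-law tensor spectrum; n_T = 0 gives a scale-invariant spectrum."""
    return A * (k / k_star) ** n_T

k = np.logspace(-4, 0, 5)
print(P_T(k, n_T=0.0))    # flat across k: scale invariant
print(P_T(k, n_T=-0.02))  # a slight red tilt, for comparison
```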
Appendix III. Managing What to Do with Racetrack Inflation, as Cool Down from Initial Expansion Commences
P. Brax, A. Davis et al. [45] devised a way to describe racetrack inflation, as a way to look at how supergravity directly simplifies implementing inflation with only three T (scalar) fields. The benefit for what we work with is that we may obtain two gaugino condensates and look at inflation with a potential given by Brax et al. (2008) [45]
. (1)
This has scalar fields
as relatively constant and we can look at an effective kinetic energy term along the lines of
. (2)
This ultra-simple version of the racetrack potential is chosen so that the following conditions may be applied:
1) There exists a minimum at
; i.e. we have
, and
, when we are not considering scalar fields
,
2) We set a cosmological constant equal to zero with
,
3) We have a flat saddle at
; i.e.
,
4) We re-scale the potential via
so as to get the observed power spectra
.
Doing all this though frequently leads to the odd situation that
must be small so that
in a racetrack potential system when we analyze how to fit Equation (1) for flat potential behavior modeling inflation. This assumes that we are working with a spectral index of the form such that, if the scalar field power spectrum is
. (3)
Then the spectral index of the inflaton is consistent with WMAP data, i.e. if we have the number of e-foldings
. (4)
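Since Equation (4) is not reproduced above, the following note uses the commonly quoted slow-roll estimate n_s ≈ 1 − 2/N_e (an assumption standing in for the precise racetrack relation) simply to show the size of the WMAP-compatible range being alluded to.

```python
# Hedged numerical note: the slow-roll estimate n_s ≈ 1 - 2/N_e is assumed here, standing in
# for Equation (4), to relate the scalar spectral index to the number of e-foldings N_e.

for N_e in (50, 60, 70):
    n_s = 1.0 - 2.0 / N_e
    print(f"N_e = {N_e:3d}  ->  n_s ≈ {n_s:.3f}")
# N_e ~ 50-70 gives n_s ~ 0.96-0.97, the ballpark consistent with WMAP constraints.
```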
These sorts of restrictions on the spectral index will start to help us retrieve information as to possible inflation models which may be congruent with at least one layer of WMAP data. This model says nothing about whether or not it starts to fit the data issues Subir Sarkar [35] identified in his Pune, India lecture in 2007.
Appendix IV. Gravitational Astronomy Issues to Keep in Mind, with Which This Work Has to Have Fidelity to Experimental Data Sets
Much of this author’s thinking as to this topic is shaped by thinking as represented by [46] , i.e. what if by cosmological non linear electro dynamics, as an example we do not have an initial cosmological singularity.
We claim that this is materially not that different from [47] in intention, and that readers should attempt to review some of the assumptions in Huang's reference.
In doing all of this, Corda’s suggestions as to how early universe conditions can be used to investigate the origins of gravity [48] take on a new significance.
As stated earlier, this work has commonality with the idea of Non linear electrodynamics applied to GR, which is seen in [3] .
Note that information theory and its connections to magnetic fields in space-time are discussed by the author in [49].
This has ties into the Loop quantum gravity suppositions, as well [50] .
We also, by tying our work so closely to the origins of a new magnetic field, which we also state will be important to relic graviton production, give new urgency to necessary reviews of Abbott and the LIGO team as to the evolving experimental science of gravitational astronomy [51] [52].
The readers should also review some of the ideas given in [53] .
Our construction is similar to a bridge between pre- and post-Planckian space-time physics. Note that this is in connection with the interior boundary of space-time, and that our supposition will be matched to a causal boundary barrier between the initial boundary of a quantum bubble and Huang's superfluid universe, post causal boundary barrier, which we write using the ideas of [47].
This will allow us to investigate [48] the origins of gravity as written up by Christian Corda [49], and by Camara, C. S., de Garcia Maia, M. R., Carvalho, J. C. and Lima, J. A. S. (2004), Nonsingular FRW Cosmology and Non Linear Dynamics, arXiv: astro-ph/0402311 v1, Feb 12, 2004.
That quantum bubble hypothesis [49] [50] is our bridge and we cannot contravene [51] [52] as far as gravitational astronomy as we now know it.
In addition we recommend a review of the construction given in [53] [54] which explicitly discusses causal barriers and their implications, which is a game changer if understood.
(1)
What we hope to do is to find commonality in the ideas given as far as information exchange in this present manuscript and to ultimately tie them into [54] .
We also mention, in terms of the CMBR, that an updated version of this inquiry may also complement early universe GW searches [55] and should be reviewed for further upgrades as far as GW astronomy is concerned; if this is suitably set up, the goal, via gravitational astronomy, should be confirmation or rejection of [56], which is still an excellent read.