
This paper shows how increased entropy values, from an initially low big-bang level, can be measured experimentally by counting relic gravitons. Furthermore, the physical mechanism of this entropy increase is explained via analogies with early-universe phase transitions. The role of Ng's revised infinite quantum statistics in the physics of gravitational wave detection is acknowledged. Ng's infinite quantum statistics can be used to show that entropy counted as a particle (graviton) number is a starting point for the increasing net cosmological entropy of the universe. Finally, in a nod to similarities with zero point energy (ZPE) analysis, it is important to note that the resulting initial entropy is in fact much lower, allowing initial graviton production to be evaluated as an emergent field phenomenon, which may be similar to how ZPE states can be used to extract energy from a vacuum if entropy is not maximized. The rapid increase in entropy so alluded to, without a near-sudden jump to 10^{88}, may be enough to allow successful modeling of relic graviton production for entropy in a manner similar to ZPE energy extraction from a vacuum state. This entropy count is akin to quantum information models used to tell how much "information" may be stored in initial conditions and transferred from a prior universe to the present one.

We give the following outline of sections, which will indicate what models of entropy may work. In addition, a de facto caution as to why string theory models may break down at the cosmic singularity is alluded to. To start off this analysis, we take up the following topics, one after the other.

2) Does "Entropy" Have an Explicit Meaning in Astrophysics? (i.e., limitations of the quark-gluon analogy and how such limitations impact the AdS/CFT correspondence [ ]).

3) Ng's infinite quantum statistics [ ].

4) Quantum gas and applications of the Wheeler-De Witt equation to forming a partition function.

5) Brane-antibrane “pairs” and a linkage to Ng’s quantum infinite statistics?

6) Entropy: comparing values from the T(u, v) stress-energy, black holes, and general entropy values obtainable for the universe.

7) Seth Lloyd's [ ] treatment of the universe as a quantum computer, with entropy counted in terms of operations.

8) Simple relationships to consider, with regard to the equivalence relationships used to evaluate T(u, v).

9) Data compression, continuity, and Dowker's [ ] criteria for causal continuity.

10) Controversies over Dark Matter and Dark Energy (DM/DE) in their applications to cosmology, and how High Frequency Gravitational Waves (HFGW) may help resolve them.

This paper will assert that there is a possibility of an equivalence between predicted Wheeler-De Witt equation early-universe conditions and the methodology of string theory, based upon a possible relationship between a counting algorithm for predicting entropy, following an article by Jack Ng [ ] (which gives a way to derive his counting algorithm for entropy). This is due to restating entropy as a numerical graviton density, using the expression given by Glinka [ ] in non-dimensional units. This has to be compared with Carroll's [ ] value of roughly 10^{88} non-dimensional entropy units for the observable universe. Assume that there are over one billion spiral galaxies, each with a massive black hole at its center of entropy about 10^{90}; the spiral-galaxy contribution alone then far exceeds the 10^{88} entropy units for the observed universe, i.e. at least an eight-order-of-magnitude difference in entropy to contend with. The author is convinced, after trial and error, that the standard which should be used is that of talking of information, in the Shannon sense, for entropy, and of finding ways to make a relationship between quantum computing operations and Shannon information. Making the identification of entropy with a particle (graviton) count, in Ng's sense, is the working assumption of this paper.
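As a quick arithmetic cross-check of the orders of magnitude just quoted (the galaxy count, the per-black-hole entropy, and Carroll's figure are the values from the text; the Python sketch below merely multiplies them):

from math import log10

# Order-of-magnitude comparison of the entropy budgets quoted in the text.
n_galaxies = 1.0e9          # "over one billion" spiral galaxies (text value)
s_per_black_hole = 1.0e90   # entropy per central massive black hole (text value)
s_carroll = 1.0e88          # Carroll's non-dimensional entropy estimate (text value)

s_black_holes = n_galaxies * s_per_black_hole
print("black-hole contribution ~ 10^%.0f" % log10(s_black_holes))
print("Carroll's estimate      ~ 10^%.0f" % log10(s_carroll))
print("orders-of-magnitude gap ~ %.0f" % (log10(s_black_holes) - log10(s_carroll)))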

Let us now examine candidates for entropy and discuss their advantages and limitations.

In this inquiry, we should take care not to fall into several pitfalls of analysis. We should avoid conflating any conceivable connection between zero point energy extraction, especially ZPE and fluctuation states of ZPE, and data compression; the two do not mix, for reasons which will be elaborated upon in the text. Secondly, the discussion we are embarking upon has no connection with intelligent design. Lossless data compression is a class of data compression algorithms that allows the exact original data to be reconstructed from the compressed data. If we are, as an example, to reconstruct information pertinent to keeping the same cosmological parameter values from a prior universe to the present one, lossless reconstruction is the appropriate analogy.

Alan Heavens et al. [ ] have shown, in the context of large cosmological data sets, that radical data compression is possible with essentially no loss of parameter information; this is the sense in which "compression" is used here.

We have reviewed A. K. Avessian's article [ ] on the evolution of fundamental physical "constants" in the early universe. Avessian's article [ ] makes, in effect, the following supposition. The idea is that we are assuming a granular, discrete nature of space-time. Furthermore, after a time which we will state as t ~ t_{Planck}, there is a transition to the present value, which is then probably held constant.

It is easy, in this situation, to get an interrelationship of what the initial parameters have to be. This leads us to ask what can be done to get a starting value of the entropy/information content just before inflation.

We obtain a minimum value as far as "information" via appealing to Hogan's [ ] holographic model, and this can be compared with the value implied by A. K. Avessian's article [ ].

I.e., a choice as to how the initial information content is specified gives a ball-park estimate as to compressed values of entropy/information transferred from a prior universe to today's universe. If H is nearly zero at the moment of transfer, the transferred entropy/information content is at its minimum.

Typically, though, the value of the Hubble parameter during inflation itself is huge, i.e. H is many times larger than 1, leading to initially very small entropy values. This means that we have to assume, initially, for a minimum transfer of entropy/information from a prior universe, that H is negligible. If we look at Hogan's holographic model, this is consistent with a non-finite (arbitrarily large) event horizon.

This is tied in with a temperature as given by Equation (5), which relates the expansion parameter H to the temperature T.

Nearly infinite temperatures are associated with tiny event-horizon values, which in turn are linked to huge Hubble parameters of expansion; whereas initially nearly zero values of temperature can arguably be linked to nearly non-existent H values, which in turn would be consistent with a minimal initial entropy.

If, in this situation, N is proportional to entropy, i.e. N is the number of entropy states to consider, then as H drops in size, as would happen in pre-inflation conditions, we will have opportunities for N ~ 10^{5}.

The peak temperature as recorded by Weinberg [ ] is of the order of 10^{32} Kelvin, and that would imply using the expansion parameter H as given by Equation (5) above. Likely, before the onset of inflation, due to dimensional arguments, it is safe to call the pre-inflation temperature T very low. I.e. there was a build-up of temperature T at the instant before inflation, which peaked shortly afterwards. Such an eventuality would be consistent with the use of a wormhole bridge from a prior to a present universe; Beckwith [ ] has modeled such a bridge (see also Appendix II).

Here, C is the inverse Fourier transform, w(·,·) is a weight function, and p and q are canonical variables fitting into the quantization map of Equation (7). As a refinement of Poisson brackets, the above Weyl quantization is, as noted by S. Gutt and S. Waldmann [ ], the starting point of deformation quantization; the Poisson bracket is then recovered in the classical limit of the associated star-product commutator.
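For reference, a standard form of the weighted Weyl quantization map consistent with the description of C, w(·,·), p and q above is the following; this is an assumption as to what the missing Equations (6) and (7) contained, not a reconstruction of them:

\[
\hat{W}_{w}(f) \;=\; \int d\xi\, d\eta \;\tilde{f}(\xi,\eta)\, w(\xi,\eta)\, e^{\,i(\xi \hat{p} + \eta \hat{q})},
\qquad
\tilde{f}(\xi,\eta) \;=\; \frac{1}{(2\pi)^{2}} \int dp\, dq \; f(p,q)\, e^{-i(\xi p + \eta q)},
\]

with w ≡ 1 recovering the symmetric (Weyl) ordering, and with the commutator reproducing the Poisson bracket to lowest order in ħ:

\[
\frac{1}{i\hbar}\,\big[\hat{W}(f),\hat{W}(g)\big] \;=\; \hat{W}\big(\{f,g\}\big) + O(\hbar).
\]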

For very small regimes of spatial integration, we can approximate Equation (7) as a finite sum, with the integrals over the canonical variables replaced by sums over a discrete set of points. What we are doing is, de facto, to give a numerical approximation of this transform; in this sense the truncated inverse transform is a form of data compression of information. Here, we will state that the relevant spatial region is of the order of R,

where R refers to a spatial distance from the center of a spherical universe. Appendix II is an accounting of what is known as a pseudo-time-dependent solution to the Wheeler-De Witt equation involving a wormhole bridge between two universes. The metric assumed in Appendix I is a typical maximally symmetric metric, whereas Appendix II uses the Reissner-Nordström metric. We assume that, to first order, if the value of R is of the order of a few Planck lengths, then the question of how much information from a prior universe is transferred to our own may be addressed, and that the solution of Appendix II applies.

To begin this inquiry, it is appropriate to note that we are assuming that there is a variation in the relevant quantities across this bridge [ ]. If each photon "encapsulates" of the order of 10^{12} gravitons, then taking R of the order of Planck's length gives us a flux value as to how many gravitons/entropy units are transmitted. The key point is that we wish to determine the minimum amount of information bits/attendant entropy values needed for transmission. If we take a rest mass of 10^{−60} grams per graviton, use photons as a way of "encapsulating" 10^{12} gravitons/entropy units each, and regard each graviton, in the beginning, as one "carrier container" of one unit of information, then for the roughly 10^{21} gravitons introduced during the onset of inflation this means a minimum of about one billion photon "carriers".
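A back-of-the-envelope tally of these figures (the graviton rest mass, the gravitons-per-photon count, and the graviton total at the onset of inflation are the values quoted in the text; the Python sketch below only restates the arithmetic):

# Sketch of the counting argument: how many photon "carrier containers"
# are needed for the relic gravitons quoted in the text.
m_graviton_g = 1e-60         # graviton rest mass in grams (text value)
gravitons_per_photon = 1e12  # gravitons/entropy units "encapsulated" per photon
gravitons_at_onset = 1e21    # relic gravitons at the onset of inflation (text value)

photons_needed = gravitons_at_onset / gravitons_per_photon
total_rest_mass_g = gravitons_at_onset * m_graviton_g

print("photon carriers needed   ~ %.1e" % photons_needed)       # ~1e9, i.e. about one billion
print("total graviton rest mass ~ %.1e grams" % total_rest_mass_g)  # ~1e-39 g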

For those who doubt that 10^{−60} grams per rest mass of a graviton can be reconciled with observational tests of the Equivalence Principle and all classical weak-field tests, we refer the reader to Visser's article about "Mass for the graviton" [ ].

What is being alluded to is that variations from the usual AdS/CFT correspondence applications exist, compared with what is usually assumed for ordinary matter. The differences, which are due to quark-gluon plasma models breaking down at the beginning of the big bang, point to the necessity of using something similar to the counting algorithm introduced by Ng as a replacement for typical string theory models in strict accordance with the AdS/CFT correspondence.

The goal of exploring the degree of divergence from the AdS/CFT correspondence will be to quantify the point in the time evolution of the big bang where there is a break from causal continuity.

A breakdown in causal continuity [ ] marks a point in the early evolution of the universe at which a causal ordering of events cannot be maintained.

The problem with saying that data is compressed is that this, at least in the popular imagination, implies highly specific machine/IT analogues. We wish to assure the readers that no appeal to intelligent design/deity based arguments is implied in this document.

A point where there is a breakage in causal continuity will help determine whether or not there is a reason for data compression. In computer science and information theory, data compression, or source coding, is the process of encoding information using fewer bits (or other information-bearing units) than an unencoded representation would use, through use of specific encoding schemes. Using fewer bits of an encoding scheme for "information" may in its own way allow data compression. We need a similar model for explaining the degree of information transferred from a prior universe to the present, while maintaining the structural integrity of the basic cosmological parameters.
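As a concrete, purely illustrative example of the lossless-compression notion invoked here (nothing cosmological is implied; the Python sketch below only demonstrates "fewer bits, exact reconstruction" with a standard library):

import zlib

# Minimal demonstration of lossless data compression: fewer bits on the wire,
# exact reconstruction on the other side.
original = b"cosmological parameters: H0, Omega_m, Omega_Lambda, n_s, sigma_8 " * 100
compressed = zlib.compress(original, level=9)
restored = zlib.decompress(compressed)

assert restored == original          # exact reconstruction: no information lost
print("original bytes:  ", len(original))
print("compressed bytes:", len(compressed))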

Again, while avoiding the intelligent design analogies, it is possible to argue that, if information were restricted to dimensions other than the typical four space-time dimensions, with the fifth and higher dimensions as our information conduit, then data compression did occur by default: much of the information encoded in kink-anti-kink gravitons disappeared from four-space before the big bang and then re-appeared in our present-day four-space geometry as spill-over from a fifth dimension.

Since physicists, writers, and editors have a problem with any remote degree of ambiguity, let us briefly review what is known about singularity theorems for GR in four-space, and then make a reasonable extrapolation to a fifth-dimensional embedding of the four dimensions, to make our point about singularities in four-space more understandable.

Feinstein et al. [ ] provide a useful starting point for this extrapolation.

This will be done especially when entropy is held to be in tandem with a "particle count" of instanton-anti-instanton packaged gravitons as the mechanism for the increase of entropy from a much lower level to today's level. To begin this analysis, let us look at what goes wrong in models of the early universe. The assertion made is that this is due to the quark-gluon model of plasmas having major "counting algorithm" breaks with the conditions that existed BEFORE the advent of the quark-gluon plasma. Here are some questions which need to be asked [ ]:

1. Is the QGP strongly coupled or not? Note: strong coupling is a natural explanation for the small (shear) viscosity.

Analogy to RHIC: the J/ψ survives the deconfinement phase transition.

2. What is the nature of viscosity in the early universe? What is the standard story? (Hint: AdS/CFT correspondence models.) Question 2 comes up since the AdS/CFT-motivated lower bound on the ratio of shear viscosity to entropy density, η/s ≥ ħ/(4πk_B) [ ], typically holds for liquid helium and most bosonic matter. However, this relation breaks down at the beginning of the big bang, as follows.

I.e. if Gauss-Bonnet gravity is assumed, then in order to still preserve causality one needs to have Equation (11) satisfied.
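For orientation, a widely quoted form of the causality-constrained Gauss-Bonnet bound is given below; this is offered only as an assumption about the content of Equation (11), which is not reproduced in this copy:

\[
\frac{\eta}{s} \;\ge\; \frac{\hbar}{4\pi k_{B}}\left(1 - 4\lambda_{GB}\right),
\qquad
\lambda_{GB} \le \frac{9}{100}
\;\;\Longrightarrow\;\;
\frac{\eta}{s} \;\ge\; \frac{16}{25}\cdot\frac{\hbar}{4\pi k_{B}},
\]

i.e. the Gauss-Bonnet coupling λ_GB can lower the viscosity-to-entropy-density ratio below the AdS/CFT value, but causality caps how far it can go.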

This holds even if one writes for the viscosity-over-entropy ratio the following expression [ ].

A careful researcher may ask why this is so important. If a causal discontinuity as indicated means that the viscosity-to-entropy-density ratio becomes very small, then the implied near-infinite number of collisions is due to WHAT physical process? Recall the argument put up earlier, i.e. the reference to causal discontinuity in four dimensions and a restriction of information flow to a fifth dimension at the onset of the big bang/transition from a prior universe. That increase in collisions may be inherent in the restriction of information flow to a fifth dimension just before the big bang singularity in four dimensions. In fact, it may very well be true that initially, during the process of restriction to a 5th dimension right before the big bang, the effective number of collisions per unit of geometric "sizing" became nearly infinite.

It is the following qualifications about quark-gluon plasmas which matter here. Namely, more collisions imply less viscosity; more deflections ALSO imply less viscosity; and, finally, the more momentum transport is prevented, the smaller the viscosity value becomes. Say that a physics researcher is looking at viscosity due to turbulent fields, as well as at perturbatively calculated viscosities due to collisions. The former has been known as anomalous viscosity in plasma physics [ ].

It so happens that RHIC models for viscosity assume [ ] a perturbative, collision-dominated expression.

As Akazawa [ ] notes, such an expression presupposes a well-defined, slowly varying temperature.

If the temperature T varies wildly, as it does at the onset of the big bang, this breaks down completely. This is why we need a different argument for entropy. I.e., even for RHIC, and in computational models of the viscosity for closed geometries, the following goes wrong in computational models:

• Viscous stress is NOT proportional to shear (see the standard relation sketched after this list).

• Nonlinear response: impossible to obtain on the lattice (computationally speaking).

• Bottom line: we DO NOT have a way to even define SHEAR in the vicinity of the big bang.
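For reference, the standard first-order (Navier-Stokes) relation whose applicability is being denied in the bullets above is the textbook proportionality of viscous stress to shear:

\[
\sigma^{\mathrm{visc}}_{ij} \;=\; \eta\left(\partial_i v_j + \partial_j v_i - \tfrac{2}{3}\,\delta_{ij}\,\partial_k v_k\right),
\]

with η the shear viscosity and v the flow velocity. Near the big bang, per the argument above, the shear of the flow field itself cannot be defined unambiguously, so this relation has no clear meaning there.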

We now need to ask ourselves how entropy/entropy density may be presented in a manner consistent with explaining how the entropy/entropy density may become extraordinarily large, and such that, outside of the restriction-to-a-fifth-dimension argument mentioned earlier for "information" transferal at the onset of the big bang, it is not necessary to appeal to nearly infinite collisions in order to have a drop in viscosity. This will lead to Ng's "particle count" algorithm [ ].

It is useful to state this convention for analyzing the resulting entropy calculations, because it is a way to explain how and why the number of instanton-anti-instanton pairs, and their formation and break-up, can be linked to the growth of entropy. If, as an example, there is a linkage between the quantum energy-level components of the quantum gas brought up by Glinka [ ] and the number of such pairs, then the growth of entropy can be tied directly to pair production.

Furthermore, finding out whether it is a drop in viscosity that accompanies a large entropy density may tell us how much information is, indeed, transferred from a prior universe to our present. This leaves open the question of whether there is "infinite" density of ordinary matter, or whether there is a fifth-dimensional leakage of "information" from a prior universe to our present. If there is merely infinite "density", and possibly infinite entropy density/disorder at the origin, then perhaps no information from a prior universe is transferred to our present universe. On the other hand, having a fifth-dimensional information conduit would allow a finite, non-zero transfer, which is the alternative pursued here.

Details about this have to be worked out, because at present one of the few tools left for the formulation and proof of the singularity theorems is the Raychaudhuri equation, which describes the divergence θ of a congruence (family) of geodesics, and which has a lot of assumptions behind it, as stated by Dadhich [ ].
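Since the Raychaudhuri equation is invoked here, it may help to quote its standard textbook form for a timelike geodesic congruence (this is the generic form, not a reconstruction of any equation from the original paper):

\[
\frac{d\theta}{d\tau} \;=\; -\frac{1}{3}\theta^{2} \;-\; \sigma_{ab}\sigma^{ab} \;+\; \omega_{ab}\omega^{ab} \;-\; R_{ab}u^{a}u^{b},
\]

where θ is the expansion, σ_{ab} the shear, ω_{ab} the rotation of the congruence, and u^{a} its tangent vector; the singularity theorems follow once energy conditions force the right-hand side to be negative, driving θ to −∞ in finite proper time.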

Let us first summarize what can be said about Ng's quantum infinite statistics. Afterwards, the numerical counting involved has a direct connection with the pairs of brane-anti-brane (kink-anti-kink) structures Mathur and others worked with to get an entropy count. Ng [ ] proceeds as follows.

This, according to Ng, leads to entropy having the limiting value S ≈ N, i.e. entropy proportional to the particle (graviton) count.
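A minimal sketch of Ng's counting argument, under the standard assumptions he uses (this paraphrases Ng's published reasoning; it is not a reconstruction of the equations missing from this copy):

\[
Z_{N} \sim \frac{1}{N!}\left(\frac{V}{\lambda^{3}}\right)^{N}
\;\;\longrightarrow\;\;
Z_{N} \sim \left(\frac{V}{\lambda^{3}}\right)^{N}
\quad\text{(Gibbs } 1/N! \text{ factor dropped under infinite statistics)},
\]
\[
S \;\approx\; k_{B}\,N\left[\ln\!\left(\frac{V}{\lambda^{3}}\right) + O(1)\right] \;\approx\; k_{B}\,N
\qquad\text{for } V \sim \lambda^{3},
\]

i.e. dropping the Gibbs factor, which is permissible for "infinite" quantum statistics, removes the ln N suppression and leaves the entropy proportional to the particle count N.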

But the numerical value of N itself remains to be specified.

The supposition is that the value of N is proportional to a numerical DM (dark matter) density, referred to below.

HFGW would play a role if such a linkage can be made between this numerical DM density and relic graviton production.

Such a linkage would open up the possibility that the density of primordial gravitational waves could be examined and linked to modeling gravity as an effective theory, as well as giving credence to how to avoid dS/dt = ∞ at S = 0. If so, then one can look at the research results of Samir Mathur [ ] on the entropy of brane-anti-brane systems.

This suggests that entropy scaling is proportional to a power of the vacuum energy, i.e., entropy ~ vacuum energy, if the energy so computed is interpreted as a total net energy proportional to the vacuum energy.

Traditionally, the minimum length for space-time benchmarking has been set via the quantum-gravity prescription of a minimum Planck length, i.e. a grid of space-time with Planck-length spacing, whereas here this grid is changed to something bigger, which in turn rescales the counting of entropy.

This expression for entropy (based on the number of brane-anti-brane pairs) has a net energy value given in Equation (22).

Equation (22) can be changed and rescaled to treat the mass and the energy of the brane contribution along the lines of Mathur's CQG article [ ].

The windings of a string are given by figure 6.1 of Becker et al. [ ].

We start off by looking at vacuum energy and entropy. As above, entropy scaling is proportional to a power of the vacuum energy, i.e., entropy ~ vacuum energy, if the net energy is interpreted as being proportional to the vacuum energy; i.e., go to Equation (10) above. What will hopefully be done, with proper analysis of T(u, v) at the onset of creation, is to distinguish between, say, the entropy of what Mathur [ ] calculates for brane-anti-brane pairs and the entropy of the universe as a whole.

This needs to be compared with the entropy of the universe, as given by Sean Carroll [ ].

The claim made here is that if one knew how to evaluate T(u, v) properly, the up-to-10^{9} difference between Equations (25) and (26) would be understandable and could be dealt with directly. Doing so is feasible if one understands the differences/similarities among Equations (21), (23), and (24) above. So, how does one do this? The candidate picked, which may be able to obtain some commonality among the different entropy formalisms, is to confront what is both right and wrong in Seth Lloyd's entropy treatment in terms of operations, as given below. Furthermore, what is done should avoid the catastrophe inherent in solving the problem of dS/dt = ∞ at S = 0, as stated to the author in a presentation he saw [ ].

By necessity, entropy will be examined using the equivalence between the number of operations which Seth Lloyd used in his model and the total units of entropy which the author referenced from Carroll and other theorists. The key equation Lloyd [ ] uses is Equation (27) below.

Lloyd is making a direct reference to a linkage between entropy and the number of operations for which a quantum-computer model of how the universe evolves is responsible, in the onset of a big bang picture. If Equation (27) is accepted, then the issue is what the unit of operation is, i.e. the mechanism involved in an operation assembling a graviton, and whether that can be reconciled with T(0, 0) as could be read from Equation (28) below.
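For orientation, Lloyd's operation count is usually quoted in the following form; we offer it as an assumption about what Equation (27) contains, since the equation itself is not reproduced in this copy:

\[
N_{\mathrm{ops}} \;\lesssim\; \frac{2\,E\,t}{\pi\hbar} \;\sim\; 10^{120}
\quad\text{(observable universe)},
\qquad
N_{\mathrm{bits}} \;\sim\; \frac{S}{k_{B}\ln 2} \;\sim\; 10^{90},
\]

i.e. the Margolus-Levitin bound on elementary operations for total energy E acting over time t, together with Lloyd's estimate of the number of bits the matter content of the universe can register.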

What needs to be understood and evaluated is whether there is a restructuring of an appropriate frame of reference for T(u, v), and its resultant effects upon how to reconcile black-hole entropy, i.e. Equation (25), with Equation (26) and Equation (27). A good place to start would be to obtain T(u, v) values which are consistent with the slides of the two-way versus one-way light-speed presentation at the ISEG 2009 conference. We wish to obtain T(u, v) values properly analyzed with respect to early-universe metrics, and PROPERLY extrapolated to today, so that ZPE energy extraction, as pursued by many, will be the model for an emergent-field development of entropy. Note that the easiest version of T(u, v), as presented by [ ], is used in Equation (28).

What is affected by Unnikrishnan's 2009 presentation [ ] is the frame of reference in which T(u, v) is to be evaluated.

While stating this, it is important to keep in mind that Wald [ ] defines the relevant stress-energy expectation values as follows.

We define the operation ⟨A⟩, where A is a bounded operator and ⟨ ⟩ an inner product, via use of [ ].

The job will be to keep this same equivalence relationship intact for space time, no matter what is done with the metric g(a, b) in the T(a, b) expressions we work with, which will be elaborations of Equation (28) above.

This is closely tied in with data compression and with how much "information" material from a prior universe is transferred to our present universe. In order to do such an analysis of data compression and of what is sent to our present universe from a prior universe, it is useful to consider how there would be an eventual increase in information/entropy terms from 10^{21} to 10^{88}. Too rapid an increase would lead to the same problem ZPE researchers have: if entropy is maximized too quickly, we have no chance of extracting ZPE energy from a vacuum state, i.e. no emergent phenomenon is possible. What to avoid is akin to the situation summarized in Equation (32) below [ ].

Equation (32) is from Giovannini, and it states that if all entropy in the universe were solely due to graviton production [ ], roughly 10^{88} entropy "units" would be created right at the onset of the big bang. This does NOT happen.
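As a trivial numerical restatement of how large the required growth is (the endpoint values 10^{21} and 10^{88} are the ones used in the text; the Python sketch below only recasts them as a growth factor and a number of doublings):

from math import log10, log2

# How big is the entropy increase from the onset of inflation to today,
# using the figures quoted in the text?
s_initial = 1e21   # entropy/information units at the onset of inflation (text value)
s_today = 1e88     # present-day entropy estimate (text value)

factor = s_today / s_initial
print("growth factor     ~ 10^%.0f" % log10(factor))  # ~10^67
print("entropy doublings ~ %.0f" % log2(factor))       # ~223 doublings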

What will eventually need to be explained is whether or not 10^{7} entropy units, as information transferred from a prior big bang to our present universe, would be enough to preserve the basic cosmological parameters. If 10^{7} entropy/information units are exchanged via data compression from a prior to our present universe, then Equation (27), and the resultant increase in entropy up to 10^{88} entropy "units", will involve the singularity theorems of cosmology, as well as explanations as to how this growth occurred. To do so, Equation (27), and a mechanism for the assembly of gravitons from a kink-anti-kink structure, is a de rigueur development. We need to find a way to verify this tally of results experimentally, and to find conditions under which the abrupt re-formation of a near-constant cosmological constant, i.e. more stable vacuum-energy conditions right after the big bang itself, would allow for a reformulation of SO(4) gauge-theory conditions.

What Is the Bridge between the Low Entropy of the Early Universe and Its Rapid Build-Up Later?

Penrose, in a contribution to a conference [ ], raised this question; the answer pursued here involves relic gravitational waves at frequencies up to roughly 10^{10} Hertz (shown in the accompanying figure).

We are attempting to find an appropriate phase-transition argument for the onset of entropy creation and the right graviton-production expression needed [ ].

The needed expression is akin to explaining how, and why, changes in the temperature T lead, as the temperature increases, to an emergent-field description of how gravitons arose. We claim that obtaining a physically consistent description of the entropy density would be akin to a study, with increasing and then decreasing temperatures, of how the kink-anti-kink structure of gravitons developed. This would entail developing a consistent picture, via SO(4) theory, of gravitons being assembled from a vacuum-energy background, and giving definition to Seth Lloyd's computational operations [ ] at the level of about 10^{−30} for δm/m. Once this is obtained, the evolution of cosmological structure can be investigated properly, with the following as targets of opportunity for smart applications of HFGW detectors.

Let us first reference what can be done with further developments in deformation quantization and its applications to gravitational physics. The most noteworthy work the author has seen centers upon Grassmann algebras and the deformation quantization of fermionic fields. We note that Galaviz [ ] has investigated this problem, which would, among other things, permit a curved-space treatment of fermionic fields. One should be aware that, as the spatial variable R gets to be of the order of the Planck length, the usual field-theory quantization conditions, and the associated path integrals, have to be re-examined. We claim that deformation quantization, if applied successfully, will eventually lead to a great refinement of the above Wheeler-De Witt wave-functional value, as well as allow a more thorough match-up with a time-independent solution of the Wheeler-De Witt equation, as given in Appendix I. We claim that the linkage between time-independent treatments of the wave functional of the universe, what Lawrence Crowell [ ] wrote, and the transfer of roughly 10^{12} gravitons per photon as information, is a way to preserve continuity as gravitons/information pass through minimum values of the spatial scale R.

This expression for the power should be compared with the one presented by Giovannini [ ].

It is important to note that Giovannini [ ] uses a different procedure; researchers will need to reconcile these different procedures in order to come up with relic gravitational-wave detection schemes that yield quantifiable experimental measurements, so we can start predicting relic gravitational waves. This is especially true if we make use of the following formula for gravitational radiation, as given by Koffman [ ].

We will be trying to get to the bottom of what was listed, especially in the DM/DE conundrums, probably by judicious application of good high frequency detector work.

If we obtain good data via appropriate detector work, we intend to confirm the following via measurements, and to check that Equation (36) is not affected by the graceful exit (from inflation), as mentioned in [ ].

Note that this research development should include the research results of Dr. Corda, as in [ ].

Furthermore, we should note that the development of a confirmation of these results remains to be carried out.

The author wishes to thank Dr. Fangyu Li, as well as Stuart Allen of International Media Associates, who freed the author to think about physics and get back to his work. This work is supported in part by National Natural Science Foundation of China grant No. 110752.


This is a recapitulation of what is written by S. Capozziello [ ].

so that

Similarly,

Also, when

Realistically, in terms of applications, we will be considering very small x values, consistent with conditions near a singularity/wormhole bridge between a prior and our present universe. This is for

We are using reference [ ] for the following.

When we do this, we obtain

In order to do this, we can write out the following for the solutions to Equation (1) above.

And

This is where

It so happens that this is the wave functional that permits an instanton to form. Next, we should consider whether or not the instanton so formed is stable under the evolution of space-time leading up to inflation. To model this, we use results from Crowell (2005) on quantum fluctuations in space-time, which give a model from a pseudo-time-component version of the Wheeler-De-Witt equation, with use of the Reissner-Nordström metric, to help us obtain a solution that passes through a thin shell separating two space-times. The radius of the shell enters as follows.

This has:

This assumes that the cosmological vacuum energy parameter has a temperature dependence as outlined by Park (2003), leading to

as a wave-functional solution to a Wheeler-De-Witt equation bridging two space-times. This solution is similar to one describing an "instantaneous" transfer of thermal heat between these two space-times, as given by Crowell (2005).

This has C_{1} = Equation (3) above, and C_{2} = Equation (4) above. Equation (8) is an approximate solution to the pseudo-time-dependent Wheeler-De-Witt equation. The advantage of Equation (8) is that it represents, to good first approximation, gravitational squeezing of the vacuum state.
