Using “Enhanced Quantization” to Bound the Cosmological Constant, and Computing Quantum Number n for Production of 100 Relic Mini Black Holes in a Spherical Region of Emergent Space-Time
1. Basic Idea: Can Two First Integrals Give Equivalent Information?
We admit this paper has some similarity to [1]; what we will do is, instead of using the Hamber result of [2] as a first integral, use what John Klauder wrote in [3] to form a first integral, in order to make a one-to-one equivalence with the first integral associated with general relativity [4] [5]. As was done in [1], we posit a one-to-one relationship between two first action integrals, and the idea is to avoid a point cosmic singularity; instead, there is a regime of space-time incorporating the idea of a cosmic bounce, as given in [6], with interior and exterior regimes. This also overlaps with work done by the author in [7], with the caveat that there is a barrier between the interior and exterior regimes of space-time, and that we are evaluating the space in the interior of a space-time bubble. The integrands in the two integrals are assumed to have a 1-1 relationship to one another. We will identify the two first integrals in the next section.
2. Now for the General Relativity First Integral, from [1]
We use the Padmanabhan 1st integral [8] of the form given below, with the third entry of Equation (1) having a Ricci scalar defined via [9], and usually with the curvature set as extremely small; the general relativity version of this, from [1], is

$S_1 = \frac{1}{2\kappa}\int \sqrt{-g}\, d^4x\, \left(\Re - 2\Lambda\right)$ (1)
Also, the variation of $\delta g_{tt}$, as given by [10] [11], will have an inflaton $\phi$ given by [9]

$a = a_{\min} t^{\gamma}, \qquad \phi = \sqrt{\frac{\gamma}{4\pi G}}\,\ln\!\left( \sqrt{\frac{8\pi G V_0}{\gamma(3\gamma - 1)}}\; t \right)$ (2)
This leads, via [1] [9], to the following for the inflaton, which is combined with other procedures for a solution to the cosmological constant problem.

(3)
Here, we have that $a_{\min}$ is a minimum value of the scale factor, presumably given by [12] as a tiny but non-zero value, or at least arising from a quantum bounce as given in [1].
3. Next for the Idea from Klauder
We are going to go to page 78 of Klauder [3], as to his idea of what he calls a restricted quantum action principle, which he writes as

$S_2 = \int_0^T dt\,\langle p(t), q(t)|\left[ i\hbar\frac{\partial}{\partial t} - \hat{H}_N\big(p(t), q(t)\big)\right]|p(t), q(t)\rangle$

where we then write a 1-1 equivalence, as in [1], so that

$S_1 \approx S_2$ (4)
Our assumption is that $\Lambda$ is a constant; hence we assume the following, i.e. a Pre-Planckian instant of time, say some power of Planck time in length, thereby getting the following approximation

(5)
4. Filling in the Details of the Above, Using Inputs from [3], with Explanations
To do this, we are making several assumptions.
1) That the two mentioned integrals are evaluated from a Pre-Planckian to Planckian space-time domain, i.e. over the same specified space-time region.
2) That, in doing so, the Universe is assumed to avoid the so-called cosmic singularity, assuming instead a finite “Pre-Planckian to Planckian” regime of space-time like that given in [1], with reference also to the cosmic bounce given in [7].
3) Assuming that, even in the Pre-Planckian to Planckian regime, the curvature will be a very small part of the Ricci scalar, and that, to first approximation, even in the Planck-time regime, it has, to first order [13], a value altered to be

(6)
Furthermore, we can make assumptions as to the nature of the cosmic bubble, in assuming that there is a barrier between the Pre-Planckian and Planckian physics regimes, so that we have a quantum-mechanical-style potential well, so to speak, in the evaluation of reference [7]; using Klauder's [3] notation, N represents the strength of the wall, i.e. of the Pre-Planckian to Planckian bubble boundary:

(7)
Our innovation is then to equate Klauder's variable q with the inflaton of Equation (2), and to assume small time-step values. Then

(8)
These are terms within the bubble of space-time given in [7], using the same inflaton potential. The scale factor is presumed here to obey the value given in [12].
5. Why This Is Linked to Gravity/Massive Gravitons
Klauder's program is to isolate a regime of space-time for a proper canonical quantization of a classical system; i.e. what we did is to utilize the ideas of [3] to make the identification of Equation (7), which, when combined with inflaton physics, gives an enhanced quantization of the often-assumed-to-be-classical inflaton of Equation (3); i.e. to embed, via Equation (7), a quantum mechanical well about a Pre-Planckian system for inflaton physics as given by Equation (3). In short, the scaling of our problem for a bound on the cosmological constant in Pre-Planckian space-time is as given in Klauder's treatment of the action integral on page 87 of [3], where Klauder talks of the weak correspondence principle, in which an enhanced classical Hamiltonian is given a 1-1 correspondence with quantum effects, in a non-vanishing fashion.
i.e. for the sake of argument, we will make the following assumption, which may be debatable, namely that

is approximately a constant (9)

for extremely small time intervals (at the boundary between the Pre-Planckian and Planckian physical regimes), as given in [11]. This approximation is why the author assumes Equation (9).

(10)
If so, then through this procedure we make a linkage directly to the mass of a graviton, as given by Novello [13]:

$m_g = \frac{\hbar}{c}\sqrt{\frac{2\Lambda}{3}}$ (11)

This is a way, then, to ascertain a bound, based upon the early-universe conditions so set forth, on the mass of the effective heavy graviton.
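As an illustrative sanity check, the Python sketch below evaluates the Novello relation reconstructed above as Equation (11), using the present-era value of the cosmological constant as input; the choice of Λ ≈ 1.1 × 10⁻⁵² m⁻² is our assumption for illustration only, since the bound developed in this paper refers to a Pre-Planckian value of Λ that is not reproduced here.

```python
# Hedged numerical illustration of Equation (11): m_g = (hbar/c)*sqrt(2*Lambda/3).
# The present-era Lambda is an illustrative input, not the paper's bound.
import math

hbar = 1.054_571_817e-34   # reduced Planck constant, J*s
c = 2.997_924_58e8         # speed of light, m/s
eV = 1.602_176_634e-19     # J per eV
Lambda_today = 1.1e-52     # observed cosmological constant, 1/m^2 (approx.)

m_g = (hbar / c) * math.sqrt(2.0 * Lambda_today / 3.0)   # graviton mass, kg
m_g_eV = m_g * c**2 / eV                                  # rest energy, eV

print(f"m_g ~ {m_g:.2e} kg ~ {m_g_eV:.2e} eV/c^2")
# -> m_g ~ 3.0e-69 kg ~ 1.7e-33 eV/c^2, consistent with the 'heavy graviton'
#    bounds of order 10^-33 eV quoted in the massive-gravity literature.
```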
6. Reviewing the Multiverse Generalization of the CCC of Penrose, with Suggestions as to a Uniform Bound to the Graviton Mass per Cyclic Conformal Cosmology Cycle, and How This Relates to Reference [1]'s Conclusions
We are extending Penrose's suggestion of cyclic universes, black hole evaporation, and the embedding structure our universe is contained within. This multiverse embeds black holes and may resolve what appears to be an impossible dichotomy. The following is largely taken from [14] and has serious relevance to the final part of the conclusion. We assume that there are no fewer than N universes undergoing Penrose “infinite expansion” [15], contained in a mega-universe structure. Furthermore, each of the N universes has black hole evaporation, with Hawking radiation from decaying black holes. If each of the N universes is defined by a partition function, called $\{\Xi_i\}_{i=1}^{N}$, then there exists an information ensemble of mixed minimum information, correlated as about $10^7$ - $10^8$ bits of information per partition function in the set $\{\Xi_i\}_{i=1}^{N}\big|_{\text{before}}$, so minimum information is conserved between a set of partition functions per universe:

$\{\Xi_i\}_{i=1}^{N}\Big|_{\text{before}} \equiv \{\Xi_i\}_{i=1}^{N}\Big|_{\text{after}}$ (12)
However, there is non-uniqueness of the information put into each partition function $\Xi_i$. Furthermore, Hawking radiation from the black holes is collated via a strange attractor collection in the mega-universe structure to form a new big bang for each of the N universes represented by $\{\Xi_i\}$. Verification of this mega-structure compression and expansion of information, with non-uniqueness of the information placed in each of the N universes, favors ergodic mixing treatments of initial values for each of the N universes expanding from a singularity beginning. The $n_f$ value will be obtained using (Ng, 2008) $S_{\text{entropy}} \sim n_f$ [16]. The way to tie in this energy expression, as in Equation (12), will be to look at the formation of a nontrivial gravitational measure as a new big bang for each of the N universes, via $n(E_i)$, the density of states at a given energy $E_i$ for a partition function (Poplawski, 2011) [17]:

$\{\Xi_i\}_{i=1}^{N} \propto \left\{ \int_0^{\infty} dE_i\, n(E_i)\, e^{-E_i} \right\}_{i=1}^{N}$ (13)
Each of the terms $E_i$ identified with Equation (13) above is associated with the iteration for N universes (Penrose, 2006) [15]. Then the following holds; namely, taking a nod to the unpredictability of black hole physics, as given by Hawking in [18], we assert the following claim for the universe as a mixed state, with black holes playing a major part, due to the CCC cosmological picture, starting with

Claim 1:

$\frac{1}{N}\sum_{j=1}^{N} \Xi_j\Big|_{\text{before nucleation}} \;\xrightarrow{\ \text{vacuum nucleation transfer}\ }\; \Xi_i\Big|_{\text{fixed after nucleation}}$ (14)
For N universes, with each $\Xi_j$, for j = 1 to N, being the partition function of each universe just before the blend into the RHS of Equation (14) above for our present universe. Also, each of the independent universes given by $\Xi_j$ is constructed by the absorption of one to ten million black holes taking in energy, i.e. (Penrose) [14] [15]. Furthermore, the main point is similar to what was done in [19] in terms of general ergodic mixing:
Claim 2:

$\Xi_j \approx \sum_{k=1}^{\text{Max}} \Xi_k\Big|_{\text{black holes, } j\text{-th universe}}$ (15)
What is done in Claim 1 and Claim 2 is to come up with a protocol as to how a multi-dimensional representation of black hole physics enables continual mixing of space-time [19], largely as a way to avoid the Anthropic principle as to a preferred set of initial conditions.
Claim 2 is particularly important. The idea here is to use what is known as CCC cosmology, which can be thought of as follows. First, have a big bang (initial expansion) for the universe. After redshift z = 10, a billion years ago, SMBH formation starts. Matter-energy is vacuumed up by the SMBHs, which, at a much later date than today (the present era), gather up all the matter-energy of the universe and recycle it in a cyclic conformal translation, as follows, namely
(16)
(17)
C1 is, here, a constant. Then, the main methodology in the Penrose proposal has been Equation (17), evaluating a change in the metric $g_{ab}$ by a conformal mapping $\hat{\Omega}$ to

$\hat{g}_{ab} = \hat{\Omega}^2 g_{ab}$ (18)
Penrose's suggestion has been to utilize the following [18] [20]:

$\hat{\Omega} \to \hat{\Omega}^{-1}$ (19)
Infall into cosmic black holes has been the main mechanism which the author asserts would be useful for the recycling apparent in Equation (19) above, with the caveat that $\Lambda$ is kept constant from cycle to cycle, as represented by

$\Lambda\big|_{\text{cycle } i} \approx \Lambda\big|_{\text{cycle } i+1}$ (20)
We claim that Equation (20), combined with Equation (11) above, gives a good indication of a uniform mass for a graviton, per cycle, as far as heavy gravity is concerned, provided that Equation (20) holds. Note that all of the above results should be compared with the initial Hamber-based results [2], which lead to an initial idea given in [1], which we duplicate below; i.e. we claim we have kept full fidelity with this program and improved upon it. Quoting from [1]: first of all, we have what is known as a scale factor $a$, which is nearly zero in the Pre-Planckian regime of space-time and equal to 1 in the present era. A good reference as to the physics behind how we set up $a$ is [20] [21]. In addition, we will define, for the purpose of analysis of the integrals, the following symbols, as given in [2], for the quantum-paths-sensitive first integral, with
(21)
These are the purported volume elements of the first integral of [2]. The second first integral uses the usual GR inputs as defined by Padmanabhan in [8] [9]. To review what is meant by first integrals, we refer readers to [22] [23] [24]. Roughly put, a Lagrangian multiplier invokes a constraint on how a “minimal surface” is obtained, by constraining a physical process so as to use the idea of [22] [23] [24], which invokes the minimization of a physical process. In the case of [23], the minimization process implicitly uses $a$ as a scale factor, as defined by Roos [20], and $g_{tt}$ as a time component of a metric tensor, which we will later define. Here, the subscripts 3 and 4 in the volume refer to 3- and 4-dimensional spatial dimensions, and this will lead to us writing, via [2], a 1st integral as defined by [1] [2], in the form, with G the gravitational constant, of a first integral defined, following [1] [2], by
(22)
This should be compared against the Padmanabhan 1st integral [8] [9] of the form below, with the third entry of Equation (3) having a Ricci scalar defined via [5], and usually with the curvature [5] set as extremely small, with the general relativity version being

$S_1 = \frac{1}{2\kappa}\int \sqrt{-g}\, d^4x\, \left(\Re - 2\Lambda\right)$ (23)
End of quote, from [1].
Our presentation uses all of this, and aligns it with the ideas of the Klauder enhanced quantization [3], for what we think is a better extension of the same idea. We claim that what we have done improves upon this idea and is in full fidelity with the FFP 15 presentation, with an additional refinement added. In [1], we make the following argument; Equation (24), Equation (25), Equation (26) and Equation (27) below are from reference [1]:
In order to obtain maximum results, we will state that the following are assumed to be equivalent.
(24)
i.e.
(25)
and
(26)
End of quote from [1]. So, we argue as given in [1], where we have the following. Quote again from [1]:
Simply put, a relationship of the Lagrangian multiplier gives us the following:
(27)
End of quote from [1].
We are obtaining the exact same physics as in [1] when we appeal to Equation (8) as a bound to the enhanced quantization; hence we have extended our basic idea via use of [1] and [3].
So now we will go on to how this affects the mass of black holes.
7. The Effect These Procedures Have on the Available Mass Value of Initial Black Holes
What we will be examining is that we have, from [25], a relic mass for black holes given by Formula (28) below, which would lead to
(28)
If we make use of Equation (11) and assume that we have
(29)
Note then that we will be writing $\Lambda$ as given by Equation (8) above, and we will also define a minimum scale factor, in an argument due to Non-Linear Electrodynamics, in the next section. Before doing that, we will state that we have the following formula for black hole mass:
(30)
We will comment upon how one could restrict the values of Equation (30) so that we have black holes on the order of 100 times the Planck mass. Before doing so, we will comment upon acceptable values for the minimum scale factor.
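For concreteness, the short sketch below converts the black hole masses and region size used in Sections 7-10 into SI units. The 100-black-hole, 100-Planck-mass, 1000-Planck-length scenario is this paper's assumption; the constants are standard CODATA values.

```python
# Unit-conversion sketch for the relic-black-hole scenario assumed in the text.
M_PLANCK = 2.176_434e-8    # Planck mass, kg
L_PLANCK = 1.616_255e-35   # Planck length, m

m_bh = 100 * M_PLANCK          # one relic black hole, ~2.2e-6 kg
m_total = 100 * m_bh           # 100 such black holes, ~2.2e-4 kg
r_region = 1000 * L_PLANCK     # bounce-region radius, ~1.6e-32 m

print(f"single relic BH mass : {m_bh:.2e} kg")
print(f"total nucleated mass : {m_total:.2e} kg")
print(f"region radius        : {r_region:.2e} m")
```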
8. Putting in a Minimum Scale Factor, According to NLED, Showing a Non-Zero Initial Radius of the Universe Due to Non-Linear Space-Time E & M
What we are asserting is that, in [26], there exists a scaled parameter $\lambda$, and a parameter $a_0$ which is paired with $\lambda$. For the sake of argument, we will set the time as being of the order of Planck time, i.e. about $10^{-44}$ seconds. Also, $\Lambda$ is a cosmological “constant” parameter, with, from [26],

(31)
And also set
(32)
Then
(33)
In this situation we make the following assumption
(34)
Here we will be assuming that the second derivative of the scale factor with respect to time, divided again by the scale factor, is very large in the Planckian regime, i.e. that we have a huge initial acceleration of the expansion of the universe. The point to note is that this occurs in the Pre-Planckian to Planckian expansion, and it is for obtaining a small positive value for Equation (34); this also relies upon Equation (2) for the negative contribution to the cosmological “constant” as given above. We will do this so that this work dovetails with Dr. Corda's recent work, given in [25]; but in order to do so, we will refer to, and explain, how we obtained a minimal scale factor from our investigations.
Whenever one sees a coefficient like the magnetic field with the small 0 subscript, i.e. $B_0$, this should be the initial coefficient at the beginning of space-time, which helps us make sense of the nonzero but tiny minimum scale factor [26]:

(35)
The minimum time, as referenced in Equation (33), most likely means that Equation (35) is of the order of about $10^{-55}$, i.e. 33 orders of magnitude smaller than the square root of Planck time, in magnitude. In addition, it is prudent to note that the magnetic field is due to arguments given in [27], on page 21 of that document, which argue in favor of a very substantial B field initially.
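The sketch below reproduces the arithmetic behind the “33 orders of magnitude” statement. Note that the comparison is dimensionally informal, since it treats the square root of Planck time (in seconds) as a pure number, as the text itself does.

```python
# Sanity check: a_min ~ 10^-55 versus sqrt(t_Planck), taken as a pure number.
import math

t_planck = 5.391e-44            # Planck time, s
a_min = 1.0e-55                 # minimum scale factor (dimensionless)

root_tp = math.sqrt(t_planck)   # ~2.3e-22
orders = math.log10(root_tp / a_min)

print(f"sqrt(t_Planck) ~ {root_tp:.2e}; a_min is ~10^{orders:.0f} times smaller")
# -> roughly 33-34 orders of magnitude, matching the statement above.
```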
9. Examining Δt from the Vantage Point of a Minimum Scale Factor Calculation
We first write, using [28], that we have
(36)
To do this, we have that the interpretation of Equation (36) will lead to the following linkage between the minimum scale factor of the Universe and the time derivative of the inflaton field, for the Pre-Planckian regime, about the causal structure as given in Equation (36) above; namely, then
(37)
This is for a minimum time step, t, which in our rewrite is, then,
(38)
(39)
Implying, for a value right at the causal boundary of space-time, i.e. the bounce radii of emergent space-time,
(40)
This Equation (40) should be directly compared with our Equation (29), and our claim is that the two values are, in this case, the same. This uses [29] and [30], with $g_{\ast}$ for the initial degrees of freedom set so that we have a way to fix the early-universe temperature and entropy via the comparison
(41)
This will, if we utilize [29], tie in with a graviton production expression we give as follows, where d is the number of extra dimensions of an assumed Kaluza-Klein space-time:
(42)
In our case, d is initially set equal to zero, and the temperature T is configured so that, if the mass scale M given above is, say, 30 TeV, then we have
(43)
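As a hedged illustration of the scale entering Equation (43), the sketch below converts the assumed mass scale of 30 TeV into an equivalent temperature via the standard relation T = E/k_B; the 30 TeV figure is the text's assumption.

```python
# Converting the assumed 30 TeV mass scale into an equivalent temperature.
K_B_EV = 8.617_333e-5          # Boltzmann constant, eV/K

E_eV = 30e12                   # 30 TeV in eV
T_kelvin = E_eV / K_B_EV

print(f"30 TeV corresponds to T ~ {T_kelvin:.2e} K")
# -> ~3.5e17 K, far below the Planck temperature (~1.4e32 K), i.e. a
#    comparatively 'cold' scale for the graviton production discussed here.
```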
In doing so, we have that the entropy in this case uses Ng's infinite quantum statistics, according to the counting algorithm used by Ng [31]:

(44)
This, according to Ng [31], leads to entropy at the limiting value; if $S \approx N$, this will be modified by having the following done, namely, after his use of quantum infinite statistics,

(45)
In our review, if there are, say, 100 black holes, each of mass $10^2$ times the Planck mass, we obtain roughly an entropy of about $10^3$ initially, i.e. a low entropy of about 10 per each black hole so generated; this will lead to the conclusion which we outline below.
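The entropy count just described can be written out explicitly. The sketch below applies Ng's counting (entropy approximately equal to particle number) to the stated figures of 100 relic black holes and roughly 10 gravitons per black hole, both of which are assumptions of this paper.

```python
# Ng's 'infinite quantum statistics' reduces entropy to a particle count,
# S ~ N [31]; here N is the number of gravitons shed by the relic black holes.
n_black_holes = 100
gravitons_per_bh = 10          # assumed emission count per relic black hole

S_total = n_black_holes * gravitons_per_bh   # S ~ N counting
S_per_bh = S_total / n_black_holes

print(f"total initial entropy  S ~ {S_total}")       # ~10^3
print(f"entropy per black hole S ~ {S_per_bh:.0f}")  # ~10
```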
10. Relevance to Black Hole Production and the Quantum Numbers n and n − 1, Where n Is the Quantum State Used to Penetrate Past the Initially Assumed Shell Bounce Barrier Separating Pre-Planckian and Planckian Physics
Our assumption is that the Lagrangian multiplier is roughly equivalent to a mass which is about 10^4 times the mass of a Planck-sized black hole, i.e. that we have black holes initially produced which are of, say, 10^2 times the Planck mass.
In Corda's recent work [25], we have a so-called horizon volume, where n is the quantum number, put in where the Planck mass is normalized to 1; so then, if there are 10^2 black holes of mass 10^2 times the Planck mass (we will set the Planck mass as 1),
(46)
Here we make the following simplification of Equation (46), to read as
(47)
Our supposition is that there are 10^2 mini black holes, with a mass of 10^2 times the Planck mass per black hole, so that we perform the following normalization, i.e. find n as a quantum number, so that to first approximation
(48)
i.e. at, say, 1000 times the Planck length, we have the beginning of the creation of 100 mini black holes, each of mass about 100 times the Planck mass, which would put a huge restriction upon the admissible value of n, giving a quantum value n for the enhanced quantum perturbation used for penetration of the initial quantum state assumed in this document, as we go from Pre-Planckian to Planckian physics by emergent field construction. Equation (48) could be used to ascertain a quantum value n for the quantization level used to penetrate beyond the shell used to create the cosmological constant modeled in Equation (8), whereas the entire mass of the initially formed black holes, roughly 10^2 times the Planck mass, would also be scaled to Equation (27).
The idea of using the Corda result [25] would be to delineate the quantum value n of relic black holes, and a quantum state commensurate with penetrating between the Pre-Planckian and Planckian physics regimes.
Secondly, if there are, say, 10 gravitons produced per relic black hole, and 100 relic black holes in a sphere of about 1000 Planck distances in radius, we can, by Ng's infinite quantum statistics [32], as has been done by the author time and again (entropy as a counting algorithm) for black holes creating entropy, use the above procedure to estimate an initially generated entropy of the order of 10^3 in the immediate aftermath of black hole production, with n, as calculated by Equation (48), used for the production of initial entropy. We argue that all of the above will, if we equate the nucleated mass of the 100 relic black holes as proportional to Equation (27), lead to an integrated version of the initial mass-energy for conditions in which the initial cosmological constant of our present universe is set.
11. Conclusion: Future Work Projects and Extension of These Results
A serious work project would be to examine the role and implications of Equation (11) as compared to the extensive LIGO bounds on graviton mass. LIGO, in [33] [34] [35] [36], has extensively outlined the physics of experimental gravitational wave astronomy; in particular, [32] has outlined how theoretical predictions of 5th force models may overlap with the results of [36].
For our purposes, reference [36] has the following wavelengths of purported GW, i.e. from 10^12 kilometers to an outsized 10^22 kilometers in length, which yield staggeringly low GW frequencies; a parsing of either the upper or the lower bound of these values, which has a range of 10^6 in variance, has to be determined and worked out. i.e. the task of future LIGO work should be to cut down to a minimum upper bound which may be experimentally confirmed, and which may give a lower variance than what is given on page 12 of [36].
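To make the “staggeringly low” frequencies explicit, the sketch below converts the quoted wavelength window into frequencies via f = c/λ; the 10^12 km and 10^22 km endpoints are the values attributed to [36].

```python
# Converting the quoted graviton-wavelength window into GW frequencies.
C = 2.998e8                          # speed of light, m/s

for lam_km in (1e12, 1e22):
    lam_m = lam_km * 1e3             # kilometers -> meters
    print(f"lambda = {lam_km:.0e} km -> f = {C / lam_m:.1e} Hz")
# -> ~3e-7 Hz for 10^12 km and ~3e-17 Hz for 10^22 km, both far below
#    the ~10^2 Hz band where ground-based LIGO is directly sensitive.
```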
The physics choice of what is the optimal range of admitted GW wavelengths will seriously impact a future study of permitted inputs into Equation (11) above; we hope, in addition, that [37] and [38] are given due consideration as to elucidating the proper bounds on graviton mass, and as to what constitutes a suitable construction for examining the questions of the origins of gravity which Corda has brought up, this being a particular research interest of Dr. Corda.
Finally, in [39], on pages 63 to 68 of that manuscript, there is a study as to the creation of primordial black holes. The authors of [39] have presumed data as to particle production due to black holes created during primordial conditions, as well as deliberations as to their type. This discussion also includes gravitons, with the difference that the gravitons in [39] are presumed to be MASSLESS.
A worthy project would be to revisit the assumptions given in [39] as to graviton production by black holes, but to include instead a small graviton mass in the emerging black hole production of gravitons.
This can also be compared with the considerations given in Padmanabhan's long article [40], where he states in his conclusion, on page 100, the following:
Quote
The existence of degenerate vacua introduces an additional feature as regards the cosmological constant [35] . The problem arises from the fact that quantum theory allows tunneling between the degenerate vacua and makes the actual ground state a superposition of the degenerate vacua. There will be an energy difference between: 1) the degenerate vacua and 2) the vacuum state obtained by including the effects of tunneling. While the fundamental theory may provide some handle on the cosmological constant corresponding to the degenerate vacua, the observed vacuum energy could correspond to the real vacuum which incorporates the effect of tunneling. In that case it is the dynamics of tunneling which will determine the ground state energy and the cosmological constant.
End of quote
The author submits that a similar project, involving primordial relic black holes, would be more effective as to the construction of degenerate vacua, as a way of making real the ideas Thanu Padmanabhan [40] has as to the cosmological constant. We stress that this redo would be most helpful in getting a bound upon the cosmological constant.
If we can also gain non-zero entropy counts for the initial universe, this would also be a connection with [16], a graviton production model, our choice of what a primordial black hole should look like, and perhaps resolutions as to the nature of gravity itself, i.e. answering the questions Christian Corda raises in [37] and [38].
Note that in [41], Valev has the relationship between a purported graviton mass and its wavelength as

$\lambda_g = \frac{h}{m_g c}$ (49)
i.e. the smaller the graviton mass, the larger the wavelength, which then, via Equation (11), puts the following constraint upon the cosmological constant:

$\Lambda = \frac{3}{2}\left(\frac{m_g c}{\hbar}\right)^2 = \frac{6\pi^2}{\lambda_g^2}$ (50)
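Combining Equation (49) with the Novello relation of Equation (11) gives Λ = (3/2)(m_g c/ħ)² = 6π²/λ_g²; the sketch below evaluates this at both ends of the wavelength window quoted from [36]. Since Equations (11), (49) and (50) are reconstructed forms, this numerical illustration inherits that caveat.

```python
# Evaluating the reconstructed Lambda(lambda_g) constraint of Equation (50)
# at both ends of the wavelength window attributed to [36].
import math

H = 6.626_070_15e-34    # Planck constant, J*s
HBAR = H / (2 * math.pi)
C = 2.998e8             # speed of light, m/s

for lam_km in (1e12, 1e22):
    lam = lam_km * 1e3                        # km -> m
    m_g = H / (lam * C)                       # Eq. (49), kg
    Lam = 1.5 * (m_g * C / HBAR) ** 2         # Eq. (11) inverted, 1/m^2
    print(f"lambda_g = {lam_km:.0e} km: m_g = {m_g:.1e} kg, "
          f"Lambda <= {Lam:.1e} 1/m^2")
# The observed Lambda ~ 1.1e-52 1/m^2 corresponds to lambda_g ~ 10^24 km,
# just beyond the long-wavelength (10^22 km) end of this window.
```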
Taking into account what was said about LIGO as to the range of purported frequencies, we repeat the following:
Quote
For our purposes, reference [36] has the following wavelengths of purported GW, i.e. from 10^12 Kilometers to an outsized 10^22 Kilometers in length, which yield staggeringly low GW frequencies, whereas a parsing of either the upper or lower bound of these values has to be rigorously ascertained, and this in turn will affect our rendition of Equation (11) above, profoundly.
End of quote
Keep in mind that inflation is usually understood to have a definite number of e-folds of expansion, i.e., by convention, an e-fold number of 60 or more, for a minimum radial expansion [42] of the “universe” from the beginning to the end of inflation of the order of 10^27 times, whereas the total expansion from a Planck length to the present era is of the order of about 10^80 or more.
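As a quick check of the e-fold bookkeeping above: N e-folds correspond to an expansion factor of exp(N), so “an e-fold number of 60 or more” gives a radial stretch of order 10^26 to 10^28, bracketing the quoted 10^27.

```python
# e-fold arithmetic: expansion factor = exp(N), expressed as a power of ten.
import math

for n_efolds in (60, 62, 65):
    factor = math.exp(n_efolds)
    print(f"N = {n_efolds}: expansion factor ~ 10^{math.log10(factor):.1f}")
# -> roughly 10^26.1, 10^26.9 and 10^28.2 respectively.
```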
If the lower figure of radial expansion of 10^27 is assumed to be viable, and if there is quintessence, i.e. variation in the cosmological constant, perhaps due to temperature initially, then the effects upon expansion should be profound.
This, among other things, may lead to a phenomenological investigation of the mechanism of inflation.
Needless to say, the implications, if this is examined, should be seriously considered; i.e., at a minimum, shrinking the wavelength from 10^12 kilometers, or say 10^22 kilometers, for $\lambda_g$, would have profound implications which need to be ascertained, as we are looking at relic conditions for gravitation.
Keep in mind that there is a well-developed theoretical construction for massive gravitons, and that what we are doing is, for now, dimensional analysis; but some people think that massive gravitons can be conflated with Dark Matter [43], whereas the author is more in favor of [44]; i.e. our construction of relic mini black holes, which may be shedding gravitons initially, may in fact be a mechanism for creating DE and for aiding the expansion of the universe.
Acknowledgements
This work is supported in part by National Natural Science Foundation of China grant No. 11375279.