Several Ways to Calculate the Universal Gravitational Constant G Theoretically and Cubic Splines to Verify Its Measured Value

In 1686, Newton discovered the laws of gravitation [1] and predicted the existence of the universal gravitational constant G. In 1798, Cavendish [2] measured it with a torsion balance. Due to the low intensity of gravitation, it is difficult to obtain reliable results because they are disturbed by surrounding masses and environmental phenomena. Modern physics is unable to link G with other constants. However, in a 2019 article [3], with a new cosmological model, we showed that G seems related to other constants, and we obtained a theoretical value of G. Here, we want to show that our theoretical value of G is the right one by interpreting measurements of G with the help of a new technique using cubic splines. We make the hypothesis that most G measurements are affected by an unknown systematic error which creates two main groups of data, and we extract a measured value of G from the splines. Knowing that our theoretical value of G is in agreement with this measured value, we want to establish a direct link between G and as many other constants as possible to show, with 33 equations, that G is probably linked with most constants in the universe. These equations may be useful for astrophysicists who work in this domain. Since we have been able to link G with the Hubble parameter H 0 (which evolves over time, since its inverse gives the apparent age of the universe), we deduce that G is likely not truly constant. Its value probably varies slowly in time and space. However, at our location in the universe and over a relatively short period, this parameter may seem constant.


Introduction
The universal gravitational constant G (also called Newton's gravitational constant) has a special character because it is considered to be one of the 3 most fundamental constants in physics since no model allows its value to be deduced from other known constants. Its value is used in Newton's equation [1] and that of Einstein's general relativity [4]. It is one of the least well-known constants despite all current technological means.
In Newton's equation of gravity [1], the attractive force F between two masses m 1 and m 2, separated by a distance r, depends on G, which acts as a coupling coefficient:

F = G m 1 m 2/r^2 (1)

In Einstein's equation of general relativity [4],

R μν − (1/2)R g μν + Λ g μν = (8πG/c^4) T μν (2)

R μν is the Ricci curvature tensor, R is the scalar curvature, g μν is the metric tensor, Λ is the cosmological constant, G is Newton's gravitational constant, c is the speed of light in a vacuum, and T μν is the stress-energy tensor.
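To give a sense of scale, a minimal numerical sketch of Newton's law (the value of G used here is a rounded CODATA-level figure, not the theoretical value derived later in this paper):

```python
# Newton's law of gravitation: F = G m1 m2 / r^2.
G = 6.674e-11   # m^3 kg^-1 s^-2 (rounded CODATA-level value)

def gravitational_force(m1, m2, r):
    """Attractive force in newtons between point masses m1, m2 (kg) at distance r (m)."""
    return G * m1 * m2 / r**2

# Two 1 kg masses 1 m apart attract with a force numerically equal to G.
print(gravitational_force(1.0, 1.0, 1.0))
```

The force between two 1 kg masses one meter apart is only about 6.7 × 10⁻¹¹ N, which illustrates why laboratory measurements of G are so easily disturbed by surrounding masses.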
Several attempts to measure G have been made over time. Even if the recent measurements show small margins of error, they do not always overlap. As early as 1995, physicists suggested that certain measurements of G may be tainted with systematic errors [5]. Our article will also show that G could evolve over time and not be a real constant.
With a new cosmological model, G is obtained (see Equation (31) further) as a function of the speed of light in a vacuum c, the fine-structure constant α, and the parameters of the electron (mass m e and classical radius r e ) [3]. To help the reader, we will summarize the theory that is behind this equation.
To validate the theoretical value of G found in the past, we wish, as a first step, to list the results of all the recent experiments aimed at measuring it. Using mathematical tools and software, the data will be processed to determine an estimate of G. It will be shown that there is a slight difference between the CODATA (Committee on Data for Science and Technology) value and the theoretical value of G, and we will explain why.
As a second step, we will enumerate 33 different equations giving G. This exercise is useful, among other things, to obtain equations that bypass the difficulties of experimental measurement, as well as to show that G is intimately linked to the other parameters of the universe.
As a third step, we will show that G is not constant and why it varies with time as well as with the location in the universe where it is measured.

Summary of Our Theory
Our theory is based on a 2019 paper [3]. Our cosmological model is used to derive all the equations. We will recall its main milestones.

Our Cosmological Model
The universe is made of two 4-D expanding spheres, one nested inside the other.
The smallest one is the "material universe" and the largest is the "luminous universe". At the Big Bang, there was only one sphere. After about 361,000 years, the lower density of the universe allowed the movement of electrons. The universe became transparent, and light appeared and began to travel through space, creating the sphere of the "luminous universe". However, matter cannot travel as fast as light, and it created a smaller sphere, the "material universe", which is nested in the "luminous universe".
Einstein found that the speed of light is slower near massive objects [7]. Using general relativity, Schwarzschild calculated the speed of light in the context of a weak gravitational field (|Φ| ≪ c^2) around a spherical mass [8]. With his equation, we obtain Equation (3), which gives the speed of light v L as a function of c and a local refractive index n 0. The value of n 0 is a function of the Newtonian gravitational potential Φ, which is itself a function of G and the distance r from the center of a mass m.
For a distance r from the center of mass m, the speed of light c is reduced by the refractive index n 0 caused by a gravitational field of potential Φ, which gives a modified speed of light v L (r). Locally, in space and time, the speed of light seems constant and equal to c. However, Hubble found that the universe is expanding. Its density is decreasing, the refractive index of the vacuum is decreasing and all this causes light to be imperceptibly accelerated.
In Equation (3), c is the local speed limit for light in a vacuum. Admitting that light may slowly accelerate in the context of an expanding universe, it will eventually tend towards an asymptotic speed limit called k, affected by a local refractive index n. From Equation (3) we build the analogous Equation (4). We are currently at a distance r u from the center of the apparent mass m u of the universe. The actual speed of light c is the result of Equation (4).
The apparent radius of the curvature of the luminous universe is R u . It is "apparent" because the equation of R u assumes a constant speed of light c over time, for a time equal to the apparent age of the universe. However, in our model, the speed of light is not constant over time. It is c in the present moment, but since the universe is expanding, its value was lower in the past.
According to Carvalho [9], mass m u is given by Equation (5).
The Hubble constant H 0 represents the relative rate of expansion, in km·s⁻¹·Mpc⁻¹, of the visible universe [10]. It is equivalent to locally measuring the derivative of the velocity of matter v m with respect to distance r. Matter moves locally at a rate β times slower than the speed of light c while moving away radially from the center of mass of the universe. Locally, at our location in the universe, H 0 is evaluated at a distance r = r u (which represents a fraction β of R u). The value of β will be calculated later, in Equation (11).
The apparent radius of curvature R u of the universe [3] [11] (also called Hubble radius [12]) can be determined as a function of c and H 0 . For now, we are at a distance r u from the center of mass m u of the universe.
For a distance r u , our local universe parcel travels at speed v m .
The measurement of H 0 is made by observing the global displacement of galaxies at our location r u. Each galaxy has its own movement. Due to the expansion of the universe, the galaxies are moving away from each other. The value of H 0 represents the derivative of the speed of the material universe v m with respect to the element of distance dr, evaluated at r = r u.
The asymptotic value for the speed of light in a vacuum is represented by k when the apparent radius of curvature R u of the luminous universe tends towards infinity. The ratio between the speed of expansion of the material universe and the speed of expansion of the luminous universe (which is the speed of light) is the geometric value β. It can also represent the ratio of the apparent radius of curvature r u of the material universe (evaluated at our location in the universe with respect to the center of mass of the universe) and the apparent radius of curvature R u of the luminous universe. The value of m u represents the apparent mass of the universe. The value of β is unique to our cosmological model, but it is essential for making multiple connections between physics' constants. It allows making several links between the infinitely large and the infinitely small in Dirac's large numbers hypothesis [13] [14].
The precisions of m u, r u, and R u depend directly on the precision of H 0. According to different sources, H 0 lies between 67.8(9) [15] and 77.6 (+4.8/−4.3) km·s⁻¹·Mpc⁻¹ [16]. The uncertainties do not always overlap. We must find a method that yields at least an unequivocal estimate.
As the average temperature T of the CMB (Cosmological Microwave Background) can be precisely measured, an exploitable link may be made between T and H 0 . It suffices to theoretically calculate T as a function of H 0 . Its accuracy will now depend on G. For more details, please, see reference [3].
We obtain Equation (15) which gives the average CMB temperature T.

Dirac's Large Number Hypothesis
Dirac found that ratios of quantities having the same dimensions yield large numbers that fall into a few distinct orders of magnitude. However, he could not calculate them precisely [13] [14]. All the ratios that we found may, with certain factors, be derived from a single number N [3] that represents the maximum number of photons of lowest energy (of wavelength 2πR u) [3]. To get N, we calculate the mass m ph associated with a photon of wavelength 2πR u by equating its corpuscular and wave energies:

m ph = h/(2πR u c) ≈ 2.74 × 10⁻⁶⁹ kg

The large number N is obtained by dividing m u (from Equation (5)) by the mass m ph associated with photons of wavelength 2πR u (Equation (17)).
If we try to make a precise calculation of N by using Equations (5), (7), (16), and (17), we get Equation (19), which is mainly dependent on T. We evaluate the result by using CODATA 2014 [6] and the CMB temperature from the Cobra probe [17]. We note that N is dimensionless, like α.
Assuming that α can be used as a scale factor applied a certain number of times, we postulate Equation (20). For now, it is impossible to get this equation from other equations of the standard physics.
In the following equations, Planck temperature is about T p ≈ 1.42 × 10 32 K.
This is the highest temperature that can be reached in the universe if we condense the mass m u into a point-like sphere of radius equal to Planck length L p.
We also think this was the initial temperature of the universe at the Big Bang.
The value of q p is the Planck charge, which is about q p ≈ 1.88 × 10⁻¹⁸ C. "Large" numbers are of the type N raised to a fractional exponent, such as N^1/2, N^1/3, N^1/4, …, N^1/57, or N^2/3, N^3/4, etc. These are obtained in different ways by using different parameters of the universe [3]. Some come from Dirac's assumption on large numbers [13] [14]. Some links (used later) are recalled here [3].

Precise Calculations of G, H 0 , and T
To precisely calculate G, we need an equation that is independent of H 0 and T since we do not know them with sufficient precision. Most of the time, G intervenes in the calculations of gravitational energy and gravitational force. We will not show all the details (refer to [3]), but let us consider an experiment where we evaluate the electrical energy E e between two electrons separated by a distance equal to the classical electron radius r e. The electrical energy E e is independent of the distance since we get E e = m e c^2. We make another experiment to evaluate the gravitational energy under the same conditions. As in Equation (20), we found that the fine-structure constant α plays a role in determining orders of magnitude. By adjusting the exponent of α, we obtain a result identical to that of Equation (29).
We conclude that Equations (29) and (30) agree. Moreover, reference [20] showed that the tolerances of many measurements of G do not overlap with each other. We associate the energy of the electron mass m e with the wave energy. With Equations (20), (31), and (32), we obtain Equation (33).
The result is similar to Equation (16) obtained with the Cobra probe [17] and confirms Salvatelli's value [18]. It is in agreement with the Cobra probe value [17] of T ≈ 2.736(17) K.
With a cosmological model and 2 postulates, we found theoretical equations that give G, H 0 , T, and N. With standard equations of physics, other equations may be found, such as Equations (21) to (28).

Experimental Measurements of the Universal Gravitational Constant G
In 1798, G was measured by Cavendish using a torsion balance [2]. During the following centuries, several different methods were used with ever more refinements to try to circumscribe the value of G. Despite recent technological advances and the precision achieved, the different results do not always overlap.
Our goal is to bring out a better value to estimate G.
The main purpose of this document is to show that the calculated theoretical value [3] of G (Equation (31)) can be validated by an adequate interpretation of the measured values thanks to a graph using cubic splines. The theoretical value of G is described as a function of the characteristics of the electron (the classical electron radius r e and the mass m e), the fine-structure constant α, the speed of light c, as well as a constant β from a cosmological model. The value of G in Equation (31) becomes interesting for scientists if it can be verified by concrete measurements. A theoretical value obtained as a function of other, more precise parameters makes it possible to overcome the difficulties inherent in the measurements of G. Despite all the efforts made, even the recent measured values of G do not all agree with each other and have relatively high margins of error. So, a workaround is needed.
In Table 1, we have identified 32 relatively recent results (since 1930). Some of these values are statistical results of other measurements. We kept them to compare their values with the real measurements.

List of methods used: STAT = Statistical results from various sources; TS = Time-of-swing; AAF = Angular acceleration feedback; BB = Beam balance; FDEC = Free deflection and electrostatic compensation (Servo); FPC = Fabry-Perot cavity with pendulums; AI = Atom interferometry; EC = Electrostatic compensation; TB = Torsion balance.

*The values of 6.745 ± 0.005 × 10⁻¹³ m³·kg⁻¹·s⁻² (should be 10⁻¹¹) in the abstract and 6.745 ± 0.0008 × 10⁻¹¹ m³·kg⁻¹·s⁻² in Sagitov's text are erroneous [44] because the digits after the decimal point and the tolerance are wrong. They may be corrected by averaging the table values from the original Russian article.

To reduce the uncertainties, the results in Table 1 inevitably come from statistics (medians, averages, weighted averages, etc.) on several repeated experiments. Unfortunately, if the data are biased, an average of several measurements will only increase the repeatability of the measured means, always with the same bias. When the data are biased without anyone knowing it, and several researchers display very small tolerances that do not overlap, they probably display their good repeatability, but not the actual tolerances.
We are well aware that some results in Table 1 are measurements mixed with results from authors who preceded them. We confess that it was not always easy to keep only the genuine results of each author, since they were sometimes incorporated in statistics involving measurements from other researchers. We did not want to remove these values either, since they may harbor important information. We count on the fact that, if these values are averages mixed with other data, they will not change the global average. Likewise, they should not have a noticeable effect on the position of the two peaks that we will look for. They may change the height of these peaks, but this information is not what we look for. Another effect of these statistics concerns the calculation of the tolerance of G in Equation (37). However, it is easy to remove their effect by omitting them when we calculate the square root of the number of data in the denominator. If we remove all data which come from statistics (10 in total), 22 usable measurements remain for the calculation of the final tolerance.
If all the environmental variables had always been taken into account, the various experiments would probably all have led to the same result. Without doubting the incredible efforts that scientific teams have made to obtain the best possible results by reducing the different sources of error, it can be seen that several margins of error from different results do not overlap. This is an embarrassing situation which shows that some people may be wrong and some disturbing parameters are not always taken into account. Since the results come from completely different experiments, it is possible to believe that the parameters that were not taken into account are probably different from one experiment to another.
Of course, we may reduce the measurement error by averaging all the results, hoping that the systematic errors made in the positive direction are compensated by systematic errors made in the negative direction. By averaging the 32 data in Table 1, we calculate the mean value G m. We also compute the median value G md. For a very large number of samples from a symmetric distribution, the median is expected to equal the mean. Here, the difference between the mean and the median is sufficiently marked for us to suspect that the spread of the data is not simply Gaussian. Consequently, we consider that despite the precisions displayed for certain values, some of them are tainted with errors. When thoroughness and precision are the order of the day, there are multiple sources of error, and some systematic causes can sneakily intrude on experiments and be overlooked. Some of these sources have been eliminated over time. For example, G measurements were once done in the air, while they are now done in a vacuum to eliminate the effects of air agitation.
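The mean/median comparison described above takes only a few lines; the sample below is hypothetical (the 32 real values are in Table 1), so only the procedure, not the numbers, reflects the paper:

```python
from statistics import mean, median

# Hypothetical sample of G measurements, in units of 1e-11 m^3 kg^-1 s^-2.
# The paper's actual data set is the 32 values of Table 1.
g_values = [6.6726, 6.6742, 6.6739, 6.6754, 6.6730, 6.6741, 6.6743]

g_mean = mean(g_values)
g_median = median(g_values)

# A marked gap between mean and median suggests a non-Gaussian
# (possibly bimodal) spread of the measurements.
gap = abs(g_mean - g_median)
print(f"mean={g_mean:.5f}  median={g_median:.5f}  gap={gap:.5f}")
```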
Mathematical tools do not always highlight the possibility that certain types of systematic errors can be committed, sometimes in the positive direction, sometimes in the negative direction. An attempt to highlight the systematic errors can be made by scanning the abscissa of Figure 1. For each value of the abscissa, we count the number of tolerance ranges intercepted by a line perpendicular to the axis. We thus construct a graph (see the graph at the top of Figure 1). This graph tells how many measurements overlap for each value of G. The higher the number on the ordinate, the higher the chances of overlapping with other measurements of G. In an ideal world, we would like to overlap with all 32 measurements. However, we already know that not all measurements overlap.
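The overlap-counting scan of the abscissa can be sketched as follows (the (value, tolerance) pairs are hypothetical placeholders for Table 1):

```python
# Hypothetical (value, tolerance) pairs standing in for Table 1,
# in units of 1e-11 m^3 kg^-1 s^-2.
data = [(6.6726, 0.0005), (6.6742, 0.0003), (6.6739, 0.0010),
        (6.6754, 0.0004), (6.6730, 0.0006)]

def overlap_count(g, measurements):
    """Number of tolerance ranges [v - e, v + e] that a vertical line at g intercepts."""
    return sum(1 for v, e in measurements if v - e <= g <= v + e)

# Scan the abscissa on a fine grid, as for the graph at the top of Figure 1.
lo = min(v - e for v, e in data)
hi = max(v + e for v, e in data)
steps = 1000
grid = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
counts = [overlap_count(g, data) for g in grid]
print("max overlaps:", max(counts))
```

The resulting `counts` profile is the raw, step-like curve that the cubic spline is later asked to smooth.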
From a mathematical point of view, we have no control over where the nodes will be placed by the software. They are supposed to be placed in such a way as to reduce the sum of the squares of the errors. Also, the shape of the spline is like a flexible ruler that we force to pass through 5 nodes chosen by the software. It is supposed to reconstruct a curve that minimizes the tension in the ruler.
This graph highlights that there are two predominant groups of data. There may be more, but visually, there seem to be two main ones.
The first group (G a) seems to have fewer members. The latest CODATA values (2014 and 2018) seem to be in the second group (G b). However, 32 different experiments do not represent a very large sample. Having more data orbiting around the second group does not necessarily mean that the average value of the first group is less valid, but it moves the mean value of G towards the second group G b.
We make the hypothesis that the second group G b may be biased by the fact that there have been important publications of the G value every 4 years in the CODATA since its creation in 1966. Many measurement trials may have been abandoned or redone until the experimenters got results in the same ballpark as the "official" values posted in the different versions of the CODATA. Since the same unknown systematic source of error may apply to the data in the positive or the negative direction, it seems reasonable to us to give as much importance to G a as to G b.
A data set made of 32 experiments is not huge. We think that with a larger unbiased sample, we could very well have as much data around the first group as around the second group. Of course, we can assume that some rare experiments do not make any systematic error, and their values would lie between the two predominant groups.
The graph at the top of Figure 1 is made up of rough "jumps". To smooth this graph, we pass a cubic spline function through the data set. Cubic splines have interesting properties. They are made of a set of control points called "nodes". There can be as many nodes as desired, as long as there are at least three nodes to get two segments. Between two nodes, the curve is made of third-degree polynomial segments that allow flexible and versatile twists. The most complex continuous curves may be approximated with splines if they are decomposed into enough segments of small size. To ensure the smoothness of the curve at each node, adjacent segments share the same first and second derivatives. In Figure 1, if there were an infinite number of data, the curve would be continuous with no discontinuities. This is the main reason why we use cubic splines to reconstruct an ideal smooth curve. We mention that other types of smoothing curves could have been used. A simple 5-degree polynomial would also allow us to show 2 bumps. However, it would tend to have uncontrolled behavior at each end and would not fit well on the type of curve we have.
We will force the cubic spline curve to reveal only two bumps, whatever their heights and wherever they are. To achieve this goal, we set only 5 nodes (2 for the ends, 2 for the peaks, and one for the hollow between the peaks). The positions of the nodes are free to move to reduce the least-squares error.
To eliminate any subjectivity in the position of the nodes, we rely on a curve generated automatically by software. Such software, written in Delphi 3.0 (advanced Pascal), is available in Annex A.
In this software, a list is made from the 32 values of G and their corresponding tolerances in Table 1. The nodes of the cubic spline curve can take any real value, positive or negative. Negative values may be required to construct the cubic spline curve at its ends. They have no counterpart in the physical world; however, when a value is negative, the probability of crossing an error range is almost zero.
By iterating to find the best positions for the 5 nodes, the software reduces the sum Σ e of the least squares of the errors between the values n i and the values from the cubic spline curve.
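The mechanics of the spline evaluation and of the least-squares objective can be sketched in Python (the paper's actual software is the Delphi program of Annex A; the node positions and data below are hypothetical, and the node-placement iteration itself is not shown):

```python
def natural_cubic_spline(xs, ys):
    """Interpolating natural cubic spline through the nodes (xs, ys):
    third-degree polynomial segments sharing first and second
    derivatives at each node, with zero curvature at both ends."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    # Tridiagonal system for the second derivatives m[0..n] (m[0] = m[n] = 0).
    a = [0.0] * (n + 1)
    b = [1.0] + [2.0 * (h[i - 1] + h[i]) for i in range(1, n)] + [1.0]
    c = [0.0] * (n + 1)
    d = [0.0] * (n + 1)
    for i in range(1, n):
        a[i] = h[i - 1]
        c[i] = h[i]
        d[i] = 6.0 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
    for i in range(1, n + 1):          # Thomas algorithm: forward sweep
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    m = [0.0] * (n + 1)
    m[n] = d[n] / b[n]
    for i in range(n - 1, -1, -1):     # back substitution
        m[i] = (d[i] - c[i] * m[i + 1]) / b[i]

    def f(x):
        i = n - 1
        for j in range(n):
            if x <= xs[j + 1]:
                i = j
                break
        t, u = xs[i + 1] - x, x - xs[i]
        return (m[i] * t**3 + m[i + 1] * u**3) / (6.0 * h[i]) \
            + (ys[i] / h[i] - m[i] * h[i] / 6.0) * t \
            + (ys[i + 1] / h[i] - m[i + 1] * h[i] / 6.0) * u
    return f

# Five hypothetical nodes shaping two bumps with a hollow in between
# (two ends, two peaks, one hollow), as in the paper's setup.
nodes_x = [6.670, 6.6727, 6.6735, 6.6743, 6.676]
nodes_y = [0.0, 9.0, 4.0, 14.0, 0.0]
spline = natural_cubic_spline(nodes_x, nodes_y)

# Least-squares objective against hypothetical overlap counts n_i.
data = [(6.6725, 8.0), (6.6735, 5.0), (6.6744, 13.0)]
sum_e = sum((n_i - spline(g)) ** 2 for g, n_i in data)
print(round(sum_e, 6))
```

A full reproduction would wrap this objective in an iteration that moves the 5 nodes until Σ e no longer decreases.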
When it is no longer possible to iterate, the value of G is obtained by finding the ideal value for the measurement of G, which lies right between G a and G b, since we assumed that the systematic error is the same in the positive and negative directions. This corresponds to averaging G a and G b. These values correspond to the values of G at the two cubic spline curve peaks and should give the following e max tolerance.
The values e i correspond to the tolerances associated with the data in Table 1. The number of data in Table 1 corresponds to i max = 32. From these data, we remove all statistical results to get a total of n = 22 real measurements. The sum of the least squares found with these values is as follows:

Σ e = 202.827574
Once the 5 points are found and it is no longer possible to iterate to reduce the error, we find the 2 values of G which correspond to the two peak values G a and G b of the cubic spline curve (see Figure 1).
Since we want to give as much weight to the first group (corresponding to the peak reached in G a) as to the second (corresponding to the peak reached in G b), we average the values of G between the two peaks. By averaging these 2 values, we get the value of G (rounded to the sixth decimal). The tolerance of ±0.000050 × 10⁻¹¹ m³·kg⁻¹·s⁻² comes from Equation (37).
However, the measured value of G in Equation (42) is now well centered on the theoretical value of G (Equation (31)) within ±5 ppm (parts per million). We would like to draw attention to the fact that the second peak value G b of the cubic spline curve is close to the value presented by CODATA 2018, and especially to the value presented by CODATA 2014 (see Table 1). The process using the cubic spline curve seems to show that the measurements from recent years tend to cluster around the second peak value (G b). Many experiments probably commit a systematic error. Even if it seems likely that there is a systematic error, it is not possible, for the moment, to know what this error is.
Since we want to give as much importance to the first group as to the second, we average the two peak values (G a and G b ). This value that comes from measurements is roughly the same as the theoretical G in Equation (31).
Although our sample of 32 data stemming from different experiments is quite limited, it seems to allow us to highlight the possible existence of a systematic error. Unless there are major advances, the source of this error may remain unknown, since it differs from one experiment to the next. However, we think that the value of the error is imputed to the data sometimes positively and sometimes negatively. The values then vary around an average value which should be the theoretical value of G of Equation (31).
The experiments mentioned in Table 1 were carried out using different methods. Some involve different materials, masses of different values, and different distances. At very short distances, the Casimir effect and electrostatic and magnetic forces may come into play and distort the results if they are not taken into account. Most of the experiments are now carried out under vacuum and at a controlled temperature to avoid disturbances due to the agitation of the air. When trying to reach very high accuracy, even vehicles in the street, the tides, the Moon, and the Sun may affect the results.
It is really difficult to shield laboratories from all perturbing sources. Not knowing all the details of the setups and environments used for each of the experiments, and knowing that each setup has different sources of error, it is very difficult to point to a specific source of systematic error. It may never be possible to find it. However, the difference between one of the two peak values of the cubic spline curve and the central value gives a good idea of the magnitude of the systematic error. This value could help a research team to find the source of this error.
For now, G appears to be constant. But is it? We will analyze this point in detail once we have stated the different equations of G.

A Reminder of Different Useful Identities
Currently, our metrology system considers G as one of the 3 fundamental constants of physics. Based on modern physics, no model makes it possible to obtain G as a function of the other constants. However, recently [3], thanks to a new cosmological model, we have shown that G can be obtained as a function of the speed of light in a vacuum c, the fine-structure constant α, and the parameters of the electron (mass m e and classical radius r e).
To avoid repeating everything unnecessarily, we wish here to recall different identities which will be used later to determine several equations of G.
Let's start by listing the different Planck units that will be useful to us.
Planck mass m p is determined, as a standard, as a function of Planck constant h, G, and the speed of light in a vacuum c.
Planck time t p is determined, as a standard, as a function of Planck constant h, the universal gravitational constant G, and the speed of light in a vacuum c. It can also be determined from Planck length L p:

t p = √(hG/(2πc^5)) = L p/c ≈ 5.39 × 10⁻⁴⁴ s

Planck length L p is determined, in a standard way, from the same parameters or from Planck time t p with the following equations.
Planck charge q p is determined, in a standard way, as a function of the preceding parameters, the fine-structure constant α and the vacuum permeability μ 0 .
It can also be determined as a function of the vacuum permittivity ε 0 or from the electrical charge q e of the electron.
Note that unlike most Planck units, Planck charge is not defined, in a standard way, using G and h. However, using Equations (31), (46), (50), and (51), we can establish a relation of Planck charge q p as a function of G and h.
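The standard Planck-unit definitions recalled above can be checked numerically; this sketch hardcodes CODATA-level constants as given inputs and does not use the paper's Equation (31):

```python
import math

# CODATA-style constants (SI), treated here as given inputs.
h     = 6.62607015e-34      # Planck constant, J s (exact since 2019)
hbar  = h / (2 * math.pi)
c     = 2.99792458e8        # speed of light in vacuum, m/s (exact)
G     = 6.67430e-11         # CODATA 2018 value, m^3 kg^-1 s^-2
eps0  = 8.8541878128e-12    # vacuum permittivity, F/m
alpha = 7.2973525693e-3     # fine-structure constant
q_e   = 1.602176634e-19     # elementary charge, C (exact)

m_p = math.sqrt(hbar * c / G)      # Planck mass, ~2.18e-8 kg
t_p = math.sqrt(hbar * G / c**5)   # Planck time, ~5.39e-44 s
L_p = math.sqrt(hbar * G / c**3)   # Planck length, ~1.62e-35 m
q_p = math.sqrt(4 * math.pi * eps0 * hbar * c)   # Planck charge, ~1.88e-18 C

# Unlike m_p, t_p and L_p, the Planck charge needs neither G nor h:
# it also equals q_e / sqrt(alpha).
print(q_p, q_e / math.sqrt(alpha))
```

The last line illustrates the point made above: q p can be defined without G and h, since it also equals q e/√α.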
The fine-structure constant α is linked to the Rydberg constant R ∞ and to the mass of the electron m e by the following equation: The charge of the electron is determined based on the mass of the electron m e , the classical electron radius r e , and the permeability of the vacuum μ 0 .
Thanks to the wave-particle duality, it is possible to link the energy of the mass m e of an electron (left part of Equation (50)) to the energy of the wave associated with it (right part from Equation (50)).
The speed of light c in a vacuum is given as a function of the vacuum permeability μ 0 as well as the vacuum permittivity ε 0 .
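The identities recalled in this section can be verified numerically against CODATA-level values (hardcoded below; any residual differences only reflect the rounding of the inputs):

```python
import math

# CODATA-style inputs (SI), used only to check the identities numerically.
h     = 6.62607015e-34       # Planck constant
hbar  = h / (2 * math.pi)
c     = 2.99792458e8         # speed of light in vacuum
mu0   = 1.25663706212e-6     # vacuum permeability
eps0  = 8.8541878128e-12     # vacuum permittivity
alpha = 7.2973525693e-3      # fine-structure constant
m_e   = 9.1093837015e-31     # electron mass
r_e   = 2.8179403262e-15     # classical electron radius
q_e   = 1.602176634e-19      # elementary charge
R_inf = 1.0973731568160e7    # Rydberg constant

# Rydberg constant from alpha and m_e: R_inf = alpha^2 m_e c / (2h)
print(alpha**2 * m_e * c / (2 * h), R_inf)

# Electron charge from m_e, r_e and mu0: q_e = sqrt(4 pi m_e r_e / mu0)
print(math.sqrt(4 * math.pi * m_e * r_e / mu0), q_e)

# Wave-particle link behind the classical electron radius:
# r_e = alpha hbar / (m_e c), i.e. m_e c^2 = alpha hbar c / r_e
print(alpha * hbar / (m_e * c), r_e)

# Speed of light from the vacuum constants: c = 1 / sqrt(mu0 eps0)
print(1 / math.sqrt(mu0 * eps0), c)
```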

Different Equations for Calculating the Universal Gravitational Constant G
To show the interdependence of G with the other parameters of the universe, we will enumerate equations using different "constants". Some of these equations offer the advantage of overcoming the difficulties inherent in measuring the value of G and present a roundabout way of obtaining a precise value of it. The use of different parameters could highlight that the universal gravitational "constant" G may not be that constant. Equation (31) defines G as a function of the speed of light c, the fine-structure constant α, the parameters of the electron (mass m e and its classical radius r e), as well as the constant β defined in Equation (11). This last constant comes from a cosmological model that shows the existence of a material and a luminous universe. These two spherical universes evolve one in the other in a β relationship. Without this constant, the equations which make it possible to define G only from parameters with good precision (8 to 11 digits after the decimal point) would not be possible. Most of the equations that follow will use some of the constants used in Equation (31). We will, therefore, focus on the other constants that appear in the different equations.
From Equations (31) as well as from Equation (48), G can be defined as a function of Planck constant h and the Rydberg constant R ∞ .
Equations (31) and (50) make it possible to define G as a function of h:

G = 2π r e^2 c^3 α^19/(h β) (53)

Using Equations (31) and (48) in Equation (53), G is defined as a function of Planck constant h and Rydberg constant R ∞.
Using Equations (2) and (16) in Equation (35), G is defined by another equation as a function of Rydberg constant R ∞ .
Using Equations (31), (46), and (51), G is obtained as a function of the vacuum permeability μ 0 as well as the electron charge q e .
Using Equations (31) and (51) in Equation (58), G is defined as a function of the vacuum permittivity ε 0 and the same parameters as Equation (58). Using Equations (20) and (22), we obtain an equation of G defined according to the charge of the electron q e to the numerator as well as according to the permittivity of the vacuum ε 0 and the mass m e of the electron. This equation can also be taken from Equation (28). Using Equation (51) in Equation (60), we obtain a similar equation, but as a function of the speed c and the vacuum permeability μ 0 .
We emphasize that Equations (31) and (52) to (61) make it possible to obtain a precise value of the gravitational constant G while overcoming the difficulties inherent in measuring this physical parameter.
Using Equations (46) and (60), we get G as a function of Planck charge q p . Using Equation (51) in Equation (62), we obtain a similar equation, but as a function of the speed of light c and the permeability of vacuum μ 0 . Using Equations (46) and (48) in Equation (58), we obtain G as a function of the vacuum permeability μ 0 and Planck charge q p . Using Equation (51) in Equation (64), we obtain G as a function of the vacuum permeability μ 0 , the vacuum permittivity ε 0 , and Planck charge q p . Using Equation (51) in Equation (65), we obtain an equation as a function of μ 0 , ε 0 , q p , and Rydberg constant R ∞ .
Using Equation (51) in Equation (66), we obtain the following equation as a function of μ 0 , ε 0 , q p , and R ∞ .
Using Equations (31), (50), (34), and (33), it is possible to determine G as a function of the measurement of the average CMB temperature T. Because of the uncertainty currently hanging over this parameter, the result of this equation is much less precise. However, using the value of T presented in Equation (34), we obtain a result almost as precise as those from Equations (31) and (52) to (59).
The difference in precision then comes from the Boltzmann constant k b which is slightly less precise than most of the other constants used.
Using Equation (50) in Equation (68), we obtain another equation giving G as a function of T, k b as well as Planck constant h.
Using Equation (49) in Equation (68), we obtain an equation giving G as a function of the mean temperature of the cosmic microwave background of the universe T, Boltzmann constant k b , and the electron charge q e .
Raising Equation (34) to the fourth power, isolating the parameters required to match Equation (31), and replacing these parameters with G, we obtain an equation for G as a function of the mean temperature T of the cosmic microwave background and Boltzmann constant k b .
The following equations cannot be used as tools to calculate G with precision. Using Equation (5), G can be defined as a function of the apparent mass m u of the universe and Hubble constant H 0 .
Using Equation (7) in Equation (72), we get G as a function of the apparent radius of curvature R u of the universe as well as m u .
Using Equations (31), (33), and (73), G may be defined as a function of R u .
Using Equations (5) and (26), it is possible to obtain G as a function of Rydberg constant R ∞ and the apparent mass m u of the universe.
Using Equations (5), (7), and (27), we obtain G as a function of Planck length L p , the apparent mass m u of the universe, and Rydberg constant R ∞ .
Using Equations (43), (44), and (45), G can be defined according to the 3 main Planck units, i.e. Planck length L p , Planck mass m p and Planck time t p .
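The relation in question is presumably the standard identity linking G to the three main Planck units; assuming the paper's equation takes this standard form, G = L p ³ / (m p t p ²), it can be verified by computing the Planck units from G, ħ, and c and recovering G from them:

```python
import math

G    = 6.674_30e-11       # m^3 kg^-1 s^-2 (CODATA 2018)
hbar = 1.054_571_817e-34  # reduced Planck constant, J*s
c    = 299_792_458.0      # m/s

L_p = math.sqrt(hbar * G / c**3)  # Planck length, ~1.616e-35 m
m_p = math.sqrt(hbar * c / G)     # Planck mass,   ~2.176e-8 kg
t_p = L_p / c                     # Planck time,   ~5.391e-44 s

# invert the defining relations: G = L_p^3 / (m_p * t_p^2)
G_back = L_p**3 / (m_p * t_p**2)
print(f"G recovered from Planck units: {G_back:.6e}")
```

The round trip is exact algebraically, so the recovered value matches the input G to floating-point precision.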
Using Equations (43), (45), (5), and (7), the value of G can be obtained according to R u , m u , Planck length L p and Planck time t p .
Using Equations (5), (18), and (21), we obtain an equation for G as a function of the mass m ph associated with the photon having the lowest energy level as well as H 0 .
Using Equations (5) and (24), it is possible to obtain G as a function of the mean temperature T of the cosmic microwave background, Boltzmann constant k b , the mass m ph associated with the least energetic photon, the apparent mass m u of the universe, and Hubble constant H 0 .
Using Equations (5), (20), and (25) we obtain an equation of G which involves the Planck temperature T p , the Boltzmann constant k b , the Planck constant h, the apparent mass m u of the universe and the Hubble constant H 0 .
Using Equations (20), (21), and (80), we obtain an equation of G which involves the mass m ph and Planck time t p .
We have presented several different equations (33 in total, including Equation (31)) that can define the gravitational constant G as a function of different parameters of the universe. Several other equations could probably be found through other combinations of equations.
The idea conveyed in CODATA 2014 [6] was that G "is independent from other constants". We aimed to show that this may not be the case. Such links are not yet commonly part of standard physics. With all the equations for G presented in this article, we suggest that some of the most important parameters of the universe are intimately linked, as much in the infinitely large as in the infinitely small, and that the gravitational constant G is probably part of them.

Why Is G Not a Real Constant?
We want to explain why G cannot be constant over time. At the same time, we will show why, apart from the fine-structure constant α, most of the parameters of the universe are probably not constant.
Equation (75) shows a direct link between G and H 0 . Since 1/H 0 gives the apparent age of the universe, H 0 is not constant, even if it may look constant to observers over a short period. This shows that G is probably also evolving over time.
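As a quick illustration of why 1/H 0 behaves as an apparent age, converting an illustrative value of H 0 = 70 km·s⁻¹·Mpc⁻¹ to SI units and inverting it gives:

```python
# 1/H_0 as an apparent age of the universe; H_0 = 70 km/s/Mpc is an
# illustrative value, not one taken from the article.
H0_kms_Mpc = 70.0
Mpc_m  = 3.085_677_581e22             # metres per megaparsec
H0_SI  = H0_kms_Mpc * 1.0e3 / Mpc_m   # Hubble parameter in s^-1
age_s  = 1.0 / H0_SI                  # apparent age in seconds
age_Gy = age_s / 3.155_76e16          # seconds per billion Julian years
print(f"1/H0 = {age_Gy:.2f} billion years")  # ~13.97 for H0 = 70
```

Since 1/H 0 grows as the universe ages, any quantity tied to H 0 by a fixed equation inherits that time dependence, which is the argument made above for G.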
The constant α is one of the rare parameters in physics to be truly constant. This is because it is a ratio of two quantities having the same units. For example, α can be defined as the ratio between the classical electron radius r e and Compton radius r c of the electron. It can also be defined as a function of the electron charge q e and Planck charge q p .
Several other ratios involving dimensioned parameters could define α. Take, for example, the ratio of r e to r c . If any phenomenon influences the numerator by an infinitely small factor δ, the same phenomenon will also influence the denominator in the same proportion by a factor δ.
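Both dimensionless definitions of α mentioned above can be checked numerically (here r c is taken to be the reduced Compton radius ħ/(m e c), an assumption about the paper's notation, since that is the choice for which α = r e /r c holds):

```python
import math

hbar = 1.054_571_817e-34   # reduced Planck constant, J*s
c    = 299_792_458.0       # speed of light, m/s
m_e  = 9.109_383_7015e-31  # electron mass, kg
r_e  = 2.817_940_3262e-15  # classical electron radius, m
q_e  = 1.602_176_634e-19   # electron charge, C (exact)
eps0 = 8.854_187_8128e-12  # vacuum permittivity, F/m

r_c = hbar / (m_e * c)                           # reduced Compton radius
q_p = math.sqrt(4 * math.pi * eps0 * hbar * c)   # Planck charge

ratio_radii   = r_e / r_c          # alpha as a ratio of lengths
ratio_charges = (q_e / q_p)**2     # alpha as a ratio of charges, squared
print(f"r_e / r_c   = {ratio_radii:.9e}")   # ~7.297e-3
print(f"(q_e/q_p)^2 = {ratio_charges:.9e}") # ~7.297e-3
```

Both expressions reproduce the CODATA value α ≈ 7.2973525693 × 10⁻³, illustrating that α is a pure number independent of the unit system.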
But what happens when we consider c constant without it actually being constant? Let us examine another equation that defines α as a function of c.
If, for metrological reasons, the value of c is kept fixed at its current value, the value of α will then seem to be divided by the factor (1 + δ). However, assuming that the speed of light is indeed increasing over time, (1 + δ) > 1. Therefore, the value of α would seem to decrease over time. This is the conclusion reached by Wilczynska's research team [48]. This team made 4 direct measurements of what α should have been near the universe's creation, about 13 billion years ago. According to this team, Δα/α ≈ (−2.18 ± 7.27) × 10⁻⁵. According to Einstein's 1917 cosmology, the universe can be associated with a 4-D sphere [49]. Contrary to the preconceived idea of a static universe that he had at the time, Hubble showed in 1929 that the universe is also expanding [10]. The apparent radius of curvature R u continues to grow, and we move away from the center of mass of the universe [3]. By moving away from it, the density of the universe decreases over time, and the refractive index of the vacuum decreases, which allows a slight increase in the speed of light in a vacuum [3]. Currently, this speed is fixed by definition at the value c, but it increases very slowly over time. Its variation over a year is so small that it is currently not measurable by modern instruments, even over several decades.
In 1905, in his theory of relativity, Einstein postulated that the speed of light c in a vacuum is constant [50]. This assumption was based on intuition, not on facts. At the time, c was known to 5 or 6 significant digits [51], and the measured value varied from year to year.
Nowadays, the main instrument for measuring the speed of light is the laser.
The ancestor of the laser, the maser (a similar device operating in the microwave range), was invented in December 1953 by Charles H. Townes, who demonstrated its operation in 1954 [52]. A few years later, in 1958, the concept of the first laser was proposed [52]. The laser was first used as an instrument to measure the speed of light in 1972 by the Evenson team [53], which measured c ≈ 299792456.2 ± 1.1 m·s −1 . In 1973, the International Bureau of Weights and Measures recommended the use of c ≈ 299792458 ± 1 m·s −1 as the value for the speed of light in a vacuum. Then, in 1975, a resolution was adopted so that the speed of light would be considered exact, with c = 299792458 m·s −1 [54]. This crucial step defined the speed of light as an immutable standard. The researchers' task was then reduced to using this standard to define as many constants as possible. In 1983, the meter was redefined by the International Bureau of Weights and Measures as the distance traveled by light in 1/299792458 second, which implies that the value of c is now considered exact [54].
In metrology, the constancy of the speed of light c is now an essential calibration tool, because this speed is used as a reference for several other parameters of the universe. Deliberately keeping c "constant", even if it increases over time, gives the impression that most parameters in physics are constant. This is especially true for parameters that have units of measurement. One has to realize that this whole concept is a device put in place to facilitate metrology.
Metrology with a speed of light c that increases over time is not practical and is rather undesirable. But when the desire is to understand the evolution of the universe, physicists should not allow themselves to impose the constancy of the speed of light as is done in metrology; otherwise, they risk creating apparently inexplicable phenomena. In the days of Ptolemy's anthropocentric beliefs, mankind believed that the Earth was the immobile center of the universe and that all stars revolved around it. It was impossible to create universal mathematical equations to correctly describe the movement of all stars in the universe. However, the understanding of our universe was greatly simplified following Copernicus' discovery of heliocentrism from 1511 to 1513, when he wrote "De Hypothesibus Motuum Coelestium a se Constitutis Commentariolus" (known as "Commentariolus") [55]. Similarly, the imposition of the constancy of the speed of light in a vacuum could deprive humanity of a cosmological model that is easier to understand.
Many would argue that the speed of light c, the classical electron radius r e , Rydberg constant R ∞ , and several other parameters are "constant". However, knowing that the universe is expanding [10], no one will doubt that the temperature of the CMB decreases over time. In an expanding universe, the energy density necessarily decreases, which requires a reduction of the temperature T over time.
At the time of its formation, due to the ionization of gases, the average temperature of the universe did not follow the curve of a black body [56]. However, because the universe, as a whole, is homogeneous and isotropic, and because its global cooling has brought its average temperature to around 2.736 (17) K [17], the universe can now be seen as a black body [57] which radiates its energy and cools. Equations (68) to (71) and (81) to (82) clearly show a direct link between G and the CMB temperature, which necessarily implies that G is not constant over time.
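Treating the CMB as a black body at the quoted mean temperature, its radiation energy density follows from the standard Planck/Stefan-Boltzmann law u = (π²/15)(k b T)⁴/(ħc)³; this is a generic textbook check, not one of the paper's numbered equations:

```python
import math

k_b  = 1.380_649e-23      # Boltzmann constant, J/K (exact)
hbar = 1.054_571_817e-34  # reduced Planck constant, J*s
c    = 299_792_458.0      # speed of light, m/s
T    = 2.736              # mean CMB temperature quoted above, K

# blackbody radiation energy density u = (pi^2/15) (k_b T)^4 / (hbar c)^3
u = (math.pi**2 / 15) * (k_b * T)**4 / (hbar * c)**3
print(f"CMB energy density: {u:.3e} J/m^3")  # ~4.2e-14 J/m^3
```

The T⁴ dependence is what makes the cooling of the CMB a direct tracer of the decreasing energy density of the expanding universe.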
Indirectly, this also shows that several other parameters of the universe are not constant since G was defined using these same parameters in the other equations. For G to vary over time, some of these other parameters must also evolve at the same time.
For now, the total energy E u contained in the universe is a function of c 2 (E u = m u c 2 ).
However, as stated at the beginning of this article, while the universe is expanding, light slowly accelerates. As there must be conservation of the energy E u , the apparent mass m u of the universe slowly decreases. Because of Equation (18), we know that there is a maximum number of photons of wavelength 2πR u in the universe. Of course, the universe is full of photons with other wavelengths, but if they all had the same wavelength 2πR u , there would be N ≈ 6.3 × 10 121 photons, a number that can be evaluated with Equations (7), (18), and (21).
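The order of magnitude N ≈ 6.3 × 10¹²¹ can be checked under two assumptions about equations not reproduced in this excerpt: that Equation (5) has the Mach-type form G = c³/(m u H 0 ) and that Equation (7) gives R u = c/H 0 . Dividing E u = m u c² by the energy ħc/R u of one photon of wavelength 2πR u then gives N = c⁵/(ħ G H 0 ²):

```python
# N = E_u / E_photon = (m_u c^2) / (hbar c / R_u) = c^5 / (hbar G H_0^2),
# assuming G = c^3/(m_u H_0) and R_u = c/H_0 (forms assumed here).
H0   = 72.09e3 / 3.085_677_581e22  # ~72 km/s/Mpc in s^-1 (illustrative)
hbar = 1.054_571_817e-34           # J*s
c    = 299_792_458.0               # m/s
G    = 6.674_30e-11                # m^3 kg^-1 s^-2

N = c**5 / (hbar * G * H0**2)
print(f"N ~ {N:.2e} photons")  # ~6.3e121
```

The result depends on H 0 squared, so the quoted 6.3 × 10¹²¹ is only as precise as the Hubble parameter used.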

Conclusions
This article had three goals. The first was to show that the theoretical value of G from Equation (31) is the right one, even though, in recent years, CODATA has shown a slightly higher result. The second goal was to obtain other equations that give G as a function of the different parameters of the universe and thus demonstrate that G is intimately linked to them. Third, using these equations, we wanted to show that G is not constant in time and space.
To begin with, a graph was constructed by counting, for each potential value of G, the number of measurement ranges (from 32 measurements collected since 1930) that intercept that value. The use of cubic splines made it possible to highlight two groups of data. By minimizing the least-squares differences between the cubic spline curve and the measured values of the graph, it is possible to precisely establish the center of the two measurement groups. A very good estimate of the theoretical value of G is obtained by averaging the values of G corresponding to the two peaks, which corresponds approximately to the proposed theoretical value of Equation (31). The idea of using cubic splines to bring out two groups of measurements more precisely, instead of using statistical means, could be reused in the same way with Hubble constant H 0 , because there also seem to be two groups of data [58], and the theoretical value published in the Journal of Modern Physics [3] (see Equation (33)) lies somewhere between these two groups.

To show the interdependence of G with the other parameters of the universe, we have listed several equations that make it possible to calculate G. Certain equations make it possible to calculate G in a purely theoretical way using parameters considered precise. These equations perhaps represent tools that will allow us to track down the different sources of errors in the measurements of G. The different parameters used in these equations also allow us to question the constancy of G, since the average temperature T of the CMB and H 0 are not constant because of the expanding universe.
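The counting step behind the two-group analysis can be sketched as follows, on synthetic data invented purely for illustration (the paper's cubic-spline smoothing and least-squares fitting are omitted; the two peaks are read directly off the raw counts):

```python
import numpy as np

rng = np.random.default_rng(0)
# two synthetic groups of "measurements" of G, centred (by construction)
# on 6.672e-11 and 6.676e-11, each with a stated uncertainty
g1 = rng.normal(6.672e-11, 4e-15, 16)
g2 = rng.normal(6.676e-11, 4e-15, 16)
values = np.concatenate([g1, g2])
uncert = np.full_like(values, 6e-15)

# for each candidate value of G on a fine grid, count how many
# measurement ranges [G_i - u_i, G_i + u_i] contain it
grid = np.linspace(6.665e-11, 6.683e-11, 2001)
counts = ((grid[None, :] >= (values - uncert)[:, None]) &
          (grid[None, :] <= (values + uncert)[:, None])).sum(axis=0)

# locate the peak of the counts on each half of the grid
mid = len(grid) // 2
peak1 = grid[:mid][np.argmax(counts[:mid])]
peak2 = grid[mid:][np.argmax(counts[mid:])]
print(f"group centres ~ {peak1:.4e}, {peak2:.4e}")
print(f"average       ~ {(peak1 + peak2) / 2:.4e}")
```

With well-separated groups, the average of the two peaks recovers the midpoint between the group centres, which is the role the spline-based peak average plays in the paper's analysis.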
This article highlights the fact that there are two ways to use the speed of light c. In metrology, this parameter is deliberately imposed as constant so that it can serve as a reproducible standard. But this point of view does not seem useful when the goal is to understand the evolution of the universe. Imposing the constancy of c could lead some to conclude that there are inconsistencies in the evolution of certain parameters. On the scale of the universe, a few tens or hundreds of years of metrology represent no more than a snapshot of the universe's parameters, taken during a scene that unfolds over several billion years.