The practical value of high-precision models of the physical phenomena and technological processes under study is a decisive factor in science and technology. Numerous methods and criteria for optimizing models have been proposed. However, the classification of measurement uncertainties according to the number of variables taken into account and their qualitative choice still receives insufficient attention. Our goal is to develop a new criterion suitable for any group of experimental data obtained by applying various measurement methods. Using the "information-theoretic method", we propose two procedures for analyzing experimental results with a quantitative indicator that yields the relative uncertainty of the measurement model, which, in turn, determines the legitimacy of the declared value of a physical constant. The presented procedure is used to analyze the results of measurements of the Boltzmann constant, Planck constant, Hubble constant, and gravitational constant.

Any modern scientific research rests on physical laws that contain numerical constants with specific and universally used symbols. These quantities are called physical constants. First, their exact values are necessary to make reliable, verifiable forecasts about the structure of the world around us. Second, checking their numerical values through complex experiments allows us to assess the consistency and acceptability of a particular physical theory.

When scientists take measurements or calculate some physical constant based on their data, they usually indicate the range of values within which this “true value” is located with a given probability. The result is not only a number, but also a number with a measurement uncertainty [

As an example, consider existing statistical methods used to estimate the Hubble constant. As shown in [ ], estimates of H_{0} are statistically inconsistent and systematically too low. In particular, one of the methods used to calculate the Hubble constant is based on the study of type Ia supernovae using a distance ladder. Supernovae are known as "standard candles" because they produce constant peak brightness values. Because the magnitude of the observed brightness depends on the distance from the supernova, this can be used to measure its distance from the Earth. The process of measuring distance is very complicated. It is based on the calibration of the distance ladder, which leads to significant uncertainty, both systematic and random. The systematic error is the larger of the two, and depending on whether the error is positive or negative, the Hubble constant is underestimated or overestimated, respectively. The random part of the error leads to the fact that some measured distances are too large, while others are too small. Contrary to what one might think, these errors on average do not cancel but lead to a systematically too low a value of H_{0} [

More specifically, various statistical methods are used to estimate the Hubble constant, including weighted regression analysis and Bayesian analysis, which allow one to include other available data sources. These methods extend the usual least-squares model, in which velocity is regressed on distance and the estimate is found by minimizing the standard error. Common to the three methods (least-squares, weighted regression, and Bayesian analysis) is that the estimation error does not disappear and does not even decrease as more velocity/distance measurements become available [

Another example close to the application of statistical methods for verifying the magnitude of a physical constant is the realization of the International System of Units (SI). One of the outstanding scientific achievements of the 21^{st} century is the approval of a new version of the SI [

In the CODATA (Committee on Data for Science and Technology) procedure, the selected experimental results of measurements of the physical constants are combined, with their individual relative and standard uncertainties, in the least-squares adjustment (LSA) procedure. However, when the data and the model are incompatible given the indicated uncertainties, this procedure does not give adequate results [

The main purpose of LSA is to fit models to measurements that are accompanied by quoted uncertainties. The weights are chosen depending on these uncertainties. The advantage of LSA is that its estimate corresponds to the maximum-likelihood decision. This provides the usual maximum-likelihood guarantees (consistency, asymptotic normality), which in turn allow one to build hypothesis tests and obtain confidence intervals for the estimated regression coefficients.

A distinctive feature of the LSA method is that it is aimed at checking the consistency of the results, and for this, the initial experimental values are “adjusted,” that is, changed to optimize the final dispersion of the set. In the case of conflicting results, the associated uncertainties are increased in the CODATA analysis [

In this case, we are dealing with a biased statistical expert, motivated by personal convictions or preferences [

However, one should not underestimate the significant efforts of scientists to avoid the above effects. The fact is that the determination of each physical constant using a special CODATA adjustment usually includes the results of measurements of various independent research groups working on the problem of measuring the physical constant for decades. The goal of the coordinated efforts of scientists was to guarantee a situation where systematic effects were not missed.

To summarize the above, it is necessary to pay attention to one important feature inherent in all methods of analysis of experimental data and uncertainties in the measurement of physical constants. Systematic uncertainties arising from the idealization of modeling and from the philosophical and scientific preferences of researchers are completely ignored. In other words, the choice of the model of the measurement process is subjective, depending on the consciousness of the researcher and his preferences in choosing the quantitative and qualitative set of variables taken into account. This fact complicates the already complex process of checking the model by creating an uncertain target: a situation in which neither the simulated nor the observed behavior of the system is precisely known.

Therefore, when we talk about the level of accuracy of measuring physical constants, we must understand that modern measuring models, test benches, and calculation algorithms have become a very powerful and accurate tool since 2010 [

The fact is that some uncertainties in the experimental results are due to the philosophy of researchers. They either report unjustifiably large errors so that they are not blamed for a wrong approach, or underestimate errors, unconsciously wanting to present the best result (the author is far from suspecting research teams of a scientific adjustment of facts). That is life. Therefore, a method is needed that excludes the subjective component of the measurement process.

We show that with the help of concepts and the mathematical apparatus of information theory, it is possible, theoretically and without any additional assumptions and simplifications, to calculate the amount of information contained in the measurement model of the physical constant. This circumstance allows us to establish the value of relative uncertainty, which, in turn, determines the legitimacy of the declared value of the physical constant. We also present specific examples of the application of the described information approach. The presented procedure for calculating relative uncertainty is used to analyze the results of measurements of the Boltzmann constant, Planck constant, Hubble constant, and gravitational constant.

Analysis of publications and all necessary calculations were carried out in the office of Mechanical & Refrigeration Consultation expert (Beer-Sheba, Israel).

It may seem strange that before starting the experiment you need to list all the base quantities used and the total number of variables considered in the model. Moreover, this important point is completely ignored in the canonical CODATA method for calculating the target value of the physical constant and its relative uncertainty. The need for this requirement is explained as follows.

The fact is that any measurement of a variable by itself implies the presence of an already formulated model. As mentioned in [ ], the formulation of a model begins with the choice of a class of phenomena (CoP), that is, the set of base quantities involved (for example, a purely mechanical process corresponds to CoP_{SI} ≡ LMT). Measuring the Boltzmann constant is usually realized by CoP_{SI} ≡ LMTθF or CoP_{SI} ≡ LMTθI. It should be noted that SI is a product of human thinking and does not exist in nature. At the same time, SI is used in science and technology in accordance with the developed consensus [

Refining the process of formulating the model from the perspective of choosing a specific CoP may offer a new interpretation of the results of measurements of physical constants, which will be discussed later in the article.

In addition, it should be noted that a researcher, by choosing a specific CoP, in practice discards possible hidden relationships between the variables considered in the model and the ignored variables. This, of course, can affect the accuracy of the proposed model and even increase its uncertainty. The explanation is that, although a significant part of the scientific community believes that model error can be reduced by using a larger number of variables thanks to improved algorithms and supercomputers, each variable introduces its own uncertainty into the total integral error that affects the desired result. However, as the dimension of the model increases, only the reliability of the model results improves [

To assess the magnitude of the threshold mismatch [

In science and technology, a wide variety of unit systems can be used that are most suitable for a particular application, for example, Imperial and US customary units or Natural units [

It can be proved using the concepts and mathematical apparatus of the theory of similarity [

μ_{SI} = 38265. (1)

All μ_{SI} dimensionless criteria cannot be simultaneously taken into account in the model. Typically, a researcher uses 10, 20, or even 130 variables with CoP_{SI} ≡ LMTθF [

For further reasoning, we indicate that information entropy [

We will use an analogy with a theory of signals transmission. Imagine that the observed measurement process has a huge number of properties (quantities, criteria) that characterize its content and interaction with the environment. Then, we assume that each dimensionless complex represents the original readout (reading [

Then, let there be a situation where all µ_{SI} of SI values can be taken into account, provided that the choice of these quantities is a priori considered equally probable. In this case, we are guided by the idea of Brillouin connecting amount of information obtained in the simulation (observation) without making any disturbance in the measurement process, and the uncertainty inherent in the selected model [

Comparing the number of variables in SI with the selected number of variables in a particular model, it turns out that you can calculate the amount of information ΔA_{e} contained in it [

ΔA_{e} = k·ln[μ_{SI}/(z″ − β″)] (2)

where ΔA_{e} is expressed in units of entropy, μ_{SI} includes the dimensionless criteria/variables that are considered equally probable when selected by the researcher in the model, z″ and β″ are the numbers of all quantities and of base quantities registered in the chosen model, respectively, γ = z″ − β″, and k is the Boltzmann constant.

Obviously, in practice, researchers can use dimensionless criteria that are not included in μ_{SI}. It is easy to show that the value of μ_{2} (the number of dimensionless criteria and numbers in an extended system of units, numbered "2") does not dramatically influence the final result. Let us suppose that μ_{2} = 2μ_{SI}. Taking into account that ln μ_{SI} ≫ ln(z″ − β″)_{SI}, ln μ_{2} ≫ ln(z″ − β″)_{2}, and ln μ_{SI} ≫ ln 2, we obtain the following relation

ΔA_{eSI}/ΔA_{e2} = [ln μ_{SI} − ln(z″ − β″)_{SI}]/[ln μ_{2} − ln(z″ − β″)_{2}] = ln μ_{SI}/[ln 2 + ln μ_{SI}] ≈ 1. (3)
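The insensitivity claimed by Equation (3) can be checked numerically; a minimal sketch, in which the criterion count 169 is an illustrative choice (it happens to be the LMTθF optimum discussed later), not a value fixed by the derivation:

```python
import math

MU_SI = 38265          # total number of dimensionless criteria in SI, Eq. (1)
Z2_MINUS_B2 = 169      # illustrative number of criteria in a model

# Eq. (2) in units of the Boltzmann constant k: ln[mu / (z'' - b'')]
a_si = math.log(MU_SI / Z2_MINUS_B2)
a_2 = math.log(2 * MU_SI / Z2_MINUS_B2)   # extended pool mu_2 = 2*mu_SI

# Exact ratio and the approximation of Eq. (3):
ratio_exact = a_si / a_2
ratio_approx = math.log(MU_SI) / (math.log(2) + math.log(MU_SI))
print(round(ratio_exact, 3), round(ratio_approx, 3))  # → 0.887 0.938
```

Both ratios stay of order unity, which is the point of Equation (3): doubling the pool of admissible criteria barely changes the amount of information in the model.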

The physical content of Equation (2) is very important. For example, two research groups analyze the process of measuring a physical constant. The results are different from each other. Who presented the most respectable option? Obviously, the choice of the class of the phenomenon and the number of variables considered will affect the information content of the model and will cause a different amount of information contained in it [

ΔA_{b1γ} = (ln(γ_{CoP}/γ_{1}))/ln 2 (bits), (4)

ΔA_{b2γ} = (ln(γ_{CoP}/γ_{2}))/ln 2 (bits), (5)

where ΔA_{b1γ} and ΔA_{b2γ} are the amounts of information of the models formulated by the first and second research teams, respectively, compared with the model that takes into account the optimal number of dimensionless criteria γ_{CoP} inherent in a particular CoP; γ_{1} and γ_{2} are the numbers of dimensionless criteria in the first and second models, respectively.

Let us suppose that γ_{1} < γ_{CoP} < γ_{2} and |γ_{1} − γ_{CoP}| < |γ_{CoP} − γ_{2}|. By analyzing Equations (4) and (5), some readers may suggest that it is preferable to use a model with a large number of variables when modeling a physical process. However, this is a wrong conclusion, and here is why. By comparing ΔA_{b1γ} and ΔA_{b2γ} in absolute terms, the researcher can "instantly" determine which one is smaller. This means that the number of dimensionless criteria considered is closer to the optimal one, γ_{CoP}, corresponding to the minimum comparative uncertainty [
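The comparison just described can be sketched in a few lines; γ_{CoP} = 169 and the two team counts (40 and 900) are hypothetical values chosen only to satisfy the assumed ordering γ_{1} < γ_{CoP} < γ_{2}:

```python
import math

def info_bits(gamma_cop: float, gamma_model: float) -> float:
    """Amount of information of a model relative to the CoP optimum,
    Equations (4)-(5), in bits."""
    return math.log(gamma_cop / gamma_model) / math.log(2)

GAMMA_COP = 169                    # optimum for the chosen CoP (illustrative)
a_b1 = info_bits(GAMMA_COP, 40)    # team 1: fewer criteria than optimal
a_b2 = info_bits(GAMMA_COP, 900)   # team 2: more criteria (negative value)

# The model with the smaller |dA| is closer to the optimum gamma_CoP:
preferred = 1 if abs(a_b1) < abs(a_b2) else 2
print(preferred)  # → 1
```

Note that a model with too many criteria simply yields a negative ΔA; only the absolute value matters for the comparison.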

It is also advisable to emphasize the importance of introducing the concept of “information content” of the model, ΔA, from the point of view of choosing a specific model of the measurement process.

First, information content can provide a natural explanation for the preferred choice of a particular measurement method. Until now, it was almost impossible to recommend that scientists focus their efforts on a specific method. However, with the introduction of the concept of information content, Equations (3)-(5), it is quite possible to state which of the models describing the same method of measuring the physical constant is most preferable.

Second, the information content may indicate that some models of measuring the physical constant are less preferable. Specific examples and detailed explanations are presented in Section 3.

Third, the information content implies that many models are unsuitable for measuring a particular physical constant. In these models, the number of variables does not correspond to the recommended number inherent in the selected class of phenomena. The accuracy of the model, usually associated with the number of variables considered, is seen in a different light when implementing the information approach (Section 3).

Fourth, an accurate description of the experimental setup in terms of an information approach requires some knowledge of the future. We know very well that the experiment itself never allows the experimenter to look into the future, but if we try to interpret what is happening, some expectation of the future experiment seems necessary. We suspect that this approach may allow us to reflect a state where some hidden variables that can influence the result are not considered by the researcher’s conscious decision (Section 3).

The amount of information contained in the model (Equation (2)) is only a sufficient condition for choosing the preferred option. In addition to this, we can formulate a necessary condition. Using Equation (2), we can get an expression for calculating the absolute uncertainty of the model Δ_{pmm} [

Δ_{pmm}/S = (z′ − β′)/μ_{SI} + (z″ − β″)/(z′ − β′), (6)

where S is the interval in which the dimensionless quantity u is located, z′ and β′ are the total number of dimensional quantities and the number of base quantities in the CoP, respectively, and ε = Δ_{pmm}/S is the comparative uncertainty [

Four features of Equation (6), called the µ-rule, should be noted. First of all, this equation is applicable both to models with dimensional variables and with dimensionless variables, due to the following relations:

Δu/S = (ΔU/a)/(S^{*}/a) = ΔU/S^{*}, r/R = (ΔU/U)/(Δu/u) = (ΔU/U)/[(ΔU/a)/(U/a)] = 1 (7)

where S and Δu are, respectively, the range of variation and the total absolute uncertainty of the dimensionless quantity u; S^{*} and ΔU are, respectively, the range of variation and the total absolute uncertainty of the dimensional quantity U; a is the dimensional scale parameter with the same dimension as U and S^{*}; r is the relative uncertainty of the dimensional quantity U; and R is the relative uncertainty of the dimensionless quantity u.

Secondly, Equation (6) is a kind of correspondence principle for the model development process and can be related to the Heisenberg principle. When measuring a physical constant, the model must satisfy Equation (6). In other words, changing the level of detail of the description of the test bench, by choosing the class of phenomena (z′ − β′) and the specific number of variables to be taken into account (z″ − β″), changes the smallest value of the comparative uncertainty Δ_{pmm}/S of the main studied function (main variable). Thus, the correspondence principle uniquely determines the achievable accuracy limit (for a given class of phenomena), while simultaneously revealing the pair of quantities observed by a conscious researcher: the absolute uncertainty Δ_{pmm} in the measurement of the studied quantity and the interval of its change, S.

Third, Equation (6) has the property of equivalence. This means that it is true for other measurement systems. Models formulated in other systems of units of measure, for example, in yards and pounds or centimeter-gram-second (CGS), will also have to comply with Equation (6) to maintain the basic relationships between physical variables. Equivalence ensures that physical models of reality remain consistent, regardless of units.

Fourth, the development of measuring equipment, the improvement in the accuracy of measuring instruments, and the refinement of existing and newly created measurement methods together lead to an increase in knowledge about the object under study; therefore, the value of the achievable relative uncertainty decreases. However, this process is not infinite and is limited by Equation (6). The reader should keep in mind that this limit reflects not a deficiency of measuring equipment or engineering devices, but the way the human brain works. Predicting the behavior of any physical process, physicists actually predict the tangible output of instrumentation. It is true that, according to the µ-rule, observation is not a measurement but a process that creates a unique physical world in relation to each specific observer.

In addition, using Equation (6), one can find the necessary conditions for approaching the smallest relative uncertainty of each CoP, r_{CoP}, the fulfillment of which can confirm the legitimacy of the declared measured value of the physical constant. For this, it is necessary to take the derivative of Δ_{pmm}/S with respect to (z′ − β′) and equate it to zero:

(z″ − β″) = (z′ − β′)^{2}/μ_{SI} (8)

For example, for the thermal-electromechanical process (CoP_{SI} ≡ LMTθI), which is used in measuring the Boltzmann constant, the following statement must be considered. The dimension of any derived quantity q can be expressed as a unique combination of the dimensions of the base quantities raised to various powers [

q ∋ L^{l}·M^{m}·T^{t}·I^{i}·θ^{θ}·J^{j}·F^{f}, (9)

−3 ≤ l ≤ +3, −1 ≤ m ≤ +1, −4 ≤ t ≤ +4, −2 ≤ i ≤ +2, −4 ≤ θ ≤ +4, −1 ≤ j ≤ +1, −1 ≤ f ≤ +1. (10)

(z′ − β′)_{LMTθI} = (e_{l}·e_{m}·e_{t}·e_{θ}·e_{i} − 1)/2 − 5 = (7 × 3 × 9 × 9 × 5 − 1)/2 − 5 = 4247, (11)

γ_{LMTθI} = (z″ − β″)_{LMTθI} = (z′ − β′)_{LMTθI}^{2}/μ_{SI} = 4247^{2}/38265 ≈ 471, (12)

where l, m, ⋯, f are the exponents of the base quantities, taking only integer values within the intervals of Equation (10); γ_{LMTθI} is the optimal number of criteria in a model inherent in CoP_{SI} ≡ LMTθI; "−1" corresponds to the case where the exponents of all the base quantities are zero in Equation (9); 5 corresponds to the five base quantities L, M, T, θ, and I; and division by 2 indicates that there are direct and inverse quantities, e.g., L^{1} is the length and L^{−1} is the run length. The object can be judged based on knowledge of only one of its symmetrical parts, while the parts that structurally duplicate it may be regarded as informationally empty. Therefore, the number of dimension options is reduced by a factor of two.

Then, one can calculate the minimum achievable comparative uncertainty ε_{LMTθI}:

ε_{LMTθI} = (Δu/S)_{LMTθI} = 4247/38265 + 471/4247 = 0.222. (13)
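The counting in Equations (11)-(13) is easy to automate; a sketch, assuming the exponent-range sizes implied by Equation (10) (7, 3, 9, 5, 9, 3, and 3 admissible integer values for l, m, t, i, θ, j, and f, respectively):

```python
# Number of integer values each exponent can take, per Eq. (10)
E = {"L": 7, "M": 3, "T": 9, "I": 5, "theta": 9, "J": 3, "F": 3}

def z_minus_beta(cop):
    """(z' - beta') for a class of phenomena: half the nonzero dimension
    combinations, minus the base quantities themselves, as in Eq. (11)."""
    prod = 1
    for base in cop:
        prod *= E[base]
    return (prod - 1) // 2 - len(cop)

MU_SI = z_minus_beta(list(E))  # all seven base quantities: 38265, Eq. (1)

def optimum(cop):
    """Optimal criteria count (Eq. (8)) and minimal comparative
    uncertainty (Eq. (6)) for the class of phenomena `cop`."""
    zb = z_minus_beta(cop)
    gamma = zb * zb / MU_SI
    eps = zb / MU_SI + gamma / zb
    return gamma, eps

gamma, eps = optimum(["L", "M", "T", "theta", "I"])
print(round(gamma), round(eps, 3))  # → 471 0.222, matching Eqs. (12)-(13)
```

Calling `optimum` with other base-quantity subsets reproduces the other rows of the ε_{CoP} table below.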

Using calculations similar to (10)-(13), it is possible to calculate achievable comparative uncertainties ε_{CoP} and the recommended number of quantities γ_{CoP} corresponding to different classes of phenomena (

Thus, the framework of the information-based approach provides the remarkable opportunity to calculate r_{CoP} by two methodologies.

CoP_{SI} | Comparative uncertainty, ε_{CoP} | Optimal number of criteria, γ_{CoP} |
---|---|---|
LMT | 0.0048 | 0.2 < 1 |
LMTF | 0.0145 | ≌2 |
LMTI | 0.0245 | ≌6 |
LMTθ | 0.0442 | ≌19 |
LMTIF | 0.0738 | ≌52 |
LMTθF | 0.1331 | ≌169 |
LMTθI | 0.2220 | ≌471 |

The first, dictated by the μ-rule, is to analyze the data on the value of the achievable relative uncertainty at the moment, considering the latest measurement results. In this case, the possible interval of placement of the physical constant, S, is selected as the difference between its maximum and minimum values measured by various scientific groups over a certain period of time. Thus, using the achievable comparative uncertainty inherent in the selected class of phenomena when measuring the physical constant, we can calculate the recommended minimum relative uncertainty, which is compared with the relative uncertainty of each published study. Moreover, the apparent randomness of the choice of the interval value S, depending on the dataset, does not ultimately affect the final result: an extended range of variation of S only indicates the imperfection of the measuring instruments, which leads to a significant increase in relative uncertainty. This can be illustrated by Equation (14), which indicates that the value of the relative uncertainty is finite and not equal to zero.

r_{1}/r_{2} = (Δ_{1}/A_{1})/(Δ_{2}/A_{2}) = ((ε_{1}·S_{1})/A_{1})/((ε_{2}·S_{2})/A_{2}) = (ε_{1}·S_{1}·A_{2})/(ε_{2}·S_{2}·A_{1}) = [((z′ − β′)_{1}/μ + γ_{CoP}/(z′ − β′)_{1})·S_{1}·A_{2}]/[((z′ − β′)_{2}/μ + γ_{CoP}/(z′ − β′)_{2})·S_{2}·A_{1}] ≡ finite value (14)

where r_{1}, r_{2}, Δ_{1}, Δ_{2}, ε_{1}, ε_{2}, S_{1}, S_{2}, A_{1}, A_{2} are the relative, absolute, and comparative uncertainties, intervals of placement of the physical constant, and magnitudes of physical constants, respectively; index 1 corresponds to a larger interval, index 2 corresponds to a shorter interval, S_{2} < S_{1}.

Assuming that (z′ − β′)_{1} = (z′ − β′)_{2} and γ_{CoP}·μ ≫ 1 (look at

r_{1}/r_{2} ≈ S_{1}·A_{2}/(S_{2}·A_{1}) (15)

Equation (15) indicates that the ratio r_{1}/r_{2} tends neither to infinity nor to zero; its value is finite and reflects the increase in the accuracy of instruments when measuring a physical constant. An important advantage of this approach is its independence from the real instability of the results of experimental measurements.

Although the goal of our work is to obtain the main restriction on the accuracy of measuring physical constants, we may also ask whether it is possible to achieve this limit in a physically correctly formulated model. Because our estimate is given by optimization in comparison with the achieved comparative uncertainty and observation interval, it is clear that in the practical case the limit cannot be reached. This is because there is an inevitable primordial uncertainty of the model, depending on the preferences of the researcher, based on his intuition, knowledge, and experience. The magnitude of this uncertainty indicates how likely it is that the researcher's personal philosophical inclinations will influence the outcome of this process. When a person mentally builds a model, at each stage of its construction there is some probability that the model will not correspond to the phenomenon with a high degree of accuracy.

In what follows, this method is denoted as IARU and is represented by the below-mentioned procedure [

1) From the published data of each experiment, the value α, relative uncertainty r_{α}, and standard uncertainty u_{α} (possible interval of placing) of the physical constant are chosen;

2) The experimental absolute uncertainty Δ_{α} is calculated by multiplying the physical constant value α by its relative uncertainty r_{α} attained during the experiment, Δ_{α} = α·r_{α};

3) The maximum α_{max} and minimum α_{min} values of the measured physical constant are selected from the list of measured values α_{i} of the physical constant mentioned in different studies;

4) As a possible interval for placing the observed constant, S_{α}, the difference between the maximum and minimum values is calculated, S_{α} = α_{max} − α_{min};

5) The selected comparative uncertainty ε_{CoP} (see the table above) is multiplied by the interval S_{α} to obtain the absolute experimental uncertainty value Δ_{IARU} in accordance with the IARU, Δ_{IARU} = ε_{CoP}·S_{α};

6) To calculate the relative uncertainty r_{IARU} in accordance with the IARU, this absolute uncertainty Δ_{IARU} is divided by the arithmetic mean of the selected maximum and minimum values, r_{IARU} = Δ_{IARU}/((α_{max} + α_{min})/2);

7) The relative uncertainty obtained, r_{IARU}, is compared with the experimental relative uncertainties r_{i} achieved in various studies;

8) According to IARU, a comparative experimental uncertainty of each study ε_{IARUi} is calculated by dividing the experimental absolute uncertainty of each study Δ_{α} by the difference between the maximum and minimum values of the measured constant S_{α}, ε_{IARUi} = Δ_{α}/S_{α}. These calculated comparative uncertainties are also compared with the selected comparative uncertainty ε_{CoP} (
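The eight IARU steps above can be sketched as follows; the field names are illustrative, and the two sample values (with ε_{CoP} = 0.1331 for LMTθF) are the Boltzmann-constant measurements used in Section 3.1:

```python
def iaru(results, eps_cop):
    """Sketch of IARU steps 2)-8). `results` is a list of dicts with the
    measured value 'alpha' and achieved relative uncertainty 'r'
    (illustrative field names)."""
    a_max = max(res["alpha"] for res in results)            # step 3
    a_min = min(res["alpha"] for res in results)
    s_alpha = a_max - a_min                                 # step 4
    delta_iaru = eps_cop * s_alpha                          # step 5
    r_iaru = delta_iaru / ((a_max + a_min) / 2)             # step 6
    # step 8: comparative uncertainty of each study, Delta_alpha / S_alpha
    eps_i = [res["alpha"] * res["r"] / s_alpha for res in results]
    return r_iaru, eps_i

# Illustrative data: the two Boltzmann-constant values selected in Section 3.1
results = [{"alpha": 1.3806508e-23, "r": 1.1e-6},
           {"alpha": 1.3806484e-23, "r": 2.0e-6}]
r_iaru, eps_i = iaru(results, eps_cop=0.1331)
print(f"{r_iaru:.1e}")  # → 2.3e-07, as in Eq. (21)
```

Steps 1) and 7) (collecting the published values and comparing r_{IARU} with each r_{i}) are left to the caller, since they involve no computation.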

As follows from the presented description of the step-by-step procedure, the results do not depend on the complex, difficult to fulfill requirements inherent in statistical-expert methods (SEM), such as, for example, the CODATA method [

In the second technique, S is determined by the limits of the used measuring instruments [

Then, the ratio between the absolute uncertainty achieved in the experiment and the standard uncertainty, which acts as a possible interval for the placement of the physical constant, is calculated. Thus, in the framework of the information approach, the comparative uncertainties achieved in the studies are calculated, which, in turn, are compared with the theoretically achievable comparative uncertainty inherent in the chosen class of phenomena. This method is hereinafter referred to as IACU and includes the following steps:

1) From the published data of each experiment, the value α, relative uncertainty r_{α}, and standard uncertainty u_{α} (possible interval of placing) of the physical constant are chosen;

2) The experimental absolute uncertainty Δ_{α} is calculated by multiplying the physical constant value α by its relative uncertainty r_{α} attained during the experiment, Δ_{α} = α·r_{α};

3) The achieved experimental comparative uncertainty of each published study ε_{IACUi} is calculated by dividing the experimental absolute uncertainty Δ_{α} by the standard uncertainty u_{α}, ε_{IACUi} = Δ_{α}/u_{α};

4) The experimental calculated comparative uncertainty of each published study ε_{IACUi} is compared with the selected comparative uncertainty ε_{CoP} inherent in the model (
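A minimal sketch of the IACU steps for a single published study (argument names are illustrative):

```python
def iacu(alpha, r_alpha, u_alpha, eps_cop):
    """IACU steps 2)-4): comparative uncertainty achieved in one study,
    compared with the theoretical eps_CoP of the chosen class of phenomena."""
    delta_alpha = alpha * r_alpha        # step 2: absolute uncertainty
    eps_iacu = delta_alpha / u_alpha     # step 3: achieved comparative unc.
    return eps_iacu, eps_iacu / eps_cop  # step 4: ratio to the CoP value

# Illustrative check against the 2009 acoustic-gas-thermometer row of the
# Boltzmann-constant table in Section 3.1 (eps_CoP = 0.1331 for LMTthetaF):
eps_iacu, ratio = iacu(1.3806495e-23, 2.7e-6, 7.4e-29, 0.1331)
print(round(eps_iacu, 4))  # → 0.5038
```

The ratio returned alongside ε_{IACU} makes the step-4 comparison explicit: here it exceeds 1, consistent with the discrepancy discussed in Section 3.1.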

It should be noted that this methodology also does not require consistent experimental results. From the point of view of its physical content, IACU reflects how thoroughly all possible sources of uncertainty for a certain class of phenomena were identified and taken into account in calculations using different methods of measuring a specific physical constant (Section 3.2).

In the next section, we present the results of applying the information approach to analyze the measurement data of various physical constants obtained by different methods. Only publications that contain data on the value of a physical constant and on its relative and standard uncertainties are considered in the proposed analysis.

As an example of the visual step-by-step application of the information approach, we consider the results of measuring the Boltzmann constant using the method of an acoustic gas thermometer (CoP_{SI} ≡ LMTθF). One of the many datasets can be found in [

We will apply IARU and IACU. To calculate the estimated observation interval of k, S_{k}, according to IARU, its values obtained in two projects were selected: k_{max} = 1.3806508 × 10^{−23} m^{2}·kg·s^{−2}·K^{−1} [ ] and k_{min} = 1.3806484 × 10^{−23} m^{2}·kg·s^{−2}·K^{−1} [

S_{k} = k_{max} − k_{min} = 2.4 × 10^{−29} m^{2}·kg/(s^{2}·K). (16)

One can calculate the comparative uncertainty ε_{LMTθF} and the lowest relative uncertainty r_{LMTθF} taking into account Equations (1), (6), (8), (10), and (16):

(z′ − β′)_{LMTθF} = (e_{l}·e_{m}·e_{t}·e_{θ}·e_{f} − 1)/2 − 5 = (7 × 3 × 9 × 9 × 3 − 1)/2 − 5 = 2546, (17)

γ_{LMTθF} = (z″ − β″)_{LMTθF} = (z′ − β′)^{2}/μ_{SI} = 2546^{2}/38265 ≈ 169, (18)

ε_{LMTθF} = (Δu/S)_{LMTθF} = 2546/38265 + 169.4/2546 = 0.1331. (19)

Δ_{LMTθF} = ε_{LMTθF}·S_{k} = 0.1331 × 2.4 × 10^{−29} = 3.2 × 10^{−30} m^{2}·kg/(s^{2}·K). (20)

r_{LMTθF} = Δ_{LMTθF}/((k_{max} + k_{min})/2) = 3.2 × 10^{−30}/(1.3806496 × 10^{−23}) = 2.3 × 10^{−7}. (21)

where "−1" corresponds to the case where all the exponents of the base quantities are zero in Equation (9); 5 corresponds to the five base quantities L, M, T, θ, and F; Δ_{LMTθF} is the absolute uncertainty.
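The chain (16)-(21) can be verified numerically; a sketch using only the values already given above:

```python
k_max = 1.3806508e-23   # m^2*kg/(s^2*K), selected maximum value
k_min = 1.3806484e-23   # selected minimum value

s_k = k_max - k_min                        # Eq. (16): observation interval
zb = (7 * 3 * 9 * 9 * 3 - 1) // 2 - 5      # Eq. (17): 2546
gamma = zb**2 / 38265                      # Eq. (18): ~169.4
eps = zb / 38265 + gamma / zb              # Eq. (19): comparative uncertainty
delta = eps * s_k                          # Eq. (20): absolute uncertainty
r = delta / ((k_max + k_min) / 2)          # Eq. (21): relative uncertainty
print(f"{eps:.4f} {r:.1e}")  # → 0.1331 2.3e-07
```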

The value of r_{LMTθF} = 2.3 × 10^{−7} calculated by IARU is in sufficient agreement with 6.0 × 10^{−7} [^{−7} [

Year | Boltzmann constant, k·10^{23} m^{2}·kg/(s^{2}·K) | Achieved relative uncertainty, r_{k}·10^{6} | Absolute uncertainty, Δ_{k}·10^{29} m^{2}·kg/(s^{2}·K) | k possible interval of placing*, u_{k}·10^{29} m^{2}·kg/(s^{2}·K) | Calculated comparative uncertainty, ε′_{k} = Δ_{k}/u_{k} (IACU) | Calculated comparative uncertainty, ε″_{k} = Δ_{k}/S_{k} (IARU) | Ref. |
---|---|---|---|---|---|---|---|

2009 | 1.3806495 | 2.7 | 3.73 | 7.4 | 0.5038 | 1.1393 | [ |

2010 | 1.3806496 | 3.1 | 4.28 | 8.8 | 0.4864 | 1.3081 | [ |

2015 | 1.3806487 | 2.0 | 2.76 | 2.7 | 1.0227 | 0.8439 | [ |

2015 | 1.3806508 | 1.1 | 1.52 | 2.9 | 0.5237 | 0.4642 | [ |

2017 | 1.3806488 | 0.6 | 0.83 | 1.6 | 0.5177 | 0.2532 | [ |

2017 | 1.3806486 | 0.7 | 0.97 | 2.0 | 0.4832 | 0.2954 | [ |

2017 | 1.3806484 | 2.0 | 2.76 | 5.5 | 0.5020 | 0.8439 | [ |

*Data are introduced in [

These data are consistent with the μ-rule: the experimentally achieved relative uncertainty is always greater than that calculated by the information approach (Section 2.4). Furthermore, the data introduced in the table above suggest the following.

1) Although the authors of publications declared that they considered all the possible sources of uncertainty, the values of absolute and relative uncertainties can still differ by more than a factor of two. A similar situation exists in the spread of the values of comparative uncertainties (IARU). This reflects the existence of hidden uncertainties that have eluded the attention of researchers.

2) The results from the use of IACU indicate a relative agreement between the magnitudes of the experimental comparative uncertainties, together with their significant discrepancy (more than 3 - 4 times) from the recommended value (0.1331). This situation is explained by the fact that, on the one hand, research teams learn from each other in the search for and elimination of undetected or unaccounted-for uncertainties, thereby ensuring the relative uniformity of the magnitude of the experimental comparative uncertainty. On the other hand, it should be considered that the acoustic gas thermometer method is based on the concept of an ideal gas, although the interaction between gas particles is not well understood. An additional difficulty is associated with measuring the molar concentration of gas per unit volume, and the volume itself, with a comparable degree of accuracy. It should also be noted that the total volume includes the volume of the connecting pipes to the pressure gauges. Therefore, there may be significant unaccounted-for uncertainties due to both the formulation of the experimental model and the achievable accuracy of the values considered in the calculation. Moreover, the proximity of the acoustic mode to the shell resonance leads to an unacceptably large perturbation of the data for this mode. In addition, experimenters consider a much smaller number of variables compared with the recommended ones (see

Because the step-by-step procedure for applying the information approach was described in detail in Section 3.1, generalized information on the data sets of measurements of the Planck constant, Boltzmann constant, Hubble constant, and gravitational constant is presented below (

Looking more closely at these data, we can make the following comments.

1) In measuring the Planck constant, h, when moving from a model (LMTF) to a CoP_{SI} with a larger number of dimensionless criteria (LMTI), the comparative uncertainty increases. This change is due to the potential effects of interaction between an increased number of variables that may or may not be considered by the researcher. At the same time, the r_{exp}/r_{SI} ratio for CoP_{SI} ≡ LMTI is much smaller, which indicates the advantage of the Kibble balance method for measuring the Planck constant. This is also confirmed by the significant difference in the

Physical constant, publications interval | Measurement method | CoP | Comparative uncertainty according to CoP_{SI}, ε_{SI} | Achieved experimental lowest comparative uncertainty (IACU), ε_{exp} | Ratio ε_{exp}/ε_{SI} | Relative uncertainty according to CoP_{SI} (IARU), r_{SI} | Achieved experimental lowest relative uncertainty, r_{exp} | Ratio r_{exp}/r_{SI}
---|---|---|---|---|---|---|---|---
Planck constant, h, 2009-2017 (a) | KB^{1} | LMTI | 0.0245 | 0.3976 [ | 15.9 | 4.5 × 10^{−9} [ | 1.3 × 10^{−8} [ | 2.9
 | XRCD^{2} | LMTF | 0.0145 | 0.4733 [ | 32.6 | 1.0 × 10^{−9} [ | 9.1 × 10^{−9} [ | 9.1
Boltzmann constant, k_{b}, 2009-2018 (b) | AGT^{3} | LMTθF | 0.1331 | 0.4832 [ | 3.6 | 2.3 × 10^{−7} [ | 6.0 × 10^{−7} [ | 2.6
 | DCGT^{4} | LMTθI | 0.2220 | 0.5044 [ | 2.3 | 4.3 × 10^{−7} [ | 3.7 × 10^{−7} [ | 0.9
 | JNT^{5} | LMTθI | 0.2220 | no data | no data | 1.4 × 10^{−6} [ | 2.7 × 10^{−6} [ | 1.9
 | DBT^{6} | LMTθF | 0.1331 | no data | no data | 2.1 × 10^{−5} [ | 2.4 × 10^{−5} [ | 1.1
Hubble constant, H_{0}, 2009-2019 (c) | BDL^{7} | LMT | 0.0048 | 0.3409 [ | 71.0 | 2.3 × 10^{−4} [ | 1.0 × 10^{−2} [ | 44
 | CMB^{8} | LMTθ | 0.0442 | 0.1818 [ | 4.1 | 2.9 × 10^{−3} [ | 7.0 × 10^{−3} [ | 2.4
 | BAO^{9} | LMT | 0.0048 | 0.5 [ | 104 | 1.8 × 10^{−4} [ | 1.0 × 10^{−2} [ | 56
Gravitational constant, G, 2000-2018 (d) | Mechanical methods^{10} | LMT | 0.0048 | 0.4819 [ | 100 | 1.5 × 10^{−6} [ | 1.9 × 10^{−5} [ | 12.7
 | Electromechanical methods^{11} | LMTI | 0.0245 | 0.1930 [ | 7.9 | 6.3 × 10^{−6} [ | 1.2 × 10^{−5} [ | 1.9

^{1}KB—Kibble balance. Data include results of measurements taken in seven laboratories from 2014 to 2017. ^{2}XRCD—X-ray crystal density. Data include results of measurements taken in seven laboratories from 2011 to 2018. ^{3}AGT—acoustic gas thermometer. Data include results of measurements taken in seven laboratories from 2009 to 2017. ^{4}DCGT – dielectric constant gas thermometer. Data include results of measurements taken in six laboratories from 2012 to 2018. ^{5}JNT—Johnson noise thermometer. Data include results of measurements taken in six laboratories from 2011 to 2017. ^{6}DBT—Doppler broadening thermometer. Data include results of measurements taken in six laboratories from 2007 to 2015. ^{7}BDL—brightness of distance ladder. Data include results of measurements taken in seven laboratories from 2011 to 2019. ^{8}CMB—cosmic microwave background. Data include results of measurements taken in six laboratories from 2009 to 2018. ^{9}BAO—baryonic acoustic oscillations. Data include results of measurements taken in four laboratories from 2014 to 2018. ^{10}Data include results of measurements taken in seven laboratories from 2000 to 2014. ^{11}Data include results of measurements taken in five laboratories from 2001 to 2018.

value of the comparative uncertainty for CoP_{SI} ≡ LMTF (XRCD: 0.0146) compared with CoP_{SI} ≡ LMTI (KB: 0.0245), with almost equal experimental relative uncertainties achieved (1.3 × 10^{−8} ≈ 1.2 × 10^{−8}).

As stated in [, given the r_{exp}/r_{SI} ratio (LMTI: 2.9, LMTF: 9.1), there is an urgent need to reduce the influence of sources of uncertainty for XRCD.

It should be noted that, in the framework of the information approach, the statement that “after the Planck constant is constant (an exact number with zero uncertainty...)” [

2) The data from the table show that the values of r_{SI}, calculated in accordance with the information approach, differ by two orders of magnitude for different methods of measuring the Boltzmann constant k! That is why, in the framework of the information approach, in contrast to the concept approved by CODATA, it is not recommended to determine and declare only one value of relative uncertainty when measuring the Boltzmann constant (and other constants) by various methods.

Using an information-oriented approach, both a respected scientist and a simple engineer can easily identify the advantages or disadvantages of a particular measurement method. Thus, analyzing the data, one can note the small values of r_{exp}/r_{SI} achieved by JNT (1.9) and DBT (1.1). At the same time, the achieved experimental least relative uncertainty of 3.7 × 10^{−7}, realized using DCGT, is doubtful. This is explained by the requirement of the μ-rule, according to which the theoretically calculated relative uncertainty (here 4.3 × 10^{−7}) must always be less than the experimental one; for DCGT it instead exceeds the achieved value (3.7 × 10^{−7}). Therefore, researchers of [

3) From the table it follows that, when measuring H_{0} using BDL and BAO (CoP_{SI} ≡ LMT), the experimental relative uncertainties (0.01 [ ]) are so far from the theoretically recommended values that CoP_{SI} ≡ LMT cannot be used in the future. Therefore, the conviction of scientists in accounting for all possible sources of uncertainties is far from providing a guarantee of achieving the true value of H_{0} by these two methods.

Following the logic of the information approach, it is again necessary to recognize that the method of measuring H_{0} using the cosmic microwave background is the most promising and theoretically justified, and yields the most reliable experimental data. This conclusion can be confirmed by calculating the ratio ε_{SI}/r_{exp} considering the data in

(ε_{SI}/r_{exp})_{BAO} = 0.0048/0.01 = 0.48,
(ε_{SI}/r_{exp})_{BDL} = 0.0048/0.01 = 0.48,
(ε_{SI}/r_{exp})_{CMB} = 0.0442/0.007 = 6.3,
(ε_{SI}/r_{exp})_{BAO} = (ε_{SI}/r_{exp})_{BDL} < (ε_{SI}/r_{exp})_{CMB}. (22)

Relation (22) reflects the fact that the best accuracy in measuring the Hubble constant can be achieved for the class of phenomena with a large number of base quantities.
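The comparison in Relation (22) amounts to three divisions over the ε_{SI} and r_{exp} values; a minimal sketch (method labels follow the table footnotes, the dictionary layout is ours):

```python
# eps_SI and r_exp for the three Hubble-constant methods, from the table
eps_SI = {"BDL": 0.0048, "BAO": 0.0048, "CMB": 0.0442}
r_exp  = {"BDL": 0.01,   "BAO": 0.01,   "CMB": 0.007}

ratio = {m: eps_SI[m] / r_exp[m] for m in eps_SI}
# CMB, built on the class with more base quantities, gives the largest ratio
best = max(ratio, key=ratio.get)
print(best, round(ratio[best], 1))   # prints: CMB 6.3
```

Picking the method with the largest ε_{SI}/r_{exp} singles out CMB, in line with the conclusion of the text.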

The data also show that the values of r_{exp} exceed the recommended r_{SI} by 44 and 56 times for BDL and BAO, although when measuring H_{0} with CMB, r_{exp}/r_{SI} = 2.4. Because consistency is one of the basic requirements for analyzing results, the current situation needs to be explained. The information approach declares that an inevitable primordial absolute uncertainty of the model already exists in the process of developing a method for measuring a physical constant. That is why great caution should be exercised when making predictions about increasing the accuracy of the Hubble constant. Most astronomers using various methods of calculating H_{0} assume that, with an increase in the number of observed space objects, absolute (ideal) statistical stability of the observed parameters and characteristics of any physical phenomena (real events, processes, and fields) is achieved. However, as was proved [

4) The huge difference between the achieved experimental relative uncertainty in measuring the gravitational constant by mechanical methods and the theoretically recommended value (r_{exp}/r_{SI} = 12.7) confirms the thesis of the information approach about the inappropriateness of their use in determining the true value of the gravitational constant. At the same time, a higher measurement accuracy of G was achieved using electromechanical methods: r_{exp}/r_{SI} = 1.9. From the point of view of the information approach, further clarification of the true value of the gravitational constant and a decrease in the experimental relative uncertainty are possible when using models and measurement methods with a larger number of base quantities, for example, CoP_{SI} ≡ LMTθI.

In addition to the comments made in 1) - 4), one can make the following comments and conclusions, which are sometimes not obvious and do not coincide with the provisions of the generally accepted CODATA methodology.

a) The values of the minimum attainable comparative and relative uncertainties calculated according to the information approach depend on the choice of the class of phenomena, and theory can predict their values. It is important to note that during the transition from the mechanistic model (LMT) to a CoP_{SI} with a larger number of base quantities, the uncertainty increases. This is explained by a change in the number of potential interaction effects between an increased number of quantities that may or may not be considered by the researcher.

b) One may notice large differences in the level of consistency between ε_{SI} and ε_{exp} calculated according to the IACU. This level can be called a “coefficient of consistency” for a physical constant measured by various methods. In particular, when measuring H_{0}, the ratio ε_{exp}/ε_{SI} is 71.0 (BDL) and 104 (BAO), while using CMB this ratio is only 4.1. A similar situation exists for measuring the gravitational constant: ε_{exp}/ε_{SI} = 100 when implementing mechanical methods, and ε_{exp}/ε_{SI} = 7.9 using electromechanical methods. At the same time, when measuring the Planck constant with KB and XRCD, and when using AGT and DCGT to calculate the Boltzmann constant, the values of the ε_{exp}/ε_{SI} ratios are very close to each other. Within the information approach, this situation indicates that BDL, BAO, and the mechanical methods for G have limited use, and it can even be argued that they are not recommended. Moreover, using simple relationships calculated in accordance with a theoretically sound approach, we can draw very serious and far-reaching conclusions. It is important to emphasize once again that, using the IACU, researchers can find out for which method of measuring a physical constant it is necessary to continue the search for all possible sources of uncertainties. Thus, the ratio ε_{exp}/ε_{SI} is an objective criterion for assessing the achieved accuracy when comparing different methods of measuring one specific physical constant.

c) The introduction of comparative uncertainty, through the IARU, to evaluate the accuracy of measurements of physical constants allows the calculation of the r_{exp}/r_{SI} ratio. From the data it follows that, for classes of phenomena with a large number of base quantities, the r_{exp}/r_{SI} ratio varies from 0.9 to 2.9. Thus, in the framework of the information approach, we can consider the r_{exp}/r_{SI} ratio as a universal indicator of the achievements of scientists in measuring any physical constant using a variety of methods.
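The r_{exp}/r_{SI} indicator can be recomputed for every method from the r_{SI} and r_{exp} columns of the table. A sketch (method abbreviations follow the table footnotes; the cutoff of 3 used to split the two groups is our illustrative choice, not a value from the text):

```python
# (r_SI, r_exp) per method, copied from the table
data = {
    "KB":   (4.5e-9, 1.3e-8),  "XRCD": (1.0e-9, 9.1e-9),
    "AGT":  (2.3e-7, 6.0e-7),  "DCGT": (4.3e-7, 3.7e-7),
    "JNT":  (1.4e-6, 2.7e-6),  "DBT":  (2.1e-5, 2.4e-5),
    "BDL":  (2.3e-4, 1.0e-2),  "CMB":  (2.9e-3, 7.0e-3),
    "BAO":  (1.8e-4, 1.0e-2),  "Mech": (1.5e-6, 1.9e-5),
    "EM":   (6.3e-6, 1.2e-5),
}
ratios = {m: r_exp / r_si for m, (r_si, r_exp) in data.items()}

# Methods in the ~0.9-2.9 band discussed in the text (cutoff 3 is ours)
suitable = sorted(m for m, v in ratios.items() if v <= 3)
print(suitable)   # the seven methods with ratio in the ~0.9-2.9 band
```

The computed ratios reproduce the table's last column (2.9, 9.1, 2.6, 0.9, 1.9, 1.1, 44, 2.4, 56, 12.7, 1.9 after rounding), and the split cleanly separates the methods the text recommends from BDL, BAO, XRCD, and the mechanical methods for G.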

Thanks to an amazing combination of information theory, which is strictly thought out and equipped with an excellent mathematical apparatus, with a carefully selected and verified database of experimental physics, it became possible to calculate the accuracy limit for measuring physical constants. This approach is realized without any statistical methods, weighted coefficients, and criteria of consistency.

Being unsatisfied with the statistical evaluation of measurements of physical constants, the author looked for an approach in which the mathematical and logical difficulties are resolved by simple definitions and calculations that are easy to understand. The author suspects that the information approach may also shed new light on old difficulties. It is generally taken for granted that if a method has already been tested and accepted by the scientific community, there is no need to look for anything better. Nevertheless, it may be worth seeking descriptions in which the situation is simpler; perhaps it need not be much more complicated than the SEM.

One of the key concepts of the information approach is the application of the concept of complexity using the theory of information to the International System of Units, which is the result of the intellectual activity of scientists and does not exist in nature. We use the concept of complexity to measure the amount of information contained in the measurement model of a specific physical variable, and then use SI with seven base quantities to classify the classes of phenomena inherent in a particular measurement method. The proposed informational approach has the advantage that it takes into account both the physical nature of the experiment (a qualitative set of base quantities) and information content due to the specific number of variables taken into account in the model. In addition, the proposed measure of the proximity of the model to a real object (comparative uncertainty) can be used for any data set without requiring consistent results.

Comparative uncertainty remains applicable even where the traditional statistical methods used to process data sets of physical measurement results fail. Compared with the CODATA technique, the information approach has two obvious additional advantages. The first is that the information approach has the property of predictability (studying the extent to which events can be predicted [

The author notes that it is likely, at least philosophically, more acceptable that the value of the relative uncertainty of the measurement of a physical constant is clearly defined by a theoretically proven and simply implemented information method, as opposed to a statistical and expert assessment. It would be premature to argue that this contradicts, in everyday practice, the Ockham principle (entities should not be introduced except when strictly necessary [

From the above it follows that comparative uncertainty is inherent in any data set used in the analysis of measurements of physical constants, which is an additional justification for clarifying standard practice. This uncertainty is always present and cannot be eliminated by standard data analysis; if it is ignored, the results of measurements of physical constants may be misinterpreted by future, more accurate experiments.

When considering mathematical modeling of the process of measuring a physical constant, the question is whether physics should obey mathematical SEM or adhere more closely to observations and data [

Thus, it turns out that the problem that researchers face in the process of calculating relative uncertainty, which allows us to confirm the true value of a physical constant, ultimately boils down to the problem of choosing a model of the class of a phenomenon for the measurement process. With this formulation of the question, limitations arise due to the human mind, namely the knowledge, experience, and intuition of the researcher. The elimination of such limitations, as we have seen, can be successfully implemented using the information approach, which can be considered the main tool for assessing the accuracy of measuring a physical constant.

In this study, we presented the possibility of applying the concept of information to the problem of assessing the accuracy of measuring a physical constant. One of the important conclusions is that the amount of information in the model is the key to understanding the physical meaning of the threshold mismatch between the result of the experiment and the mathematical representation of the measurement process. This conclusion is consistent with the idea that the fundamental task of evaluating calculation accuracy is to select a channel for transmitting information through a model that developers choose in accordance with their experience, knowledge and intuition. The choice of the structure of the model and its class of phenomena leads to a situation where there is an inevitable measurement uncertainty. Researchers can no longer ignore or eliminate it, since future studies on the measurement of physical constants may incorrectly interpret the results.

A reliable, information-oriented, theoretically substantiated approach is proposed for calculating the relative uncertainty when measuring a physical constant. This approach uses the comparative uncertainty inherent in any model of the measurement process, the value of which is determined by the qualitative set of base quantities and the total number of derived variables. The approach is not based on the assumption of a Gaussian distribution and is applicable to the analysis of results obtained over both long and short periods of time.

Calculated in accordance with the IARU for CoP = LMTθF, the relative uncertainty (2.3 × 10^{−7}) of the Boltzmann constant measurement using the acoustic gas thermometer method is close to the smallest achieved experimental uncertainty of 3.7 × 10^{−7} [ ]. At the same time, the calculation of r_{SI} is carried out over a very short period of time, incomparably shorter than by the CODATA method.

The proposed approach was used to estimate the relative and comparative uncertainties when measuring the Planck constant, Boltzmann constant, Hubble constant, and gravitational constant according to the results of studies published for the period 2000-2019.

The ratio of the minimum achieved experimental comparative uncertainty to the theoretically calculated one revealed the unsuitability of models with a small number of base quantities (LMT and LMTF) for measuring the Planck constant, the Hubble constant, and the gravitational constant. The ratio ε_{exp}/ε_{SI} is an objective criterion for assessing the achieved accuracy when comparing various methods of measuring one specific physical constant.

When using models with a large number of base quantities, for example, LMTI or LMTθF, the ratio of the minimum experimental relative uncertainty achieved to the theoretically calculated r_{exp}/r_{SI} varies from 0.9 to 2.9, which indicates the suitability of these methods for measuring physical constants. At the same time, r_{exp}/r_{SI} varies from 9 to 56 for models with a low number of base quantities, which is unacceptably high for practical use. Thus, in the framework of the information approach, r_{exp}/r_{SI} can be considered as a universal metric for assessing the practical level of accuracy when measuring any physical constants using various methods.

It should be noted that the application of the information approach allows us to make a very non-trivial conclusion: when measuring physical constants using various methods, it is not recommended to state only one value of relative uncertainty.

The author understands that the stated conclusions may be hard for part of the scientific community to accept, since they do not fit into the generally accepted point of view. However, the author hopes that readers will find the time and desire to identify possible contradictions or fundamental shortcomings of the proposed method. At the same time, the presented results do not in any way abolish the basic principles of measurement theory, which always remain valid but must be applied separately at the subsequent stage of model implementation.

A rigorous analysis of the data presented, confirmed by numerical results, shows that the proposed method is not only reliable and robust but also effective. The results of the study do not rule out the applicability of an information-oriented approach to the calculation of the relative uncertainty in the measurement of physical constants, and the new evidence obtained so far is consistently in its favor.

In this time of uncertainty, it is very important to understand the origins of the human “fuzzy” perception of the world around us. According to the author, it is the information-theoretic approach that allows us to understand the physical reasons why we, whether we want it or not, see the object under study in the “fog” of errors and doubts.

The author declares no conflicts of interest regarding the publication of this paper.

Menin, B. (2020) High Accuracy When Measuring Physical Constants: From the Perspective of the Information-Theoretic Approach. Journal of Applied Mathematics and Physics, 8, 861-887. https://doi.org/10.4236/jamp.2020.85067