High Accuracy When Measuring Physical Constants: From the Perspective of the Information-Theoretic Approach

The practical value of high-precision models of the studied physical phenomena and technological processes is a decisive factor in science and technology. Numerous methods and criteria for optimizing models have been proposed. However, the classification of measurement uncertainties according to the number of variables taken into account and their qualitative choice is still not given sufficient attention. Our goal is to develop a new criterion suitable for any group of experimental data obtained by applying various measurement methods. Using the "information-theoretic method", we propose two procedures for analyzing experimental results using a quantitative indicator to calculate the relative uncertainty of the measurement model, which, in turn, determines the legitimacy of the declared value of a physical constant. The presented procedure is used to analyze the results of measurements of the Boltzmann constant, Planck constant, Hubble constant, and gravitational constant.


Introduction
Any modern scientific research is based on physical laws in which there are numerical constants having specific and universally used symbols. These quantities are called physical constants. Their exact values are necessary, first, to make reliable, verifiable forecasts about the structure of the world around us. Second, checking their numerical values through complex experiments allows us to identify the consistency and acceptability of a particular physical theory.
When scientists take measurements or calculate some physical constant based on their data, they usually indicate the range of values within which this "true value" is located with a given probability. The result is not only a number, but also a number with a measurement uncertainty [1] [2]. In this case, the analysis of experimental data requires a careful selection of the mathematical apparatus for a balanced objective assessment of the available results from the position of their consistency. For this, a metric is selected that can represent the quantitative value of this consistency. Moreover, it is well known that with an incorrectly chosen metric for evaluating the available data on the measurement of physical constants, the expected analysis efficiency will turn out to be low, which will lead to erroneous conclusions. To increase the credibility of a suitable metric, various statistical methods are used.
As an example, we can draw attention to existing statistical methods used to estimate the Hubble constant. As shown in [3], these methods only partially consider random measurement errors, which leads to a situation in which estimates of the Hubble constant, H_0, are statistically inconsistent and systematically too low. In particular, one of the methods used to calculate the Hubble constant is based on the study of type Ia supernovae using the distance ladder. Supernovae are known as "standard candles" because they produce consistent peak brightness values. Because the observed brightness depends on the distance to the supernova, it can be used to measure the distance from the Earth. The process of measuring distance is very complicated. It is based on the calibration of the distance ladder, which leads to significant uncertainty, both systematic and random. The systematic error is the larger of the two, and depending on whether the error is positive or negative, the Hubble constant is underestimated or overestimated, respectively. The random part of the error causes some measured distances to be too large and others too small. Contrary to what one might think, these errors on average do not cancel but lead to a systematically too low value of H_0 [3].
More specifically, various statistical methods are used to estimate the Hubble constant, including weighted regression analysis and Bayesian analysis, which allows other available data sources to be included. These methods are extensions of the usual least-squares model, in which velocity is regressed on distance and the estimate is found by minimizing the squared error. Common to the three methods (least-squares model, weighted regression, and Bayesian analysis) is that the error in the estimate does not disappear and does not even decrease when there are more velocity/distance measurements [3].
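The least-squares step described above can be sketched as follows. This is a minimal illustration with synthetic, noise-free numbers (not real observations), assuming the simplest through-the-origin form of the regression, v = H_0 * d:

```python
import numpy as np

def estimate_h0(distances_mpc, velocities_km_s):
    """Least-squares slope of the velocity-distance relation through the
    origin (v = H0 * d): H0 = sum(v*d) / sum(d*d)."""
    d = np.asarray(distances_mpc, dtype=float)
    v = np.asarray(velocities_km_s, dtype=float)
    return float(np.dot(v, d) / np.dot(d, d))

# Synthetic, noise-free data obeying v = 70 * d (km/s per Mpc):
d = np.array([10.0, 50.0, 120.0, 300.0])
v = 70.0 * d
print(estimate_h0(d, v))  # → 70.0
```

With real data the residuals do not vanish, which is exactly the point made in [3]: adding more velocity/distance pairs does not by itself remove the error in the estimate.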
Another example, close to the application of statistical methods for verifying the magnitude of a physical constant, is the realization of the International System of Units (SI). One of the outstanding scientific achievements of the 21st century is the approval of a new version of the SI [4]. From 2019, this system includes seven base units whose definitions are fixed by exact numerical values of selected fundamental constants, the values of which were obtained by the CODATA task group using a least-squares adjustment (LSA). The main target of using LSA is to fit models to measurements that are accompanied by quoted uncertainties. The weights are chosen depending on these uncertainties. The advantage of LSA is that its score corresponds to a maximum-likelihood decision. This provides guarantees for the maximum-likelihood estimate (consistency, asymptotic normality), which in turn allows us to build hypothesis tests and obtain confidence intervals for the estimated regression coefficients.
A distinctive feature of the LSA method is that it is aimed at checking the consistency of the results, and for this, the initial experimental values are "adjusted," that is, changed to optimize the final dispersion of the set. In the case of conflicting results, the associated uncertainties are increased in the CODATA analysis [7].
In this case, the analysis can reflect the bias of a statistical expert motivated by personal convictions or preferences [8]. It means that the method involves an element of subjective judgment [9]. In other words, the CODATA concept is not without drawbacks: a statistically significant trend, the aggregate value of consensus, statistical control, underestimated uncertainties, and the weight given to expert judgment. Perhaps the CODATA values have not yet stabilized [5].
However, one should not underestimate the significant efforts of scientists to avoid the above effects. The fact is that the determination of each physical constant using a special CODATA adjustment usually includes the results of measurements of various independent research groups working on the problem of measuring the physical constant for decades. The goal of the coordinated efforts of scientists was to guarantee a situation where systematic effects were not missed.
To summarize the above, it is necessary to pay attention to one important feature inherent in all methods of analyzing experimental data and uncertainties in the measurement of physical constants. Systematic uncertainties arising from the idealization of modeling and from the philosophical and scientific preferences of researchers are completely ignored. In other words, the choice of the model of the measuring process is subjective in nature, depending on the consciousness of the researcher and his preferences in choosing a quantitative and qualitative set of variables to take into account. This fact complicates the already complex process of checking the model by creating an uncertain target: a situation in which neither the simulated nor the observed behavior of the system is precisely known.
Therefore, when we talk about the level of accuracy of measuring physical constants, we must understand that modern measuring models, test benches, and calculation algorithms have become a very powerful and accurate tool since 2010 [10]. This is true with large reservations precisely because they are based on a large number of assumptions. As a result, to understand carefully obtained results, it is necessary to find a theoretically substantiated method that does not use "weighted estimates and coefficients".
The fact is that some uncertainties in the experimental results are due to the philosophy of researchers. They either report unjustifiably large errors so that they are not blamed for a wrong approach, or underestimate errors, unconsciously wanting to present the best result (the author is far from suspecting research teams of a scientific adjustment of facts). That is life. Therefore, a method is needed that excludes the subjective component of the measurement process. We show that with the help of concepts and the mathematical apparatus of information theory, it is possible, theoretically and without any additional assumptions and simplifications, to calculate the amount of information contained in the measurement model of the physical constant. This circumstance allows us to establish the value of relative uncertainty, which, in turn, determines the legitimacy of the declared value of the physical constant. We also present specific examples of the application of the described information approach. The presented procedure for calculating relative uncertainty is used to analyze the results of measurements of the Boltzmann constant, Planck constant, Hubble constant, and gravitational constant. Analysis of publications and all necessary calculations were carried out in the office of Mechanical & Refrigeration Consultation expert (Beer-Sheba, Israel).

Preliminary Notes
It may seem strange that before starting the experiment you need to list all the base quantities used and the total number of variables considered in the model. Moreover, this important point is completely ignored in the canonical CODATA method for calculating the target value of the physical constant and its relative uncertainty. The need for this requirement is explained as follows.
The fact is that any measurement of a variable by itself implies the presence of an already formulated model. As mentioned in [11], a measurement model constitutes a relationship between the output quantities, or measurands (the quantities intended to be measured), and the input quantities known to be involved in the measurement. In this case, the researcher, based on his own knowledge, experience, and intuition, uses, as a rule, dimensional and dimensionless variables from the International System of Units (SI). This means, on the one hand, that the inclusion of any particular variable is equally likely: to describe the phenomenon being studied, the scientist or engineer selects a qualitative and quantitative set of variables as he wishes. The most famous example of such a situation is the possibility of studying an electron as both a particle and a wave. Although two qualitatively different sets of variables are used to describe the motion of an electron, each set corresponds to its own class of phenomena (CoP) [13].
Refining the process of formulating the model from the perspective of choosing a specific CoP may offer a new interpretation of the results of measurements of physical constants, which will be discussed later in the article.
In addition, it should be noted that a researcher, choosing a specific CoP, in practice discards possible potential hidden relationships between the variables considered in the model and the ignored variables. Thus, of course, this can affect the accuracy of the proposed model and even lead to an increase in its uncertainty. This is explained by the fact that, although in the opinion of a significant part of the scientific community, the model error can be reduced by using a large number of variables thanks to improved algorithms and supercomputers, each variable introduces its own uncertainty into the total integral error that affects the desired result. However, as the dimension of the model increases, only the reliability of the model results improves [14].
To assess the magnitude of the threshold mismatch [1] between the model and the measurement process under study, due to the choice of CoP, we will give the following reasoning and calculations.

The Amount of Information Contained in a Model
In science and technology, a wide variety of unit systems can be used that are most suitable for a particular application, for example, Imperial and US customary units or Natural units [15]. However, the most widely used system is the international standard metric system-the International System of Units (SI).
Therefore, further reasoning and calculations are given as applied to SI, especially since SI units are also used in the CODATA methodology. However, since SI is an Abelian group [16] [17], like any other system of units, the final conclusions do not depend on the choice of a specific system of units. It can be proved, using the concepts and mathematical apparatus of the theory of similarity [12], that SI includes a large but finite number of dimensionless variables, μ_SI (Equation (1)) [16]. Obviously, all μ_SI criteria cannot be simultaneously taken into account in a model. Typically, a researcher uses 10, 20, or even 130 variables with CoP_SI ≡ LMTθF [18] to describe the process being studied.
For further reasoning, we indicate that information entropy [19] is manifested through the interaction of the studied physical system and the formulated model. This model is an information channel between the physical system and the observer. As a result, information entropy is subjective, depending on the consciousness of the researcher with his preferences in choosing a quantitative and qualitative set of variables taken into account.
We will use an analogy with the theory of signal transmission. Imagine that the observed measurement process has a huge number of properties (quantities, criteria) that characterize its content and its interaction with the environment. Then, we assume that each dimensionless complex represents an original readout (reading [20] [21]) through which the observer can obtain some information on the dimensionless researched field u (the researched process). In other words, the researcher observing a physical phenomenon, analyzing the process, or designing the device selects, according to his experience, knowledge, and intuition, certain characteristics of the object. In selecting the object this way, the connections of the actual object with the enveloping environment are destroyed. In addition, the modeler takes into account a relatively smaller number of quantities than the current reality contains, due to constraints of time and of technical and financial resources. Therefore, the "image" of the object being studied is shown in the model with a certain uncertainty, which depends primarily on the number of quantities taken into account. In addition, the object can be addressed by different groups of researchers, who use different approaches for solving specific problems and, accordingly, different groups of variables, which differ from each other in quality and quantity. Thus, for any physical or technical problem, the occurrence of a particular variable in the model can be considered a random process.
Then, let there be a situation where all μ_SI quantities of SI can be taken into account, provided that the choice of these quantities is a priori considered equally probable. In this case, we are guided by Brillouin's idea connecting the amount of information obtained in the simulation (observation), without introducing any disturbance into the measurement process, with the uncertainty inherent in the selected model [20].
Comparing the number of variables in SI with the number of variables selected in a particular model, it turns out that one can calculate the amount of information ΔA_e contained in the model (Equation (2)) [16], where ΔA_e is expressed in units of entropy and μ_SI includes the dimensionless criteria of SI. The physical content of Equation (2) is very important. For example, two research groups analyze the process of measuring a physical constant, and their results differ from each other. Who presented the more respectable option? Obviously, the choice of the class of phenomena and the number of variables considered will affect the information content of the model and will cause a different amount of information to be contained in it (Equations (3)-(5)) [22], where ΔA_b1γ and ΔA_b2γ are the amounts of information of the models formulated by the first and second research teams, respectively, compared with the model that takes into account the optimal number of dimensionless criteria γ_CoP inherent to a particular CoP; γ_1 and γ_2 are the numbers of dimensionless criteria in the first and second models, respectively. Looking at Equations (4) and (5), some readers may suggest that it is preferable to use a model with a large number of variables when modeling a physical process. However, this is a wrong conclusion, and here is why. By comparing ΔA_b1γ and ΔA_b2γ in absolute terms, the researcher can "instantly" determine which one is smaller. This means that the number of dimensionless criteria considered is closer to the optimal one, γ_CoP, corresponding to the minimum comparative uncertainty [20] (for its detailed calculation, see below). Thus, a project with a lower absolute value of |γ_CoP − γ_i| is more informative. Therefore, the information approach can significantly reduce the time spent by researchers on the analysis of publications.
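The comparison rule just described (the model with the smaller |γ_CoP − γ_i| is the more informative one) can be sketched in a few lines; the γ values below are hypothetical, not taken from any cited model:

```python
def more_informative_model(gamma_cop, gammas):
    """Return the 1-based index of the model whose number of dimensionless
    criteria is closest to the optimum gamma_CoP, i.e. minimal
    |gamma_i - gamma_CoP|."""
    diffs = [abs(g - gamma_cop) for g in gammas]
    return diffs.index(min(diffs)) + 1

# Hypothetical example: optimum of 19 criteria; model 1 uses 12 criteria,
# model 2 uses 24 criteria -> model 2 is closer to the optimum.
print(more_informative_model(19, [12, 24]))  # → 2
```

Note that the rule does not reward the larger variable count as such: a model can overshoot γ_CoP and lose informativeness just as easily as it can undershoot it.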
It is also advisable to emphasize the importance of introducing the concept of "information content" of the model, ΔA, from the point of view of choosing a specific model of the measurement process.
First, information content can provide a natural explanation for the preferred choice of a particular measurement method. Until now, it was almost impossible to recommend that scientists focus their efforts on a specific method. However, with the introduction of the concept of information content (Equations (3)-(5)), it is quite possible to state which of the models describing the same method of measuring the physical constant is most preferable.

Second, information content may mean that some models of measuring the physical constant are less preferable. Specific examples and detailed explanations are presented in Section 3.
Third, the information content implies that many models are unsuitable for measuring a particular physical constant. In these models, the number of variables does not correspond to the recommended number inherent in the selected class of phenomena. The accuracy of the model, usually associated with the number of variables considered, is seen in a different light when implementing the information approach (Section 3).

Fourth, an accurate description of the experimental setup in terms of the information approach requires some knowledge of the future. We know very well that the experiment itself never allows the experimenter to look into the future, but if we try to interpret what is happening, some expectation of the future experiment seems necessary. We suspect that this approach may allow us to reflect a state where some hidden variables that can influence the result are not considered by the researcher's conscious decision (Section 3).

Comparative Uncertainty
The amount of information contained in the model (Equation (2)) is only a sufficient condition for choosing the preferred option. In addition to this, we can formulate a necessary condition. Using Equation (2), we can obtain an expression for calculating the absolute uncertainty of the model, Δ_pmm [16], due to the choice of CoP and the number of variables considered in the model:

Δ_pmm = S·[(z′ − β′)/μ_SI + (z″ − β″)/(z′ − β′)], (6)

where S is the interval in which the dimensionless quantity u is located; z′ and β′ are, respectively, the number of dimensional physical quantities and the number of base quantities in the selected CoP; and z″ and β″ are, respectively, the number of dimensional physical quantities and the number of base quantities considered in the model. In dimensionless and dimensional form: S and Δu are, respectively, the range of variation and the total absolute uncertainty in determining the dimensionless quantity u; S* and ΔU are, respectively, the range of variation and the total absolute uncertainty in determining the dimensional quantity U; a is the dimensional scale parameter with the same dimension as U and S*; r is the relative uncertainty of the dimensional quantity U; and R is the relative uncertainty of the dimensionless quantity u.
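The comparative uncertainty can be evaluated numerically. The sketch below assumes the two-term form ε = Δ_pmm/S = (z′ − β′)/μ_SI + (z″ − β″)/(z′ − β′); the numbers passed in are placeholders for illustration, not values of μ_SI or of any CoP from Table 1:

```python
def comparative_uncertainty(mu_si, z1_b1, z2_b2):
    """epsilon = Delta_pmm / S for a chosen CoP and model.

    mu_si : total number of dimensionless criteria in SI (Equation (1))
    z1_b1 : z' - beta', criteria count of the selected CoP
    z2_b2 : z'' - beta'', criteria count of the particular model
    """
    return z1_b1 / mu_si + z2_b2 / z1_b1

# Placeholder numbers (illustration only):
print(comparative_uncertainty(40000, 200, 1))  # → 0.01
```

The first term grows with the size of the chosen CoP, the second shrinks with it, which is what makes a minimum (discussed next) possible.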
Secondly, any quantity expressed in another system of units will also have to comply with Equation (6) to maintain the basic relationships between physical variables. This equivalence ensures that physical models of reality remain consistent, regardless of the units chosen.
Fourth, the development of measuring equipment, improving the accuracy of measuring instruments, and improving existing and newly created measurement methods in the aggregate lead to an increase in knowledge about the object under study and, therefore, the value of achievable relative uncertainty decreases.
However, this process is not infinite and is limited by Equation (6). The reader should keep in mind that this limit is due not to a deficiency of the measuring equipment or an engineering device, but to the way the human brain works. In predicting the behavior of any physical process, physicists actually predict the tangible output of instrumentation. It is true that, according to the μ-rule, observation is not a measurement but a process that creates a unique physical world in relation to each specific observer.

B. Menin
In addition, using Equation (6), one can find the necessary conditions for approaching the smallest relative uncertainty of each CoP, r_CoP, the fulfillment of which can confirm the legitimacy of the declared measured value of the physical constant. For this, it is necessary to take the derivative of Δ_pmm/S with respect to z′ − β′ and equate it to zero. For example, for the thermal-electromechanical process (CoP_SI ≡ LMTθI), which is used in measuring the Boltzmann constant, the following statement must be considered. The dimension of any derived quantity q can be expressed as a unique combination of the dimensions of the base quantities raised to different powers [23]:

q ∝ L^l · M^m · T^t · θ^p · I^f, (9)

where l, m, ..., f are the exponents of the base quantities, which take only integer values and vary within certain intervals (Equation (10)); γ_LMTθI is the optimal number of criteria in a model inherent in CoP_SI ≡ LMTθI; and "−1" corresponds to the case where the exponents of all the base quantities are zero in Equation (9). Using calculations similar to (10)-(13), it is possible to calculate the achievable comparative uncertainties ε_CoP and the recommended number of quantities γ_CoP corresponding to different classes of phenomena (Table 1).
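The minimization step can be written out explicitly. Assuming ε(x) = x/μ_SI + (z″ − β″)/x with x = z′ − β′, setting dε/dx = 0 gives x_opt = sqrt(μ_SI·(z″ − β″)) and ε_min = 2·sqrt((z″ − β″)/μ_SI). A sketch with placeholder numbers (not Table 1 values):

```python
import math

def optimal_criteria(mu_si, z2_b2):
    """Stationary point of eps(x) = x/mu_si + z2_b2/x:
    d(eps)/dx = 1/mu_si - z2_b2/x**2 = 0  ->  x_opt = sqrt(mu_si * z2_b2)."""
    return math.sqrt(mu_si * z2_b2)

def min_comparative_uncertainty(mu_si, z2_b2):
    """Minimum comparative uncertainty, equal to 2*sqrt(z2_b2/mu_si)."""
    x = optimal_criteria(mu_si, z2_b2)
    return x / mu_si + z2_b2 / x

# Placeholder illustration: mu_si = 40000, z'' - beta'' = 1
print(optimal_criteria(40000, 1))             # → 200.0
print(min_comparative_uncertainty(40000, 1))  # → 0.01
```

Any other choice of z′ − β′ gives a strictly larger ε, which is why a finite, nonzero comparative uncertainty is attached to each CoP.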
Thus, an amazing opportunity is provided to calculate r_CoP by two methodologies within the framework of the information-based approach.

Two μ-Rule Methodologies
The first methodology, dictated by the μ-rule, is to analyze the data on the value of the achievable relative uncertainty at the moment, considering the latest measurement results. In this case, the possible interval of placement of the physical constant, S, is selected as the difference between its maximum and minimum measured values. This interval reflects, among other things, the imperfection of the measuring instruments, which leads to a significant increase in relative uncertainty. This can be illustrated by Equation (14), which indicates that the value of the relative uncertainty is finite and not equal to zero.
Assuming the conditions of Table 1 and Equation (1) hold, Equation (15) indicates that the ratio r_1/r_2 does not tend to infinity or to zero; its value is finite and reflects the increase in the accuracy of instruments when measuring a physical constant. An important advantage of this approach is its independence from the real instability of the results of experimental measurements. Although the goal of our work is to obtain the main restriction on the accuracy of measuring physical constants, we may also ask whether it is possible to achieve this limit in a physically correctly formulated model. Because our estimate is given by optimization in comparison with the achieved comparative uncertainty and observation interval, it is clear that in the practical case the limit cannot be reached. This is because there is an inevitable primordial uncertainty of the model, depending on the preferences of the researcher, based on his intuition, knowledge, and experience. The magnitude of this uncertainty indicates how likely it is that the researcher's personal philosophical inclinations will influence the outcome of this process. When a person mentally builds a model, at each stage of its construction there is some probability that the model will not correspond to the phenomenon with a high degree of accuracy.
In what follows, this method is denoted as IARU and is represented by the procedure below [24]:
1) From the published data of each experiment, the value α, the relative uncertainty r_α, and the standard uncertainty u_α (the possible interval of placement) of the physical constant are chosen;
2) The experimental absolute uncertainty Δ_α is calculated by multiplying the physical constant value α by its relative uncertainty r_α attained during the experiment, Δ_α = α·r_α;
7) The relative uncertainty obtained, r_IARU, is compared with the experimental relative uncertainties r_i achieved in various studies;
8) According to IARU, the comparative experimental uncertainty of each study, ε_IARUi, is calculated by dividing the experimental absolute uncertainty of each study, Δ_α, by the difference between the maximum and minimum values of the measured constant, S_α: ε_IARUi = Δ_α/S_α. These calculated comparative uncertainties are also compared with the selected comparative uncertainty ε_CoP (Table 1).
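Steps 2) and 8) of the procedure above can be sketched as follows; the inputs are placeholder numbers, not data from any of the cited experiments:

```python
def iaru_comparative_uncertainty(alpha, r_alpha, published_values):
    """IARU steps 2) and 8): Delta_alpha = alpha * r_alpha, then
    eps_IARU = Delta_alpha / S_alpha, where S_alpha is the spread
    (max - min) of the published values of the constant."""
    delta_alpha = alpha * r_alpha                         # step 2
    s_alpha = max(published_values) - min(published_values)
    return delta_alpha / s_alpha                          # step 8

# Placeholder illustration: three published values spanning S = 0.006
values = [0.998, 1.000, 1.004]
print(iaru_comparative_uncertainty(1.000, 3.0e-4, values))  # ≈ 0.05
```

The resulting ε_IARU is then set against ε_CoP from Table 1, which requires no mutual consistency of the published results, only their spread.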
As follows from the presented description of the step-by-step procedure, the results do not depend on the complex, difficult to fulfill requirements inherent in statistical-expert methods (SEM), such as, for example, the CODATA method [6]. Moreover, the physical meaning of IARU is to assess the suitability of a method for measuring a specific physical constant. IARU can also be used to compare achieved measurement accuracy with various methods for different constants (Section 3.2).
In the second technique, S is determined by the limits of the used measuring instruments [20] in each particular experiment. This is confirmed by the fact that in experimental physics, unlike other areas of technology (for example, when studying the processes of heat and mass transfer in refrigeration equipment [22]), the researchers present measurement data with the obligatory indication of the standard uncertainty. At the same time, it is obvious that this uncertainty of a particular measurement is subjective because the observer is simply not able to consider all the uncertainties. The standard uncertainty is calculated considering the uncertainties observed by the experimenters.
Then, the ratio between the absolute uncertainty achieved in the experiment and the standard uncertainty, which acts as a possible interval for the placement of the physical constant, is calculated. Thus, in the framework of the information approach, the comparative uncertainties achieved in the studies are calculated, which, in turn, are compared with the theoretically achievable comparative uncertainty inherent in the chosen class of phenomena. This method is hereinafter referred to as IACU and includes the following steps:
1) From the published data of each experiment, the value α, the relative uncertainty r_α, and the standard uncertainty u_α (the possible interval of placement) of the physical constant are chosen;
2) The experimental absolute uncertainty Δ_α is calculated by multiplying the physical constant value α by its relative uncertainty r_α attained during the experiment, Δ_α = α·r_α;
3) The achieved experimental comparative uncertainty of each published study, ε_IACUi, is calculated by dividing the experimental absolute uncertainty Δ_α by the standard uncertainty u_α: ε_IACUi = Δ_α/u_α;
4) The experimental comparative uncertainty of each published study, ε_IACUi, is compared with the selected comparative uncertainty ε_CoP inherent in the model (Table 1) that describes the measurement of the physical constant.
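Steps 2) and 3) of IACU reduce to two operations; a sketch with placeholder inputs (not data from any cited study):

```python
def iacu_comparative_uncertainty(alpha, r_alpha, u_alpha):
    """IACU: eps_IACUi = Delta_alpha / u_alpha,
    with Delta_alpha = alpha * r_alpha."""
    delta_alpha = alpha * r_alpha   # step 2
    return delta_alpha / u_alpha    # step 3

# Placeholder illustration: value 1.0, relative uncertainty 2e-6,
# standard uncertainty 4e-5 -> eps_IACU = 0.05
print(iacu_comparative_uncertainty(1.0, 2.0e-6, 4.0e-5))  # ≈ 0.05
```

Unlike IARU, the denominator here is the standard uncertainty quoted by the experimenters themselves, so each study is assessed on its own terms before being compared with ε_CoP.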
It should be noted that this methodology also does not require consistent experimental results. From the point of view of its physical content, IACU reflects how thoroughly all possible sources of uncertainty for a certain class of phenomena were identified and considered in calculations using different methods of measuring a specific physical constant (Section 3.2).
In the next section, we will present the results of applying the information approach to analyze the measurement data of various physical constants using different methods. In the proposed analysis, only publications are considered that contain data on the value of a physical constant, its relative, and standard uncertainties.

Boltzmann Constant
As an example of the visual step-by-step application of the information approach, we consider the results of measuring the Boltzmann constant using the method of an acoustic gas thermometer (CoP SI ≡ LMTθF). One of the many datasets can be found in [25], which consists of measurements taken in seven laboratories (Table 2) from 2009 to 2017.
We will apply IARU and IACU. To calculate the estimated observation interval of k, S_k, according to IARU, its values obtained in two projects were selected: k_max = 1.3806508 × 10^−23 m²·kg·s^−2·K^−1 [29] and k_min = 1.3806484 × 10^−23 m²·kg·s^−2·K^−1 [32]. Then S_k = k_max − k_min, and the relative uncertainty follows, where "−1" corresponds to the case where all the exponents of the base quantities are zero in Equation (9); 5 corresponds to the five base quantities L, M, T, θ, and F; and Δ_LMTθF is the absolute uncertainty. The value of r_LMTθF = 2.3 × 10^−7 calculated by IARU is in sufficient agreement with 6.0 × 10^−7 [30] and is much closer to 3.7 × 10^−7 [13]. This, first, confirms the legitimacy and appropriateness of using Equation (6) and, second, confirms the μ-rule, according to which the experimentally achieved relative uncertainty is always greater than that calculated by the information approach (Section 2.4). Furthermore, the data introduced in Table 2 allow the formulation of the following conclusions: 1) Although the authors of the publications declared that they considered all possible sources of uncertainty, the values of the absolute and relative uncertainties can still differ by more than a factor of two. A similar situation exists in the spread of the values of the comparative uncertainties (IARU). This reflects the existence of hidden uncertainties that have eluded the attention of researchers.
2) The results from the use of IACU indicate relative agreement among the magnitudes of the experimental comparative uncertainties, but a significant discrepancy (more than 3 to 4 times) compared with the recommended value (0.1331). This situation is explained by the fact that, on the one hand, research teams learn from each other in the search for and elimination of undetected or unaccounted-for uncertainties, thereby ensuring the relative uniformity of the magnitude of the experimental comparative uncertainty. On the other hand, it should be considered that the idea of the acoustic gas thermometer method is based on the concept of an ideal gas, although the interaction between gas particles is not well understood. An additional difficulty is associated with measuring the molar concentration of gas per unit volume, and the volume itself, with a competitive degree of accuracy. It should also be noted that the total volume includes the volume of the connecting pipes to the pressure gauges. Therefore, there may be significant unaccounted uncertainties due to both the formulation of the experimental model and the achievable accuracy of the values considered in the calculation. Moreover, the proximity of the acoustic mode to shell resonance leads to an unacceptably large distortion of the data for this mode. In addition, experimenters consider a much smaller number of variables compared with the recommended ones (see Table 1). These reasons lead to a large difference between the theoretically calculated comparative uncertainty and the experimental values of the comparative uncertainties achieved in measuring k.
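The IARU numbers quoted above for the acoustic gas thermometer can be checked directly. This sketch assumes that the relative uncertainty is obtained as r_LMTθF = ε_LMTθF·S_k/k, with ε_LMTθF = 0.1331 (the Table 1 value quoted in the text) and the 2019 SI fixed value of k as the nominal value:

```python
k_max = 1.3806508e-23   # m^2*kg*s^-2*K^-1, from [29]
k_min = 1.3806484e-23   # from [32]
s_k = k_max - k_min     # estimated observation interval of k

eps_lmt_theta_f = 0.1331        # comparative uncertainty, CoP = LMT-theta-F
k_nominal = 1.380649e-23        # 2019 SI fixed value of k
r_lmt_theta_f = eps_lmt_theta_f * s_k / k_nominal
print(f"{r_lmt_theta_f:.2e}")   # → 2.31e-07
```

The result reproduces the r_LMTθF = 2.3 × 10^−7 stated in the text, which lies below all the experimental values in Table 2, as the μ-rule requires.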

Summarized Data
Because the step-by-step procedure for applying the information approach was described in detail in Section 3.1, generalized information on the data sets of measurements of the Planck constant, Boltzmann constant, Hubble constant, and gravitational constant is presented below (Table 3).
Looking more closely at these data, we can make the following comments.
1) As stated in [46], the implementation of the measurement of h using the Kibble balance or the XRCD methods allowed a consistent, reliable value to be achieved for the latest results. In addition, the calculated relative uncertainty does not exceed the uncertainty due to the current implementations of the primary and secondary units of mass. However, given the r exp /r SI ratio (LMTI: 2.9, LMTF: 9.1), there is an urgent need to reduce the influence of sources of uncertainty for XRCD.
It should be noted that, in the framework of the information approach, the statement that the Planck constant is now "an exact number with zero uncertainty ..." [47] is unacceptable, because the relative uncertainty of the measurement of the Planck constant always varies depending on the CoP inherent in the selected model.
2) The data from Table 3 clearly show that the minimum achievable relative uncertainties, r SI , calculated in accordance with the information approach, differ by two orders of magnitude for different methods of measuring the Boltzmann constant k! That is why, in the framework of the information approach, in contrast to the concept approved by CODATA, it is not recommended to determine and declare only one value of relative uncertainty when measuring the Boltzmann constant (and other constants) by various methods.
Using an information-oriented approach, both a respected scientist and a simple engineer can easily identify the advantages or disadvantages of a particular measurement method. Thus, analyzing the data of Table 3, it is obvious that the greatest success in achieving high accuracy of measurement of k in recent years was achieved using JNT and DBT, considering the smallest values of the ratio r exp /r SI (1.9 and 1.1). At the same time, the least experimental relative uncertainty achieved, 3.7 × 10 −7 , realized using DCGT, is doubtful. This follows from the μ-rule, which requires the theoretically calculated relative uncertainty to be less than the experimental one; here, however, the theoretical value (4.3 × 10 −7 ) exceeds the experimental one (3.7 × 10 −7 ). Therefore, the researchers of [13] [38] should reanalyze all possible sources of uncertainty.

3) From Table 3, it is obvious that in measuring H 0 using BDL and BAO (CoP SI ≡ LMT), the experimental relative uncertainties (0.01 [41] and 0.01 [43]) calculated according to IARU are many times greater than the recommended 0.00023 and 0.00018, respectively. This situation indicates that hidden variables are not considered and that CoP SI ≡ LMT cannot be used in the future. Therefore, the conviction of scientists that all possible sources of uncertainty have been accounted for is far from a guarantee of achieving the true value of H 0 by these two methods.

Following the logic of the information approach, it is again necessary to recognize that the method of measuring H 0 using the cosmic microwave background (CMB) is the most promising and theoretically justified, and yields the most reliable experimental data. This conclusion can be confirmed by calculating the ratio r exp /r SI from the data in Table 3. These data show that the experimental minimum relative uncertainties r exp exceed the recommended r SI by 43 and 56 times for BDL and BAO, whereas when measuring H 0 with CMB, r exp /r SI = 2.4. Because consistency is one of the basic requirements for analyzing results, the current situation needs to be explained. The information approach declares that an inevitable primordial absolute uncertainty of the model already exists at the stage of developing a method for measuring a physical constant. That is why great caution should be exercised when making predictions about increasing the accuracy of the Hubble constant. Most astronomers using various methods of calculating H 0 assume that, with an increase in the number of observed space objects, absolute (ideal) statistical stability of the observed parameters and characteristics of physical phenomena (real events, processes, and fields) is achieved. However, as was proved in [48], statistical stability is not ideal.

a) The values of the minimum attainable comparative and relative uncertainties calculated according to the information approach depend on the choice of the class of phenomena. Theory can predict their value. It is important to note that, during the transition from the mechanistic model (LMT) to a CoP SI with a larger number of base quantities, the uncertainty increases. This is explained by a change in the number of potential interaction effects between the increased number of quantities that may or may not be considered by the researcher.

b) One may notice large differences in the level of consistency between ε SI and ε exp calculated according to IACU. This level can be called a "coefficient of consistency" for a physical constant measured by various methods. In particular, when measuring H 0 , the ratio ε exp /ε SI is 710 (BDL) and 104 (BAO), while using CMB this ratio is only 4.1. A similar situation exists for measuring the gravitational constant: ε exp /ε SI = 100 for mechanical methods and ε exp /ε SI = 7.9 for electromechanical methods. At the same time, when measuring the Planck constant with KB and XRCD, and when using AGT and DCGT to calculate the Boltzmann constant, the values of the ε exp /ε SI ratios are very close to each other. Within the information approach, this situation indicates that BDL, BAO, and the mechanical methods for G have limited use; it can even be argued that they are not recommended. Moreover, using simple relationships calculated in accordance with a theoretically sound approach, we can draw very serious and far-reaching conclusions. It is important to emphasize once again that, using the IACU, researchers can determine for which method of measuring a physical constant the search for all possible sources of uncertainty must continue. Thus, the ratio ε exp /ε SI is an objective criterion for assessing the achieved accuracy when comparing different methods of measuring one specific physical constant.
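The "coefficient of consistency" ε exp /ε SI can be sketched as a simple comparison; the ratios below are those quoted in the text for H 0 and G, and the ranking logic (smaller ratio, better consistency) follows the argument above.

```python
# Sketch: rank measurement methods for one constant by the quoted
# "coefficient of consistency" eps_exp/eps_SI; smaller is better.

consistency = {
    "H0": {"BDL": 710.0, "BAO": 104.0, "CMB": 4.1},
    "G":  {"mechanical": 100.0, "electromechanical": 7.9},
}

for constant, methods in consistency.items():
    # the method whose experimental comparative uncertainty agrees best
    # with the theoretically recommended one
    best = min(methods, key=methods.get)
    print(f"{constant}: most consistent method is {best} "
          f"(eps_exp/eps_SI = {methods[best]})")
```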
c) The introduction of comparative uncertainty, through the IARU, to evaluate the accuracy of measurements of physical constants allows the calculation of the r exp /r SI ratio. From the data in Table 3, an obvious trend emerges: models of measurements of physical constants with a small number of base quantities (LMT and LMTF) have clearly overestimated values of this ratio: 9.1, 12.7, 44, and 56. This is due to insufficient consideration of the effect of unaccounted base quantities and of possible relationships between variables in calculating the value of the physical constant. At the same time, for models with a larger number of base quantities, for example, LMTI or LMTθF, the r exp /r SI ratio varies from 0.9 to 2.9. Thus, in the framework of the information approach, we can consider the r exp /r SI ratio as a universal indicator of the achievements of scientists in measuring any physical constant using a variety of methods.
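The use of r exp /r SI as a universal indicator can be sketched as follows. The grouping of the quoted ratios by CoP is illustrative, and the threshold of a few units is an assumption for the sketch, not a value fixed by the text.

```python
# Sketch: flag measurement models whose r_exp/r_SI ratio is clearly
# overestimated, following the trend described for small-CoP models.

ratios = {
    "LMT / LMTF (few base quantities)":        [9.1, 12.7, 44.0, 56.0],
    "LMTI / LMTthetaF (more base quantities)": [0.9, 1.1, 1.9, 2.9],
}

THRESHOLD = 3.0  # assumed boundary between acceptable and overestimated

for cop, values in ratios.items():
    verdict = "suitable" if max(values) <= THRESHOLD else "overestimated"
    print(f"{cop}: r_exp/r_SI in [{min(values)}, {max(values)}] -> {verdict}")
```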

Discussion
By combining information theory, with its rigorous and well-developed mathematical apparatus, with a carefully selected and verified database of experimental physics, it became possible to calculate the accuracy limit for measuring physical constants. This approach requires no statistical methods, weighting coefficients, or consistency criteria.
Being unsatisfied with the statistical evaluation of measurements of physical constants, the author looked for an approach in which mathematical and logical difficulties are resolved by simple definitions and calculations that are easy to understand. The author suspects that the information approach may also shed new light on old difficulties. It is generally taken for granted that if a method has already been tested and accepted by the scientific community, there is no reason to seek a better one; yet it is worth looking for descriptions in which the situation is simpler, and such a description need not be much more complicated than SEM. One of the key concepts of the information approach is applying the concept of complexity, via information theory, to the International System of Units, which is the product of the intellectual activity of scientists and does not exist in nature. We use complexity to measure the amount of information contained in the measurement model of a specific physical variable, and then use the SI, with its seven base quantities, to classify the classes of phenomena inherent in a particular measurement method. The proposed informational approach has the advantage that it takes into account both the physical nature of the experiment (the qualitative set of base quantities) and the information content due to the specific number of variables considered in the model. In addition, the proposed measure of the proximity of the model to a real object (comparative uncertainty) can be used for any data set without requiring consistent results.
Comparative uncertainty remains usable precisely where the traditional statistical methods used to process data sets of physical measurement results fail. Compared with the CODATA technique, the information approach has two obvious additional advantages. The first is predictability (the study of the extent to which events can be predicted [49]). Today, CODATA uses LSA as the preferred measure of predictability in a data set for measuring physical constants. However, the calculations performed using LSA (the standard uncertainty of the predictive model) depend on the data set in which the results are presented [50], whereas the informational approach can handle even conflicting results. Second, the information approach has the property of transparency [51], which is a key requirement for any information system, including the process of modeling the measurement act. Any method for calculating the accuracy of a model, together with the necessary calculations, should be accessible to engineers and scientists. The presented procedure for implementing the information approach using two methodologies is simple to grasp and easily applied by a sufficiently qualified user.
The author notes that it is probably, at least philosophically, more acceptable for the value of the relative uncertainty of the measurement of a physical constant to be clearly defined by a theoretically proven and simply implemented information method than by statistical and expert assessment. It would be premature to argue that this contradicts, in everyday practice, Ockham's principle (entities should not be introduced except when strictly necessary [52]), or, in theory, the difficulty of avoiding the loss of valuable information when describing the results of measurements of a physical constant using statistical methods. It is hard to imagine how these methods relate to the real world. Unfortunately, statistics resemble expert witnesses in court: they will testify in favor of either side. Supporters of SEM must solve one difficulty: the creation of a "correct" distribution of results. Currently, many SEMs have been proposed. Can the CODATA method be considered "true and impeccable" while refusing to consider an alternative? Of course, it would be wrong to deny that the CODATA method enabled the implementation of the new SI structure; it has at least as many parameters as are necessary to determine the values of several fundamental physical constants. At the same time, the search for truth leaves room, on the one hand, for criticism and, on the other, for revealing new approaches.
From the above it follows that comparative uncertainty is inherent in any data set used in analyzing measurements of physical constants, which is an additional argument for clarifying standard practice. This uncertainty is always present and cannot be eliminated by standard data analysis, so future, more accurate experiments may misinterpret earlier measurements of physical constants.
When considering mathematical modeling of the process of measuring a physical constant, the question is whether physics should obey mathematical SEM or adhere more closely to observations and data [53]. According to the author, the information approach, having deep physical content, in particular the IARU, allows the relative uncertainty to be calculated with high accuracy, in good agreement with the CODATA recommendations but in a much shorter time. The fundamental difference between the proposed method and the existing CODATA statistical-expert methodology (in fact, all statistical methods are unreliable, some more and some less [54]) is that the information approach is theoretically justified without any assumptions. It does not include such concepts as a statistically significant trend, aggregate consensus values, or statistical control, which are characteristic of the statistical-expert tool adopted by CODATA. We sought to show how the mathematical and apparently rather arbitrary expert formalism can be replaced by a simple, theoretically substantiated postulate about the use of information in measurements.
Thus, it turns out that the problem that researchers face in the process of calculating relative uncertainty, which allows us to confirm the true value of a physical constant, ultimately boils down to the problem of choosing a model of the class of a phenomenon for the measurement process. With this formulation of the question, limitations arise due to the human mind, namely the knowledge, experience, and intuition of the researcher. The elimination of such limitations, as we have seen, can be successfully implemented using the information approach, which can be considered the main tool for assessing the accuracy of measuring a physical constant.

Conclusions
In this study, we presented the possibility of applying the concept of information to the problem of assessing the accuracy of measuring a physical constant. One of the important conclusions is that the amount of information in the model is the key to understanding the physical meaning of the threshold mismatch between the result of the experiment and the mathematical representation of the measurement process. This conclusion is consistent with the idea that the fundamental task of evaluating calculation accuracy is to select a channel for transmitting information through a model that developers choose in accordance with their experience, knowledge and intuition. The choice of the structure of the model and its class of phenomena leads to a situation where there is an inevitable measurement uncertainty. Researchers can no longer ignore or eliminate it, since future studies on the measurement of physical constants may incorrectly interpret the results.
A reliable, information-oriented, theoretically substantiated approach is proposed for calculating the relative uncertainty when measuring a physical constant. This approach uses the comparative uncertainty inherent in any measurement model of the measurement process, the value of which is due to a qualitative set of base quantities and the total number of derived variables. The approach is not based on the assumption of a Gaussian distribution and is applicable to the analysis of results obtained both for a long and a short period of time.
Calculated in accordance with the IARU for CoP = LMTθF, the relative uncertainty (2.3 × 10 −7 ) of the Boltzmann constant measurement using the acoustic gas thermometer method is close to the smallest achieved experimental uncertainty, 3.7 × 10 −7 [13], recognized by CODATA. This confirms the μ-rule, according to which the experimentally achieved relative uncertainty is always greater than that calculated using the information approach (Section 2.4). It should be noted that the calculation of r SI is carried out over a very short period of time, incomparably shorter than that required by the CODATA method. The ratio of the minimum achieved experimental comparative uncertainty to the theoretically calculated one revealed the unsuitability of models with a small number of base quantities (LMT and LMTF) for measuring the Planck constant, the Hubble constant, and the gravitational constant. The ratio ε exp /ε SI is an objective criterion for assessing the achieved accuracy when comparing various methods of measuring one specific physical constant.
When using models with a large number of base quantities, for example, LMTI or LMTθF, the ratio of the minimum experimental relative uncertainty achieved to the theoretically calculated r exp /r SI varies from 0.9 to 2.9, which indicates the suitability of these methods for measuring physical constants. At the same time, r exp /r SI varies from 9 to 56 for models with a low number of base quantities, which is unacceptably high for practical use. Thus, in the framework of the information approach, r exp /r SI can be considered as a universal metric for assessing the practical level of accuracy when measuring any physical constants using various methods.
It should be noted that the application of the information approach allows us to make a very non-trivial conclusion: when measuring physical constants using various methods, it is not recommended to state only one value of relative uncertainty.
The author understands that the stated conclusions may be difficult for part of the scientific community to accept, since they do not fit the generally accepted point of view. However, the author hopes that readers will find the time and the desire to identify possible contradictions or fundamental shortcomings of the proposed method. At the same time, the presented results in no way abolish the basic principles of measurement theory, which always remain valid but must be applied at the subsequent stage of model implementation.
A rigorous analysis of the data presented, confirmed by numerical results, shows that the proposed method is not only reliable and robust, but also effective. The results of the study do not reject the possibility of applying an information-oriented approach to the calculation of the relative uncertainty in the measurement of physical constants, and the constantly obtained new evidence is exclusively in its favor.
In this time of uncertainty, it is very important to understand the origins of the human "fuzzy" perception of the world around us. According to the author, it is the information-theoretic approach that allows us to understand the physical reasons why we, whether we want it or not, see the object under study in the "fog" of errors and doubts.

Conflicts of Interest
The author declares no conflicts of interest regarding the publication of this paper.