Can Information Theory Help to Formulate an Optimal Model of a Physical Phenomenon?

Abstract

This article deals with the problem of calculating the comparative uncertainty of the main variable in a model of a studied physical phenomenon, which depends on a qualitative and quantitative set of variables. The choice of variables is determined by the preliminary information available to the observer and depends on his knowledge, experience and intuition. The finite amount of information available to the researcher leads to an inevitable aberration of the observed object. This gives rise to a comparative (and, correspondingly, relative) uncertainty of the model that cannot be removed or reduced by any statistical processing. The goal is to present a theoretical justification for the existence of this uncertainty and to propose a procedure for its calculation. The practical application of the informational method for choosing the preferred model is demonstrated for the Einstein formula and for calculating the speed of sound.

Share and Cite:

Menin, B. (2022) Can Information Theory Help to Formulate an Optimal Model of a Physical Phenomenon?. Journal of Applied Mathematics and Physics, 10, 2940-2950. doi: 10.4236/jamp.2022.1010197.

1. Introduction

Intuition, life experience and knowledge (the human paradigm ref. [1]) have creative power because they create the prisms through which we look at the world. When formulating a model of the phenomenon under study, the researcher creates a picture that is unique to him and that reflects his philosophical view of an order of things that exists regardless of his desire. How accurately the observed process is recreated can be clarified only by carrying out precise measurements.

Any experiment is preceded by the process of building a model of the phenomenon being studied by the observer. The great Newton used three base variables to formulate his laws: length (m), mass (kg) and time (s). Only in the 18th-19th centuries did scientists propose and begin to use other base variables: I, the unit of electric current, the ampere (A); θ, the unit of thermodynamic temperature, the kelvin (K); F, the amount of substance, the mole (mol); and J, the unit of luminous intensity, the candela (cd). On the basis of various sets of base variables, systems of units were built: SI, British, the Planck system, CGS. However, scientists arrived at the need for a system of units only in the 20th century. Although systems of units were created by human genius and do not exist in nature, they are an indisputable tool for understanding the world around us. When formulating a model and choosing certain variables, the researcher is “doomed” to use some system of units.

Depending on the subject area under study, the problem being solved, the mathematical background of the researcher and the requirements of the customer, mathematical models can take various forms and representations. The model plays a system-forming and sense-forming role in scientific cognition; it allows us to understand the phenomenon and the structure of the object under study. Without building a model, one is unlikely to be able to understand the logic of a system and conduct a high-precision experiment. The model, from the position of a particular researcher, makes it possible to decompose the system into elements, connections and mechanisms; it explains the operation of the system and determines the cause-and-effect relationships and the nature of the interaction of the variables taken into account. In the simplest case, modeling involves at least three steps: 1) formalization, 2) actual modeling, 3) interpretation. A huge number of studies are devoted to the second and third stages ref. [2]. This article considers only the process of formalization (identifying the essential features of the object under study and describing the relationships between them in the language of mathematics ref. [3]), which, in the opinion of the author, has not received sufficient attention. It is at this stage that the minimum total costs can be achieved, including the damage from the model's application (inaccuracy) and the significant resources spent (time, cost, reasonable complexity) on constructing it. It is equally important to note that the formalization stage determines the choice of the required measuring complex and the technology for implementing the experiment.

2. Initial Background

In the process of model formalization, there are two entities: the system of units like the International System of Units (SI), which “provides” the model with information about the phenomenon under study using dimensional and dimensionless variables, and the researcher, who takes into account all available information and selects certain variables from the SI using intuition, knowledge and experience to analyze the uncertainty of the model. In relation to the formalization process, this may mean that a certain likelihood function is created in the researcher’s brain: based on the data of his previous experience, he builds a certain model construction that most likely corresponds to his philosophical views—predictions are based on the past.

Given the above and taking into account the current interest in the applications of information theory in various types of human activity ref. [4] [5] [6], some researchers recognize that it is possible to consider the model as an information channel between the observed object and the researcher ref. [7].

In this case, one can represent the input data as X{x1, ..., xj}, the total set of variables (hereinafter referred to as variables with finite information, FIQs ref. [8]) inherent in the observed physical object. The number of variables in X is equal to the number of FIQs contained in the system of units used by the researcher; for example, for SI, μSI = 38,265 ref. [9]. Y{y1, ..., yp} is the output data, the number of FIQs included in the model at the will of the observer. Y is a “noisy” version of X, in which the actually observed phenomenon “shrinks” in size, i.e., the number of FIQs is significantly reduced, but without energy consumption. This is explained by the fact that at the stage of formalization the observer only considers the model and does not introduce any interference into the real process. Given that μSI is a constant, each FIQ carries a limited amount of information ref. [7], and the number of FIQs in a model is always limited, it can be concluded that the amount of information contained in the SI and in the model is limited.

No less important is that at the stage of formalization a conscious observer selects for the model only some of the base variables characteristic of each system of units. For SI, the base variables are: L—length, M—mass, T—time, Θ—thermodynamic temperature, I—electric current, J—luminous intensity, F—amount of substance ref. [10]. In May 2019, an old dream of metrologists came true: all international units were finally defined using physical constants. The selected set of base variables characterizes the class of phenomena (CoP) to which the model belongs. A CoP is a set of physical phenomena and processes described by a finite number of base quantities and derived variables that characterize certain properties of a material object from a qualitative and quantitative point of view ref. [11]. For example, when modeling heat- and mass-exchange processes, variables are usually used whose dimensions include the base SI variables length L, mass M, time T, and thermodynamic temperature θ; that is, the model belongs to the class of phenomena CoPSI ≡ LMTθ. Newton, not being familiar with the current version of the SI, chose CoPSI ≡ LMT for the law of gravity. The choice of a CoP thus sharply reduces the number of variables compared to μSI. In other words, owing to limited time, financial resources, and computational power, the researcher ends up selecting a very small number of variables in the model compared to μSI. In relation to the model, this distorts the perception of the observed process. Therefore, it can be concluded that part of the information is lost during modeling due to the subjective thinking of the observer.

Combining the complexity of the SI (the number of variables and the possible links between them) with the definitions and apparatus of information theory allows us to calculate the lowest absolute uncertainty of a model of a physical phenomenon caused by the choice of the qualitative-quantitative set of variables ref. [4]:

Δ = S[(z' - β')/μ + (z" - β")/(z' - β')] (1)

where

- Δ is the a priori absolute uncertainty of the model (a systematic effect ref. [12]) caused by the choice of the CoP and the number of recorded FIQs, and S is the interval of observation of the main researched FIQ chosen by the observer;

- z' is the number of FIQs in the selected CoP, β' is the number of base quantities in the selected CoP, z" is the number of FIQs recorded in a model, and β" is the number of independent quantities recorded in a model;

- μ is the number of dimensionless FIQs that can be constructed based on the seven base SI quantities, μ = 38,265 ref. [9].

The concept of “relative uncertainty” r is clear to all scientists and is widely used in science and technology. However, r can only serve to reflect a subjective judgment ref. [13]. In addition, r does not require stating the results of measurements together with a measure of confidence in them, in the form of an interval within which most of the distribution of values of the measured variable lies. Therefore, the comparative uncertainty of the model, ε = Δ/S, is proposed as a universal indicator for a quantitative assessment of the proximity of the model to the object under study. Until now, researchers have not considered the value of ε, although it is of vital importance in information theory ref. [14]. Equation (1), the “ε-equation”, applies to models that use both dimensional and dimensionless FIQs ref. [15] [16]. It should be noted that ε cannot be statistically tested using tools such as consistency, asymptotic normality, weighted estimates, or coefficients.
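As a minimal illustration, the ε-equation can be encoded directly. The sketch below is mine, not the author's code; the function and variable names are my own, and the numerical values (μSI = 38,265, and the CoP ≡ LMT counts z' - β' = 91, z" - β" = 1 used for Einstein's formula in Section 3.1) are quoted from the text.

```python
MU_SI = 38265  # dimensionless FIQs constructible from the seven SI base quantities [9]

def comparative_uncertainty(g_cop, g_mod, mu=MU_SI):
    """eps = (z' - b')/mu + (z'' - b'')/(z' - b'),
    where g_cop = z' - b' and g_mod = z'' - b''."""
    return g_cop / mu + g_mod / g_cop

def absolute_uncertainty(s_interval, g_cop, g_mod, mu=MU_SI):
    """Equation (1): Delta = S * eps, for the observation interval S of the main FIQ."""
    return s_interval * comparative_uncertainty(g_cop, g_mod, mu)

# CoP = LMT with one recorded dimensionless complex (Einstein's formula, Section 3.1):
print(round(comparative_uncertainty(91, 1), 4))  # -> 0.0134
```

Note that ε depends only on the counts of dimensionless complexes, not on the measured data themselves, which is why it cannot be tested statistically.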

Following the proposed FIQ-based approach, it is recommended to analyze the results of scientific research by comparing the comparative uncertainty achieved in the model, εmod, with the theoretically justified εopt (Table 1). The ratio εmod/εopt is an objective criterion for establishing the acceptability of a particular model or measurement method and for assessing the accuracy achieved when comparing different models of one specific physical phenomenon or technological process. Similarity of these two uncertainties (1 > εmod/εopt ≈ 1) indicates the applicability of the proposed model for describing the process being studied. Conversely, a significant difference between these uncertainties indicates that the proposed model is unreliable. Importantly, there is no guarantee that this limit (εopt) will ever be reached (εmod is always less than εopt), regardless of the advances made by scientists and engineers. The following analysis of research results will highlight the obstacles that must be bypassed or overcome before the various objectives can be achieved.

This approach indicates the possibility of using the concepts of information transfer, accumulation, and transformation directly in information theory, as well as in theoretical studies in other areas of knowledge, including applied problems.

Table 1. Comparative uncertainties and optimal number of dimensionless criteria.

Using the data of ref. [4], the optimal values of εopt for different CoPs and the recommended number of FIQs corresponding to each CoP can be identified, as shown in Table 1.

Analyzing the data from Table 1, it should be noted, firstly, that with an increase in the number of basic variables in CoP, the optimal number of variables used in the model and necessary to achieve the optimal value of comparative uncertainty sharply increases. Secondly, the use of a model with a CoP containing a small number of basic variables, for example, LMT or LMTF, makes it impossible to achieve the optimal value of comparative uncertainty. This indicates the inexpediency of using a model with a low number of basic variables to describe the physical object under study.

3. Applying FIQ-Based Approach

3.1. Light Speed

Suppose that, within the framework of the presented informational method, a researcher sets out to “improve or modify” Einstein's formula

E = mc² (2)

where E is energy, m is the mass of the object, and c is the speed of light, c = 299,792,458 m·s–1 ref. [17]. The formula corresponds to CoPSI ≡ LMT.

It must be said that, at present, measurements using this formula are carried out with a relative error of 4 × 10−7 ref. [18]. Despite these results, this does not mean that Einstein's ideas will always hold true. Future physicists will no doubt test the formula even more precisely, since more precise tests mean that our theory of the world is becoming ever more refined.

Let us imagine that, from the standpoint of a researcher proceeding from his own philosophical views, the temperature of the environment surrounding the object should be taken into account. It should be noted that the expediency of taking temperature into account in relativistic theory was noted in ref. [19].

Taking into account the “Landauer limit” ref. [20] and the results of ref. [21], in which it was proved that the amount of information in any physical system must be finite if the space occupied by the object and its energy are finite, it can be shown refs. [22] [23] that

ϒE = E + ϒI = (2πREkbθ)/(ħc) (3)

where ϒE is the energy of the object contained in a sphere of radius R, expressed in units of ordinary energy (m²·kg·s–2); it includes the total mass-energy E of the observed sphere and the energy ϒI due to the information contained in the object; ħ is the reduced Planck constant, kb is the Boltzmann constant, and θ is the ambient temperature. The formula corresponds to CoPSI ≡ LMTθ.

Let us analyze (2) and (3) from the point of view of comparing the comparative uncertainty εmod achieved in the model with the theoretically substantiated εopt. Considering the dimensions of the FIQs in (2), the problem belongs to CoPSI ≡ LMT, and we can assume that z" - β" = 1 (according to the π-theorem ref. [24]).

For (2) the achieved comparative uncertainty equals

ε1 = 91/38265 + 1/91 = 0.0134 (4)

where z' - β' = 91 is the number of dimensionless complexes for CoPSI ≡ LMT and εLMT = 0.0048 (Table 1); ε1/εLMT = 2.79.

Taking into account the dimensions of the FIQs in (3), the problem refers to CoPSI ≡ LMTθ, and we can assume that z" - β" = 3 (according to the π-theorem ref. [24]). For (3) the achieved comparative uncertainty equals

ε2 = 846/38265 + 3/846 = 0.0256 (5)

where z' - β' = 846 is the number of dimensionless complexes for CoPSI ≡ LMTθ and εLMTθ = 0.0442 (Table 1); ε2/εLMTθ = 0.58.
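The two calculations above can be reproduced numerically. The sketch below is mine, not the author's code; the Table 1 optima εLMT = 0.0048 and εLMTθ = 0.0442 are quoted from the text.

```python
MU_SI = 38265  # dimensionless FIQs constructible from the seven SI base quantities [9]

def eps(g_cop, g_mod, mu=MU_SI):
    # eps = (z' - b')/mu + (z'' - b'')/(z' - b')
    return g_cop / mu + g_mod / g_cop

e1 = eps(91, 1)    # Eq. (4): CoP = LMT, z' - b' = 91, z'' - b'' = 1
e2 = eps(846, 3)   # Eq. (5): CoP = LMT-theta, z' - b' = 846, z'' - b'' = 3

print(round(e1 / 0.0048, 2))  # ratio to eps_LMT, about 2.8 (2.79 in the text, from the rounded e1)
print(round(e2 / 0.0442, 2))  # ratio to eps_LMT-theta, about 0.58
```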

Analyzing ε1/εLMT = 2.79 and ε2/εLMTθ = 0.58, we can draw the following conclusions. Although the value 0.58 is much less than 1, from the point of view of the informational method the inclusion of temperature in Einstein's equation, as in (3), is perhaps an advance in understanding the nature of the surrounding world. Undoubtedly, Einstein's formula is considered the greatest achievement of the 20th century; it embodies the simplicity and depth of scientific thought, is consistent with a large body of known experimental results, and makes it possible to predict new scientific data. However, given that ε1/εLMT = 2.79 > 1, it can be assumed that the theory of relativity may rest on the subjective philosophical view of the researcher at the most fundamental level, which raises deep epistemological questions about the nature of reality. At the same time, the informational approach is designed to reveal the smallest deviations from the generally accepted principles of modeling physical phenomena, which can give the first signs of new physics.

3.2. Sound Speed

A large number of studies have been devoted to the calculation of the speed of sound in various media. In particular, in ref. [25] the first measurements of the speed of sound for hydrogen chloride in the liquid and dense vapor phases are presented. Based on these measurements and other types of data from the literature, a fundamental equation of state for hydrogen chloride was developed. This equation is formulated in terms of the Helmholtz energy and can be used to calculate all thermodynamic properties through combinations of the function itself and its derivatives with respect to its natural variables.

It was noted that, because the speed of sound measurements for hydrogen chloride in the liquid and dense vapor phases are the first data for this property in the literature, they are important for modeling caloric data. The accuracy of the equation was analyzed by extensive comparisons with experimental data. Furthermore, the physical and extrapolation behavior of the equation of state was carefully monitored, which is an important aspect for application to mixture models.

The speed of sound was considered as a function of temperature (θ), pressure (p), path length difference (l) and time (T). Thus, the sound velocity measurement model refers to CoPSI ≡ LMTθ. We can assume that z" - β" = 1 (according to the π-theorem ref. [24]). For this case, the theoretically achievable comparative uncertainty equals:

ε3 = 846/38265 + 1/846 = 0.0233 (6)

where z' - β' = 846 is the number of dimensionless complexes for CoPSI ≡ LMTθ and εLMTθ = 0.0442 (Table 1). In this way,

ε3/εLMTθ = 0.53 (7)
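The numbers in (6) and (7) can be checked directly; this short sketch is mine, with the value εLMTθ = 0.0442 quoted from Table 1.

```python
MU_SI = 38265  # dimensionless FIQs constructible from the seven SI base quantities [9]

# Eq. (6): CoP = LMT-theta (z' - b' = 846), one recorded dimensionless complex
e3 = 846 / MU_SI + 1 / 846
print(round(e3, 4))           # -> 0.0233
print(round(e3 / 0.0442, 2))  # -> 0.53, Eq. (7)
```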

In contrast to the previous example, suppose that a group of researchers sets out to reveal the deep physical nature of the speed of sound. Scientists proved ref. [26] that the upper limit of the speed of sound in condensed phases depends on a combination of two important dimensionless fundamental constants, the fine-structure constant α and the ratio of the electron mass to the proton mass:

vu/c = α(me/(2mp))^(1/2) (8)

where c is the speed of light in vacuum, me is the electron mass, mp is the proton mass.

The proposed formula (8) was obtained as a result of processing an extensive set of experimental data. In addition, theoretical calculations were compared with a database of experimental data for 36 elemental solids, including semiconductors and metals with high binding energies. The authors compared theoretical calculations and experimental results and concluded that the difference between them is within acceptable limits. The results of this study ref. [26] certainly expand the current understanding of how fundamental constants can impose new boundaries on important physical properties.
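For a rough numeric check of (8): the CODATA constant values below are my insertion (the paper does not list them); the resulting bound is about 36 km/s, consistent with the result of ref. [26].

```python
from math import sqrt

# Constants (CODATA values, inserted here for illustration; not listed in the paper)
c = 299_792_458                 # speed of light in vacuum, m/s
alpha = 7.2973525693e-3         # fine-structure constant
me_over_mp = 1 / 1836.15267343  # electron-to-proton mass ratio

# Eq. (8): upper bound on the speed of sound in condensed phases
v_u = c * alpha * sqrt(me_over_mp / 2)
print(round(v_u))  # roughly 36,100 m/s
```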

The research was carried out in the framework of CoPSI ≡ LMTIF. This means that the dimensions of the variables used are combinations, in various powers, of the base dimensions L, M, T, I, and F ref. [11]. A total of z" = 22 variables were used to calculate vu. For the case involving the selection of four independent variables (β" = 4), in accordance with the π-theorem ref. [24], the number of dimensionless criteria in the model equals γmod = z" - β" = 18.

From (8), and given the number of FIQs inherent in CoPSI, γCoP = z' - β' = 1412 for the established CoPSI ≡ LMTIF (Table 1), the achieved comparative uncertainty of the model can be calculated as:

ε4 = 1412/38265 + 18/1412 = 0.0596 (9)

Upon comparing ε4 (9) with εLMTIF = 0.0738 (Table 1), we obtain ε4/εLMTIF ≈ 0.8; that is, εmod is close to εLMTIF. This is owing to the difference between the number of dimensionless criteria considered in the model, γmod = 18, and the recommended γopt = 52 (Table 1). Unfortunately, in that study ref. [26] the authors did not indicate the ranges of variation and the measurement uncertainty of each considered variable. Additionally, the total absolute uncertainty of the key parameter (the speed of sound in condensed phases) was not calculated. This information would have made it possible to compare the theoretical comparative uncertainty εopt, calculated using (1), with the experimental comparative uncertainty ε4.

Although unaware of the proposed informational method, the study's authors presented a very plausible model and a result comparable to those obtained in ref. [27], in which researchers reached εmod/εopt ≈ 0.9 with 130 variables selected in the model.

In another study ref. [28], the performance of three established reference thermodynamic models based on the Helmholtz energy for binary (N2 + H2) mixtures is evaluated by comparing experimental sound speed data and acoustic virial coefficients with the results predicted by these models. In addition, that study provides new and more accurate experimental data, which form a basis for improving the models.

The achieved relative uncertainty is 220 × 10−6 (0.022%). The number of variables used in the model for calculating the speed of sound is 10. In accordance with the dimensions of the variables, the class of phenomena refers to CoPSI ≡ LMTθ. Using reasoning similar to that of the two previous examples, we obtain

ε5/εLMTθ = 0.69 (10)

Taking into account (7), (9) and (10), it follows from the above reasoning and calculations that the idea that simplicity (a small number of variables in the constructed model reflecting the observed object from the researcher's point of view) is the path to truth turns out to be far from reality. It would be erroneous to generalize the simplicity of the laws of nature discovered so far: simplicity was one of the reasons for their discovery and therefore cannot serve as a basis for supposing that other, undiscovered laws are just as simple.

Thus, the achieved maximum accuracy of the representation of the observed physical phenomenon actually depends on the qualitative and quantitative set of variables in the model.

4. Discussion

Since 1927, Heisenberg's uncertainty principle has put an end to dreams of a fully knowable world and to the hope that only human ingenuity limits the accuracy of measuring things. Since 2017 ref. [9], it has become apparent that progress toward more accurate measurements is limited not only by quantum uncertainty but also by human consciousness. This is explained by the fact that, when building the model that precedes any experiment, a conscious observer is compelled to use some system of units, including a finite qualitative-quantitative set of variables. This, in turn, determines the finite amount of information contained in the model. Thus, before conducting any experiment designed to confirm or refute the proposed model of nature, the researcher is doomed to obtain a distorted (blurred) picture of the world. This limit is greater than the uncertainty dictated by the Heisenberg inequality.

In the theoretical modeling of physical phenomena, the decisive step is to determine those variables that make it possible to obtain an approximate (filtered through human thinking) but informative idea of the phenomenon. For many complex systems, this choice can be made on the basis of the intuition and experience of scientists and engineers. We present, using thermodynamic information theory, a methodology for determining the appropriate accuracy of a model of a physical phenomenon based on the calculation of its information content. We use the informational approach to identify the class of any physical or technological phenomenon under study and the optimal number of variables needed to achieve its most plausible representation, one that satisfies the researcher's philosophical vision. Thus, by means of the informational method, a bridge is built between the consciousness of the observer, the object of study and such a still not fully understood entity as information.

The importance and diversity of the processes and phenomena under study require that model testing play a much greater role than is customary in the physical and technical sciences. The informational method offers a flexible approach to testing. We emphasize the need to use comparative uncertainty, which does not require the consistency of numerical data.

5. Conclusions

The results obtained can help reveal the features of the subjective experience of the researcher and provide information for theoretical debates about the influence of the observer’s consciousness on the achievable accuracy of the model.

Despite the difficulties and limitations discussed in this article, a theory based on the analogy between the studied object-observer system and a communication channel can be considered productive and useful in the study of complex physical objects.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Covey, S.R. (2009) The 7 Habits of Highly Effective People: Restoring the Character Ethic. Simon & Schuster, New York.
[2] Ejiko, S.O. and Filani, A.O. (2021) Mathematical Modeling: A Useful Tool for Engineering Research and Practice. International Journal of Mathematics Trends and Technology, 67, 50-64.
https://doi.org/10.14445/22315373/IJMTT-V67I9P506
[3] Vernat, Y., Nadeau, J.H. and Sebastian, P. (2009) Formalization and Qualification of Models Adapted to Preliminary Design. International Journal on Interactive Design and Manufacturing, 4, 11-24.
https://doi.org/10.1007/s12008-009-0081-9
[4] Menin, B. (2022) Simplicity of Physical Laws: Informational-Theoretical Limits. IEEE Access, 10, 56711-56719.
https://doi.org/10.1109/ACCESS.2022.3177274
[5] Menin, B. (2019) Precise Measurements of the Gravitational Constant: Revaluation by the Information Approach. Journal of Applied Mathematics and Physics, 7, 1272-1288.
https://doi.org/10.4236/jamp.2019.76087
[6] Menin, B.M. (2019) The Problem of Identifying Possible Signals of Extra-Terrestrial Civilizations in the Framework of the Information-Based Method. Journal of Applied Mathematics and Physics, 7, 2157-2168.
https://doi.org/10.4236/jamp.2019.710148
[7] Menin, B. (2021) Construction of a Model as an Information Channel between the Physical Phenomenon and Observer. Journal of the Association for Information Science and Technology, 72, 1198-1210.
https://doi.org/10.1002/asi.24473
[8] Del Santo, F. and Gisin, N. (2019) Physics without Determinism: Alternative Interpretations of Classical Physics. Physical Review A, 100, Article ID: 062107.
https://doi.org/10.1103/PhysRevA.100.062107
[9] Menin, B. (2017) Information Measure Approach for Calculating Model Uncertainty of Physical Phenomena. American Journal of Computational and Applied Mathematics, 7, 11-24.
http://article.sapub.org/10.5923.j.ajcam.20170701.02.html
[10] Newell, D.B. and Tiesinga, E. (2019) The International System of Units (SI).
https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.330-2019.pdf
[11] Sedov, L.I. (1993) Similarity and Dimensional Methods in Mechanics. CRC Press, Boca Raton.
[12] Pavese, F. (2010) Comparing Statistical Methods for the Correction of the Systematic Effects and for the Related Uncertainty Assessment. Journal of Physics: Conference Series, 238, Article ID: 012041.
https://doi.org/10.1088/1742-6596/238/1/012041
[13] Henrion, M. and Fischhoff, B. (1986) Assessing Uncertainty in Physical Constants. American Journal of Physics, 54, 791-798.
https://doi.org/10.1119/1.14447
[14] Brillouin, L. (1953) Science and Information Theory. Academic Press, New York.
[15] Menin, B. (2019) Hubble Constant Tension in Terms of Information Approach. Physical Science International Journal, 23, 1-15.
https://doi.org/10.9734/psij/2019/v23i430165
[16] Menin, B. (2018) h, k, NA: Evaluating the Relative Uncertainty of Measurement. American Journal of Computational and Applied Mathematics, 8, 93-102.
http://article.sapub.org/10.5923.j.ajcam.20180805.02.html
[17] Mohr, P.J., Newell, D.B., Taylor, B.N. and Tiesinga, E. (2018) Data and Analysis for the CODATA 2017 Special Fundamental Constants Adjustment. Metrologia, 55, 125-146.
https://doi.org/10.1088/1681-7575/aa99bc
[18] Simon, R., et al. (2005) World Year of Physics: A Direct Test of E=mc2. Nature, 438, 1096-1097.
https://www.nature.com/articles/4381096a#citeas
https://doi.org/10.1038/4381096a
[19] Blinov, N., et al. (2022) Realistic Model of Dark Atoms to Resolve the Hubble Tension. Physical Review D, 105, Article ID: 095005.
https://doi.org/10.1103/PhysRevD.105.095005
[20] Landauer, R. (1961) Irreversibility and Heat Generation in the Computing Process. IBM Journal of Research and Development, 5, 183-191.
https://doi.org/10.1147/rd.53.0183
[21] Bekenstein, J.D. (1981) A Universal Upper Bound on the Ratio of Entropy to Energy for a Bounded System. Physical Review D, 23, 287-298.
https://doi.org/10.1103/PhysRevD.23.287
[22] Menin, B. (2019) On the Possible Ratio of Dark Energy, Ordinary Energy and Energy due to Information. American Journal of Computational and Applied Mathematics, 9, 21-25.
[23] Menin, B. (2019) Is There a Relationship between Energy, Amount of Information and Temperature? Physical Science International Journal, 23, 1-9.
https://doi.org/10.9734/psij/2019/v23i230148
https://journalpsij.com/index.php/PSIJ/article/view/569
[24] Yarin, L.P. (2012) The Pi-Theorem: Applications to Fluid Mechanics and Heat and Mass Transfer. Springer, Berlin.
https://doi.org/10.1007/978-3-642-19565-5
[25] Thol, M., Dubberke, F.H., Baumhögger, E., Span, R. and Vrabec, J. (2018) Speed of Sound Measurements and a Fundamental Equation of State for Hydrogen Chloride. Journal of Chemical & Engineering Data, 63, 2533-2547.
https://doi.org/10.1021/acs.jced.7b01031
[26] Trachenko, K., et al. (2020) Speed of Sound from Fundamental Physical Constants. Science Advances, 6, eabc8662.
https://doi.org/10.1126/sciadv.abc8662
[27] Bose, D., Palmer, G.E. and Wright, M.J. (2006) Uncertainty Analysis of Laminar Aeroheating Predictions for Mars Entries. Journal of Thermophysics and Heat Transfer, 20, 652-662.
https://doi.org/10.2514/1.20993
[28] Segovia, J.J., Lozano-Martin, D., Tuma, D., Moreau, A., Carmen Martín, M. and Vega-Maza, D. (2022) Speed of Sound Data and Acoustic Virial Coefficients of Two Binary (N2 + H2) Mixtures at Temperatures between (260 and 350) K and at Pressures between (0.5 and 20) MPa. The Journal of Chemical Thermodynamics, 171, Article ID: 106791.
https://doi.org/10.1016/j.jct.2022.106791

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.