In our time, experimental physicists have obtained data on a very large number of phenomena and objects of the physical world. Only rarely does a situation arise in which theoretical physicists lack the experimental data needed to understand some fundamental law of Nature. Such a situation arose almost a hundred years ago and sparked a discussion between A. Einstein and N. Bohr on the probabilistic nature of microcosm phenomena. From that time on, most physicists seemed inclined to believe that the proponents of a quantum explanation of the randomness of radioactive decay were right. Now this problem has been solved experimentally. The results of these measurements [1] show that A. Einstein and the other proponents of determinism were right. In most cases, theoretical models are based on already existing experimental data and are intended to explain them. At the same time, among the mathematically well-grounded microscopic models of the twentieth century, there were several that raise doubts about their correctness, since they cannot explain a number of other experimental data that can be attributed to fundamentally important properties of the studied objects [2] [3]. Therefore, the usual criterion for the correctness of a theory, namely its agreement with measurement data, is ambiguous in this case. An additional criterion for the correctness of a microscopic theory can be formulated if it is assumed that a microscopic theory must be a quantum one. The coefficients of quantum equations are world constants. Therefore, the solutions of these equations must be equalities made up of world constants only. For this reason, a correct microscopic model must rely on equalities consisting of world constants only. This criterion is shown to work successfully for models of superfluidity and superconductivity, for models of a number of particles, and for models of the star interior.

Einstein’s statements in the epigraph can be attributed to a number of theories created by physicists in the twentieth century.

The twentieth century is a thing of the past. Now it is time to critically rethink some theories created by physicists during this period.

The need for this reinterpretation arises from the fact that theoretical physicists of the past century often considered it their most exciting and important task to build theoretical models for those phenomena and objects for which not enough experimental data had yet been collected for an unambiguous interpretation. To create such theories, in addition to knowledge, they needed intuition and a rich imagination. Therefore, the reliability of such models needs experimental confirmation, as required by the main principle of natural science.

The postulate, which eventually became the main principle of the natural Sciences, was formulated more than 400 years ago by William Gilbert (1544-1603) [

Its wording is simple:

All theoretical constructions that claim to be scientific must be tested and confirmed experimentally.

Today, Gilbert’s postulate has become a basic principle of physics, and experimental physics has created a reliable foundation for the building of theoretical models.

Only occasionally does an ambiguous situation arise, when direct experiments do not point to its resolution.

This was the reason for the long-term debate of physicists at the beginning of the last century, who discussed the stochastic nature of microcosm phenomena.

The question of the probabilistic nature of radioactive decay arose immediately after the discovery of this phenomenon.

Anti-determinists led by N. Bohr considered this decay as a purely random quantum mechanical tunneling phenomenon.

But the proponents of determinism in physics did not agree with this explanation. Einstein rejected the probabilistic interpretation of natural phenomena. In a letter to M. Born (1926), he wrote: “At any rate, I am convinced that the Lord God does not play dice.”

Still, the anti-determinists prevailed. The immaculate logic of the mathematics of the quantum-mechanical apparatus won over public opinion to their side.

Currently, the physical community generally believes that radioactive decay is a truly random process.

However, this problem should not be decided by voting.

In accordance with Gilbert’s principle, a solution is possible only on the basis of experimental data.

Einstein and his associates (for example, N. Tesla) believed that the cause of radioactive decay could be the impact of external causes unknown at the time.

The neutrino flux fits the description of such external causes very well.

Therefore, it is natural that the assumption that the cause of beta-decay of radioactive nuclei may be their interaction with the neutrino flux has been repeatedly expressed by various researchers [

One can test this hypothesis by examining the reaction of a beta-source to changes in the neutrino flux incident on it. We can’t reduce the cosmic neutrino flux, but we can increase it by adding the neutrino flux from the nuclear reactor.

The experiment used the IBR-2 pulse reactor (Dubna, Russia) [

This reactor, after a short burst of activity, created a pulsed neutrino stream due to the beta-decay of fission fragments of nuclear fuel. Therefore, this neutrino flux decreased exponentially after each reactor flash.

The experiment [ used a ^{63}Ni beta-source. This source was located next to the reactor and was carefully shielded from reactor neutrons and gamma-quanta. The isotope ^{63}Ni is characterized by a very small energy of its beta-electrons.

The result of these measurements is shown in

The effect of the same stream of reactor neutrinos on the beta-source ^{90}Sr/^{90}Y, whose beta-electron energy is almost two orders of magnitude higher, was significantly weaker (

From the obtained measurement data, it can be concluded that, as suggested by A. Einstein, the phenomenon of beta-decay is not a random phenomenon: “quantum mechanics speaks volumes, but it doesn’t bring us any closer to solving the mystery of the Creator”.

The first clarification of the Gilbert principle was formulated shortly after Gilbert’s time. It boils down to the statement that a scientific theoretical model should not contain fundamentally unmeasurable parameters.

To a certain extent, this was a response to medieval ideas about angels. The very existence of angels was not questioned by anyone at that time, but they were attributed the property of complete undetectability, i.e., a kind of confinement, similar to that which was later introduced into the theoretical concept of non-observable quarks, which nevertheless carry a well-defined fractional charge.

Another important clarification is due to the fact that, in our time, new theoretical constructions are created on the basis of particular experimental facts and therefore automatically agree with them.

However, sometimes they do not provide an explanation for a number of other experimental data that can be attributed to fundamentally important properties of the studied objects [

Therefore, to test the correctness of some modern theoretical model, it is necessary to formulate an additional criterion that gives it an assessment from a fundamentally different point of view.

In applied physics, phenomenological theories play an important role, but this is not about them.

To understand the essence of physical objects and phenomena, it is necessary to develop fundamental theories that give them a microscopic theoretical description.

Because objects of the microcosm obey quantum laws, modern microscopic theories must be formulated in terms of quantum mechanics and its rules. Therefore, the basic formulas of modern microscopic theories must be expressed as ratios of world constants only.

There are no other solutions to the equations of quantum physics.

As an example, we can consider the model of the Bohr atom, in which all the main parameters are expressed only by world constants.
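As an illustration of such world-constant relations, the basic parameters of the Bohr atom can be computed from the constants alone. A minimal sketch in Python (CGS units; the constant values below are standard reference numbers, not taken from the text):

```python
# Bohr-atom parameters built from world constants only (CGS units).
hbar = 1.0546e-27   # Planck constant / 2*pi, erg*s
m_e  = 9.1094e-28   # electron mass, g
e    = 4.8032e-10   # elementary charge, esu
c    = 2.9979e10    # speed of light, cm/s

a_B   = hbar**2 / (m_e * e**2)      # Bohr radius, cm
alpha = e**2 / (hbar * c)           # fine structure constant
E_Ry  = m_e * e**4 / (2 * hbar**2)  # Rydberg (ionization) energy, erg

print(a_B)                 # ≈ 5.29e-9 cm
print(1 / alpha)           # ≈ 137.04
print(E_Ry / 1.6022e-12)   # ≈ 13.6 eV
```

Every printed number is fixed by ℏ, m_e, e and c alone, which is exactly the property the refined criterion demands of a microscopic theory.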

Of course, one cannot put an equality sign between a microscopic theory and a quantum-mechanical one. There may be exceptions. For example, a microscopic theory of Brownian motion need not be quantum. However, in the vast majority of cases, these two terms can be considered to coincide.

Therefore, the modern formulation of the requirements for a microscopic physical theory must take this into account:

A correct microscopic theory must rely on basic relationships that consist of world constants only and are supported by measurement data.

Based on this formulation, we can analyze the theoretical models of the twentieth century in order to determine the correctness of their understanding of the nature of the phenomena they study.

Modern microscopic theories of superfluidity and superconductivity are provided with well-developed mathematical justifications. Their authors were repeatedly awarded Nobel prizes. However, these theories do not satisfy the Gilbert principle in its modern formulation, since they cannot be called quantum.

They do not rely on equations made up only of world constants.

In order to formulate a quantum mechanical model of superfluidity, it is necessary to take into account the mechanism of ordering of the zero-point oscillations of helium atoms, first considered by F. London almost a hundred years ago [

F. London showed that between atoms in the ground state there is an interaction of the Van der Waals type, which has a quantum nature. Atoms in the ground state (at T = 0) perform zero-point oscillations. He considered the atoms as three-dimensional oscillating dipoles connected to each other by the electromagnetic interaction and called this interaction of atoms in the ground state a dispersional one.

If one takes into account that different modes of zero-point oscillations must order at different temperatures [ , one obtains the density of liquid helium

γ₄ = (α²/a_B³)·√(M_α³/(2m_e)) ≅ 0.1443 g/cm³, (1)

where a_B = ℏ²/(m_e e²) is the Bohr radius,

M_α is the mass of the He-4 nucleus,

m_e is the electron mass,

α = e²/(ℏc) is the fine structure constant.

This value is in good agreement with the measured density of liquid helium, equal to 0.145 g/cm³ at T ≃ T_λ.

Calculating the temperature at which helium goes into the superfluid state gives the equality [

T_λ = M_α c²α⁶/(3k) = 2.1772 K, (2)

which agrees very well with the measured value T_λ = 2.1768 K.
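Equalities (1) and (2) are easy to verify numerically. A minimal check in Python (CGS units; the constant values are standard reference numbers, not taken from the text):

```python
# Check Eqs. (1)-(2): liquid-helium density and the lambda point from world constants.
import math

hbar = 1.0546e-27; m_e = 9.1094e-28; e = 4.8032e-10; c = 2.9979e10
k = 1.3807e-16        # Boltzmann constant, erg/K
M_alpha = 6.6447e-24  # mass of the He-4 nucleus, g

a_B = hbar**2 / (m_e * e**2)
alpha = e**2 / (hbar * c)

gamma4 = alpha**2 / a_B**3 * math.sqrt(M_alpha**3 / (2 * m_e))  # Eq. (1)
T_lambda = M_alpha * c**2 * alpha**6 / (3 * k)                  # Eq. (2)

print(gamma4)    # ≈ 0.144 g/cm^3  (measured: 0.145)
print(T_lambda)  # ≈ 2.177 K       (measured: 2.1768)
```

Both numbers come out of ratios of world constants only, which is the point of the criterion.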

Consideration of zero-point oscillations in an electron gas reveals a mode of these vibrations in which attractive forces arise between the particles, decreasing the energy of the ensemble. Comparing this energy decrease of the electron gas with its Fermi energy, we obtain the ratio of the transition temperature into the ordered superconducting state to the Fermi temperature in the form of an equality that depends on world constants only:

T_c/T_F = (9π/2)·α³ ≃ 5.5 × 10⁻⁶. (3)

Graphically, the dependence of the critical temperature T_c calculated in this way on its measured value for type-I and type-II superconductors is shown in
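Equality (3) likewise reduces to a one-line numerical check (standard CGS constant values, not taken from the text):

```python
# Check Eq. (3): the ratio T_c/T_F from world constants only.
import math

hbar = 1.0546e-27; e = 4.8032e-10; c = 2.9979e10
alpha = e**2 / (hbar * c)   # fine structure constant

ratio = 9 * math.pi / 2 * alpha**3   # Eq. (3)
print(ratio)   # ≈ 5.5e-6
```

As a sanity check, for a metal with a Fermi temperature of order 10⁵ K this ratio corresponds to a critical temperature of order 1 K, the typical scale of conventional superconductors.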

Particle physics proceeds from the assumption that the neutron consists of three fractionally charged quarks of the lowest generation. This makes it easy to explain the conversion of a neutron into a proton: one of the neutron’s d-quarks simply turns into a u-quark.

Formulas consisting of world constants do not arise in this theory.

According to another assumption, a neutron is a structure similar to a hydrogen atom, but with a relativistic electron [

In this model, the process of converting a neutron into a proton does not require a complex explanation: it is a simple ionization.

Since electron and proton are bound by electromagnetic forces, the stable state of neutron can be found from the minimum energy condition.

This makes it possible to calculate the mass of the neutron, its magnetic moment, spin, and binding energy.

The results of these estimates are in quite satisfactory agreement with the data of measurements of neutron properties [

There is another important property of this model [

A hydrogen atom can be in a stable state with minimal energy or in one of the excited states. In the ground state of the Bohr atom, the electron orbit fits one de Broglie wavelength. Excited states are realized at 2, 3, or more de Broglie waves.

By applying this condition to determine the excited states of the neutron, we can calculate the parameters characterizing these states. For example, the calculated magnetic moments are shown in

All these calculations are based on simple equalities consisting of world constants.

n | μ_calc | experimental data | Ref.
---|---|---|---
n = 1 | −1.9367 | μ_{n⁰} = −1.9130427 ± 0.0000005 | [
n = 2 | −0.6247 | μ_{Λ⁰} = −0.613 ± 0.004 | [
n = 3 | 1.3779 | μ_{Σ⁰Λ} = 1.61 ± 0.08 | [

It is generally assumed that there are two quantum values with the dimension of length.

This is the Bohr radius

a_B = ℏ²/(m_e e²) ≈ 5 × 10⁻⁹ cm. (4)

It characterizes non-relativistic quantum systems.

And the Compton wavelength

λ_C = 2πα·a_B = 2πℏ/(m_e c) ≈ 2 × 10⁻¹⁰ cm. (5)

It arises in relativistic quantum theories.

In accordance with the refined Gilbert principle, a new fundamental length appears in this model

R* = α²·a_B = e²/(m_e c²) ≈ 3 × 10⁻¹³ cm. (6)

This radius determines the characteristic sizes of neutron and hyperons in order of magnitude and is included in the relations associated with them.
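The three lengths (4)-(6) and the relations between them can be checked numerically; a minimal sketch in Python with standard CGS constant values (reference numbers, not taken from the text):

```python
# The three characteristic quantum lengths, Eqs. (4)-(6), from world constants (CGS).
import math

hbar = 1.0546e-27; m_e = 9.1094e-28; e = 4.8032e-10; c = 2.9979e10

a_B   = hbar**2 / (m_e * e**2)          # Bohr radius, Eq. (4)
lam_C = 2 * math.pi * hbar / (m_e * c)  # Compton wavelength, Eq. (5)
R_st  = e**2 / (m_e * c**2)             # the new fundamental length R*, Eq. (6)

alpha = e**2 / (hbar * c)
# The chain a_B -> lam_C -> R* is just successive powers of alpha:
assert abs(lam_C - 2 * math.pi * alpha * a_B) / lam_C < 1e-12
assert abs(R_st - alpha**2 * a_B) / R_st < 1e-12

print(a_B, lam_C, R_st)  # ≈ 5.3e-9, 2.4e-10, 2.8e-13 cm
```

The in-code assertions confirm that the three scales differ from one another only by powers of the fine structure constant.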

So the magnetic moment of the neutron, μ_n ≈ eR*/2, coincides with the nuclear magneton in order of magnitude.

In this case, the theoretical and measured values of the deuteron binding energy differ by a numerical coefficient of the order of one.

The nature of nuclear forces in this case is described by a simple and well-known quantum mechanical effect, and there is no need to introduce gluons and the strong force (at least for light nuclei) [

The characteristic feature of neutrino that distinguishes it from all other particles is its extremely weak interaction with matter. At the same time, neutrinos can carry away at the speed of light part of the energy released during beta-decay.

According to Thomson’s theory, radiation scattering occurs because the electric field of the incident electromagnetic wave accelerates electrons in the substance of the scatterer. As there are no magnetic monopoles in nature, only a particle that carries no electric field in its wave can avoid such scattering, transferring all its energy by means of the magnetic component of its wave.

But is this magnetic wave possible?

It turns out that Maxwell’s equations have such a solution [

This solution is usually not considered, probably because it is not technically feasible. However, in nature it is realized in reactions with relativistic particles.

A magnetic oscillation in the ether must occur as a result of a reaction in which a particle carrying a magnetic moment, which did not exist before, is born at relativistic speed.

Since beta-decay gives birth to a relativistic electron that carries a magnetic moment, according to Maxwell’s equations a magnetic gamma-quantum must be born that takes away part of the reaction energy. In the twentieth century, this gamma-quantum was called the neutrino.

Since the excitation of a magnetic gamma-quantum is a purely electromagnetic process, it cannot be expected that neutrino physics should rely on equalities consisting only of world constants.

However, as a consequence of the existence of neutrinos, the occurrence of such equalities is possible.

So, in the chain of reactions π± → μ± → e±, a neutrino and two antineutrinos are born, which carry away some of the reaction energy. The fact that no other particles are born in these reactions allows us to estimate the masses of the charged mesons, whose values are determined by the world constants (
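These mass estimates can be reproduced directly; a minimal sketch in Python using standard CGS constant values (reference numbers, not taken from the text), with the measured masses as quoted in the meson table:

```python
# Meson-mass estimates from world constants: m_pi ≈ 2*m_e/alpha, m_mu ≈ (3/2)*m_e/alpha.
hbar, e, c = 1.0546e-27, 4.8032e-10, 2.9979e10  # CGS reference values
alpha = e**2 / (hbar * c)                       # fine structure constant

m_pi_calc = 2 / alpha      # pi+- mass in units of the electron mass
m_mu_calc = 1.5 / alpha    # mu+- mass in units of the electron mass
m_pi_meas, m_mu_meas = 273.13, 206.77           # measured values, in m_e

print(m_pi_calc, (m_pi_calc - m_pi_meas) / m_pi_meas)  # ≈ 274.1, ≈ +3.5e-3
print(m_mu_calc, (m_mu_calc - m_mu_meas) / m_mu_meas)  # ≈ 205.6, ≈ -5.9e-3
```

Both estimates agree with the measured masses to a fraction of a percent while containing nothing but the fine structure constant.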

Until the last decades of the last century, measurement data that could shed light on the physical properties of the interior of stars were very poor; in a sense, such data simply almost did not exist.

But in recent decades, the technique of astronomical measurements has advanced greatly, and the needed data have appeared. On their basis, it has become possible to judge the state of the interior of stars.

Then it became clear that the theory of the star interior was in a bad state. This is the result of the historical development of this theory.

One can assume that modern physics of stars appeared in the early twentieth century; an important milestone of this period was R. Emden’s work “Die Gaskugeln”. It laid the foundation for describing stars as gas balls characterized by various equations of state.

meson | measured meson mass m_meas | calculated meson mass m_calc | (m_calc − m_meas)/m_meas
---|---|---|---
π± | 273.13 m_e | 2m_e/α = 274.1 m_e | 3.5 × 10⁻³
μ± | 206.77 m_e | (3/2)·m_e/α = 205.6 m_e | −5.8 × 10⁻³

According to R. Emden, the equation of state of the gas that forms a star determines its characteristics: it can be a dwarf, a giant, a main-sequence star, etc.

In the 1930s, I. Langmuir discovered a new state of matter: plasma. Soon the greatest astrophysicist of that time, A. Eddington, realized that the interior of stars must consist of plasma. He built the standard model of a plasma star much like the model of a gas ball.

At the same time, the main difference between a gas with any equation of state and a plasma escaped the attention of the creators of the new astrophysics.

Plasma is an electrically polarizable medium. In it there must exist the effect of gravitationally induced electric polarization (GIEP), which is absent in any gas.

The GIEP effect plays an important role in establishing the equilibrium state inside stars and therefore it determines many properties of stars.

Taking into account the GIEP effect, the mass of stars is determined by the equality [

M_⋆ = (2¹³/7⁵)·√(5⁵/π³)·M_Ch·(A/Z)² = 27.4 M_Ch (A/Z)², (7)

where the constant M_Ch, called the Chandrasekhar mass, consists of world constants only:

M_Ch = (ℏc/(G m_p²))^{3/2}·m_p, (8)

A and Z are the mass and charge numbers of the nuclei that make up the plasma of the star’s interior, G is the gravitational constant, and m_p is the proton mass.

This equality is consistent with the mass distribution of stars obtained from measurements [
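For scale, Eq. (8) can be evaluated numerically; a minimal check in Python using standard CGS reference values for the constants (the solar-mass value is likewise a standard reference number, not from the text):

```python
# The Chandrasekhar mass, Eq. (8), built from world constants only (CGS units).
G    = 6.674e-8     # gravitational constant, cm^3 g^-1 s^-2
m_p  = 1.6726e-24   # proton mass, g
hbar = 1.0546e-27   # erg*s
c    = 2.9979e10    # cm/s
M_sun = 1.989e33    # solar mass, g (for scale only)

M_Ch = (hbar * c / (G * m_p**2))**1.5 * m_p
print(M_Ch)          # ≈ 3.7e33 g
print(M_Ch / M_sun)  # ≈ 1.85 solar masses
```

The result lands at the stellar-mass scale even though no astronomical quantity enters the formula, only ℏ, c, G and m_p.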

The existence of electric polarization in the plasma of cosmic bodies, which occurs under the influence of their own gravity, leads to the fact that, due to their rotation, these bodies have magnetic moments. The ratio of the magnetic moments μ thus induced to their angular momenta L turns out to be equal to [

μ/L = √G/(3c). (9)

This equality agrees well with the measurement data (
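A quick numerical check of Eq. (9); the Earth values below are rough literature numbers added here for illustration only, not data from the text:

```python
# Check Eq. (9): the induced magnetic-moment to angular-momentum ratio (CGS units).
import math

G = 6.674e-8     # gravitational constant, cm^3 g^-1 s^-2
c = 2.9979e10    # speed of light, cm/s

ratio_pred = math.sqrt(G) / (3 * c)
print(ratio_pred)   # ≈ 2.9e-15 (CGS units, g^-1/2 cm^1/2)

# Rough comparison with the Earth (approximate literature values, an assumption here):
mu_earth = 8.0e25   # magnetic moment, G*cm^3
L_earth  = 5.9e40   # rotational angular momentum, g*cm^2/s
print(mu_earth / L_earth)  # ≈ 1.4e-15, the same order of magnitude
```

The predicted ratio contains only √G and c, so any rotating, gravitationally polarized body should fall near the same line.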

In our time, the goal of theoretical research is always consistent with the Gilbert principle, since it is aimed at explaining some real physical phenomenon discovered by experimenters.

Such theories always agree with measurement data without problems, since they proceed from actual experimental data.

Therefore, it is important to formulate a criterion of reliability for a theory so that one can understand whether this theory has a scientific future.

The improved Gilbert principle stated above seems to serve this purpose. Indeed, the equations of quantum mechanics contain only world constants as coefficients. Therefore, only combinations of world constants can be solutions of these equations.

The fact that measurement data confirm such solutions with amazing accuracy strengthens their role in theoretical physics.

The author declares no conflicts of interest regarding the publication of this paper.

Vasiliev, B.V. (2020) Principles of Constructing a Correct Microscopic Theory. Journal of Modern Physics, 11, 907-919. https://doi.org/10.4236/jmp.2020.116055