Gilbert’s Postulate and Some Problematic Physical Theories of the Twentieth Century

More than 400 years ago, William Gilbert formulated a postulate that can be considered the main principle of the natural sciences. According to this postulate, the only criterion of a theory's correctness is its confirmation by measurement data. In our time, all theories are confirmed by at least some experimental data. But sometimes a theory cannot explain the parameters that must be considered the main ones for the objects under study. Usually such "inexplicable" objects and dependencies are called empirical, and it is assumed that they require no theoretical explanation at all. In most cases this signals the fallacy of the theory being used. Therefore, Gilbert's postulate nowadays needs to be reformulated: a correct theory must describe ALL basic properties of the objects of research. A number of theories developed in the twentieth century do not satisfy this formulation. In almost all cases the reason for this is a misinterpretation of the nature of the objects of study. In particular, in order to satisfy Gilbert's refined postulate, it turns out to be necessary to revise the theoretical descriptions of: 1) the nature of superfluidity and superconductivity; 2) the nature of neutrinos; 3) the nature of the neutron; 4) the nature of nuclear forces; 5) the model of quarks with fractional charge; 6) the internal structure of stars; 7) the nature of the Earth's magnetic field; 8) the mechanism of the thermomagnetic effect in metals.

The need for such rethinking arises from the fact that in the last century theoretical physicists often considered it most fascinating and important to build theoretical models for phenomena and objects for which experimental data had not yet been collected.
Creating such theories required, in addition to knowledge, fantasy, intuition, and imagination. Therefore, the validity of such models, even if they are commonly accepted, may be questionable.
In the field of elementary particles, theoreticians often used symmetry considerations, in place of the missing experimental data, to systematize particles: for example, the tables of particles based on Gell-Mann's quarks, or the Weinberg-Salam standard model of elementary particles. These symmetrized tables look really nice, but the weakness of this approach is that the dropping out of even one basic particle, such as the neutron (see below), violates the very principle of such systematization.
The reason that forces us to reconsider a number of other theories of the twentieth century (for example, the physics of stars) is connected with the progress of measurement techniques and the acquisition of new experimental data. Sometimes new measurement data do not fit into old theories. Apologists of these outdated theories often fight hard for their survival. Several such theories still dominate their fields and require partial or complete revision in our time.
In general, the past twentieth century brought remarkable scientific discoveries in the field of physics.
In the early twentieth century, nuclear physics was born and then developed rapidly. It was probably the century's greatest discovery: it radically changed the whole material and moral image of world civilization.
At the same time superconductivity was discovered, and a little later superfluidity. These super-phenomena promise mankind a giant leap in technology and economy.
At the beginning of the twentieth century radio was born, which gradually led to television, and then radio engineering spawned computers. Their importance is difficult to overestimate.
Quantum science arose, which led to the appearance of quantum devices, among which lasers shine.
The list of branches of physical knowledge that the twentieth century gave us could be continued at length.
However, not all theoretical explanations for these discoveries seem perfectly correct.
William Gilbert (1544-1603) developed the criterion of correctness of a theory more than 400 years ago. He formulated a postulate that can be considered the main principle of the natural sciences [1]: all theoretical constructions that claim to be scientific must be verified and confirmed experimentally. Before Gilbert, false ideas did not fear experimental verification. At that time the world of thought was held to be incomparably more subtle than the ordinary and gross material world. A precise coincidence of a philosophical theory with direct experience almost degraded its dignity in the eyes of the initiated. The discrepancy between a pre-Gilbert theory and observations did not bother anyone. There were statements absolutely fantastic from our point of view. Thus, W. Gilbert writes that he experimentally refuted the popular belief that the force of a magnet can be increased by rubbing it with garlic.
However, the formulation of this postulate proposed by Gilbert seems somewhat simplified nowadays. It is applicable to relatively simple theoretical models.
Nowadays it seems impossible to find researchers who would disagree with Gilbert in principle. Indeed, all well-developed theories of the twentieth century are consistent with some measurement data. But these theories may contradict other data, to whose existence they simply pay no attention.
Therefore, as applied to the complex theoretical constructions that made up the essence of twentieth-century physics, Gilbert's postulate needs to be clarified: a physical theory that claims to be an adequate description of the object of research has to explain ALL the experimental data obtained by studying it.
Without such clarification, a paradoxical situation arises in theoretical physics: there are theories of various phenomena that describe some of their properties but cannot explain the main features of these phenomena.

Some Theories Created in the Twentieth Century That Cannot Explain the Main Features of the Studied Phenomena
Here is a short list of such theories:

Superfluidity
Superfluidity was discovered in the late 1930s (see Figure 1). The main features of the phenomenon of superfluidity in liquid helium became clear soon after its discovery [2]. The density of superfluid helium is γ₄ ≈ 0.145 g/cm³. It will be shown below that these values are well described by formulas containing only world constants.

Superconductivity
Superconductivity was discovered in the early twentieth century (see Figure 2), but for a long time it was considered the most enigmatic phenomenon in condensed substances.
Its theory appeared a few decades later. The now generally accepted theory of superconductivity successfully explains, for example, the temperature dependences of the heat capacity and of the energy gap in superconductors. But it cannot calculate the main property of superconductors: the critical temperature of the transition into the superconducting state. Therefore, this theory should be replaced by one that is able to explain all the main properties of specific superconductors.

Neutrino
The existence of the neutrino was predicted by W. Pauli in the early 1930s (see Figure 3). The effect of reactor neutrinos on substance was detected about two decades later (see Figure 4).
In neutrino physics, the triad of e-neutrino, μ-neutrino, and τ-neutrino and the details of their mutual transformations are considered. But the main property of the neutrino, its unusually high penetrating power, remains unexplained. This unusual property distinguishes the neutrino from all other particles.
In addition, a special fundamental weak interaction of Nature is introduced to explain neutrino-related reactions. The necessity of this introduction is justified by the special properties of the mysterious neutrino.

The Quark Theory
The quark model introduces new subparticles of which all other elementary particles must consist. Of particular importance here is the explanation of the mechanism of transformation of the neutron into the proton. An attractive invention is the scheme proposed by Gell-Mann, in which this transformation is carried out by replacing just one quark with fractional charge by another. However, no particle with fractional charge has been discovered experimentally. This forced theorists to postulate a specific confinement of quarks. It is also important that the quark model gives no possibility to calculate the basic parameters of the neutron by comparing them with the properties of the proton.

Nature of Nuclear Forces
The problem of nuclear forces, as related to the quark model, required for its explanation the introduction of a new type of fundamental interaction, the strong interaction, and a new type of non-observable particles, gluons. It is assumed that they must bind the nucleons in nuclei. This approach makes it possible to obtain a fully developed picture of nuclear forces, but it does not allow one to solve the main problem: to calculate the binding energy of nuclei.

Astrophysics
Astrophysics in its modern state was formed by the middle of the twentieth century and is a completely unique branch of physics, because it does not rely on measurement data and thus ignores Gilbert's postulate.
However, the technological progress of astronomical measurements has by now revealed about a dozen interdependencies of the main parameters of stars. These are the radius-temperature-mass-luminosity dependencies of close binary stars, the magnetic fields of stars, etc.
Naturally, it turned out that the existing theory of stars, built without reliance on any measurement data, cannot explain these dependencies and should be revised.

The Magnetic Field of Earth
Attempts to explain the mechanism of the Earth's magnetic field have been undertaken for several centuries. Apparently, the first model of the Earth's magnetic field was created by W. Gilbert more than 400 years ago [1].
Einstein included this problem as one of the three main tasks of the science of his time.
Currently a hydrodynamical model is accepted. Despite some difficulties, its parameters can be chosen so that the magnitude of the magnetic field near the poles of the Earth is approximately equal to 1 Oe, which is consistent with measurements.
However, in the second half of the twentieth century space flights began, and the technique of astronomical measurements underwent further development. As a result, the magnetic fields of most objects of the Solar system and of a number of stars, including pulsars, were measured. It turned out that the gyromagnetic ratios of all these cosmic objects are approximately equal to the ratio of world constants √G/c (Figure 14).

Journal of Modern Physics
Thus the problem of terrestrial magnetism has become a special case of a problem common to all celestial bodies. This required rejecting the hydrodynamical model and creating a new general theory of the magnetism of cosmic bodies [3].

What Theories Should Be Like to Explain All the Main Features of the Studied Phenomena
The modern theory of superfluidity explains the general characteristics of this phenomenon: the energy spectrum of excitations, the thermodynamics of superfluid helium, its heat capacity, etc.

λ-Transition
However, the energetically favorable transition of helium to the superfluid state should occur due to the appearance of some additional forces of attraction in the ensemble of its atoms, lowering the ensemble energy.
Therefore, the most important task of the theory is to explain the mechanism of attraction that causes the transition to the superfluid state, and the reason why this transition in helium-4 occurs at a temperature of about 2 K.
According to Gilbert's refined principle, the theory should provide a quantitative explanation of all the characteristic parameters that are observed in this phenomenon.
Therefore, the refined theory of superfluidity should first explain the physics of the λ-transition and why its temperature (Tλ ≈ 2.17 K) is almost exactly half the boiling point of helium (about 4.2 K).
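This ratio can be checked numerically; a minimal sketch, using the standard handbook values for helium-4 (which are assumptions here, not taken from the text above):

```python
# Ratio of the lambda-transition temperature of He-4 to its boiling point.
# Handbook values (assumed, not from this paper):
T_lambda = 2.172  # K, lambda-transition temperature
T_boil = 4.21     # K, normal boiling point of He-4

ratio = T_lambda / T_boil
print(f"T_lambda / T_boil = {ratio:.3f}")  # close to 1/2
```

The computed ratio is within a few percent of 1/2, which is the regularity the refined theory is asked to explain.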

London's Dispersion Forces
A feature of helium-4 is that its atom has neither a total charge nor dipole moments.
Nevertheless, a certain electromagnetic mechanism should be responsible for phase transformations in its condensed state. This is evidenced by the scale of energy change in this transition, which corresponds to other electromagnetic transitions in condensed matter.
In the 1930s, F. London showed [4] (see Figure 8) that atoms of condensed helium can be regarded as three-dimensional vibrating dipoles connected with each other by electromagnetic interaction. He called this interrelation of atoms in the ground state the dispersion interaction.

The Interaction of Zero-Point Oscillations of Helium Atoms
F. London showed that the electromagnetic interaction of zero-point oscillations of helium atoms leads to their attraction. Since there is no repulsion between particles of a boson gas, the occurrence of attraction should lead to liquefaction of the boson gas. However, F. London did not pay attention to the fact that there are two modes of vibration of the shells of symmetric atoms: the vibrations of neighboring atoms can be longitudinal or transverse with respect to the line connecting them. The interaction energy in these two modes turns out to be different [5]. The ordering of longitudinal oscillations leads to the liquefaction of helium. The ordering of transverse oscillations occurs at half that temperature. It is remarkable that this temperature is described by a formula consisting of world constants only (Equation (2.2)). Below this temperature the system of zero-point oscillations of atoms is completely ordered, i.e. the atoms form a single quantum ensemble of the superfluid state.
The results of experimental measurements confirm the correctness of this theoretical evaluation with high accuracy (Equation (2.1)).
The consideration of superfluidity as the ordering of zero-point oscillations makes it possible to calculate all basic parameters of this phenomenon (see Table 1 [5]).

Superconductivity as a Result of the Ordering of Zero-Point Oscillations of Electron Gas
The main difficulty of the modern theory of superconductivity (BCS) is that it cannot explain why this phenomenon occurs in different metals at different temperatures. Superconductivity can be considered as superfluidity of the electron gas; these phenomena are similar. Considering superconductivity as a result of the ordering of zero-point oscillations in the electron gas, it is possible to obtain a formula for the critical temperature that contains the fine structure constant α. This is consistent with the measurement data (Figure 9, [5]; in that figure, circles relate to type-I superconductors and squares to type-II superconductors, with the measured critical temperatures on the abscissa and the calculated estimates on the ordinate).
As for the external magnetic field of the critical value, which destroys the coherence of zero-point oscillations of electronic pairs, the theoretical evaluation of this field is also in good agreement with the measurement data [5].
The consideration of superconductivity as a consequence of the ordering of zero-point oscillations of the electron gas makes it possible to explain all the main properties of individual superconductors.

Magnetic Dipole Radiation in Maxwell's Theory
It is usually accepted to consider the neutrino as a specific particle moving at the speed of light and having neither charge nor mass (the latter with some reservations).
This suggests that neutrinos and photons have much in common, although their penetrating abilities in matter differ by many orders of magnitude.
This fact forces us to consider the problem of electromagnetic wave radiation in more detail.
Let, for simplicity, the problem be formulated so that there are no electric charges, electric dipoles, or quadrupoles, and the electromagnetic radiation in the aether can arise only due to the time-varying magnetic moment m = m(t*), with t* = t − R/c,
where t* is the retarded time.
By definition, in the absence of free charges (i.e., at φ = 0), the electric field strength in this electromagnetic disturbance has the value [7]
E = [m̈ × n]/(c²R),
and the intensity of the magnetic field is [7]
H = [[m̈ × n] × n]/(c²R) + (3n(n·ṁ) − ṁ)/(cR²),
where n = R/R is the unit vector from the emitter to the observation point. Thus, the amplitude of the oscillations of the electric field generated by changes of the magnetic moment depends only on the second time derivative of the function describing these changes. At the same time, the first time derivative additionally contributes to the amplitude of the magnetic field oscillations.
In this case, two options are possible, since two types of magnetic emitters are possible.

Photons
This option is studied in all courses of electrodynamics. It is realized when the magnetic dipole performs a motion described by a differentiable (for example, harmonic) function of time. The same solution applies to problems where the oscillations of the magnetic moment are described by more complex formulas, if the spectrum of these oscillations can be decomposed into harmonic components.
For harmonic oscillations at a considerable distance from the oscillating dipole, the second term in formula (3.5), which depends on ṁ, is only about λ/R of the first term (here λ is the wavelength of the generated wave).
Therefore, the term with ṁ can be neglected. The result is that in this case the fields E and H are equal in magnitude and only turned relative to each other by 90 degrees.
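The smallness of the ṁ term in the far zone can be checked directly. For a harmonic moment m(t) = m₀ sin(ωt) one has |ṁ| = m₀ω and |m̈| = m₀ω², so the ratio of the two terms is c/(ωR) = λ/(2πR). A minimal numerical sketch (the specific values are illustrative assumptions):

```python
import math

# Harmonic magnetic moment m(t) = m0*sin(w*t): |m_dot| = m0*w, |m_ddot| = m0*w**2
c = 3.0e10        # speed of light, cm/s
wavelength = 1.0  # cm (illustrative)
w = 2 * math.pi * c / wavelength
R = 100.0         # distance from the emitter, cm (far zone: R >> wavelength)
m0 = 1.0          # moment amplitude (cancels in the ratio)

near_term = (m0 * w) / (c * R**2)     # term proportional to m_dot
wave_term = (m0 * w**2) / (c**2 * R)  # term proportional to m_ddot

ratio = near_term / wave_term         # equals c/(w*R) = wavelength/(2*pi*R)
print(ratio)
```

At R = 100 wavelengths the ṁ contribution is well under one percent of the wave term, which justifies neglecting it.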

Magnetic Excitation of Aether
More precisely, this excitation of vacuum should be classified as a kind of particle, because it is characterized by a very short time interval.
An example of the radiation of such a particle is β-decay, in which a free electron carrying a large magnetic moment arises relativistically fast.
Another example is the transformation of the π-meson into the μ-meson. The π-meson has no magnetic moment, but the μ-meson does. Thus the time dependence of the magnetic moment in this reaction has the form of a very sharp Heaviside step function, which equals zero for negative arguments and one for positive ones (at zero this function requires an additional definition; it is usually convenient to set it equal to 1/2): θ(t) = 0 for t < 0, θ(0) = 1/2, θ(t) = 1 for t > 0. The unusual property that a pure magnetic photon must possess arises from the absence of magnetic monopoles in nature. Normal photons, which have an electric component, are scattered and absorbed in matter by interacting with electrons. In the absence of magnetic monopoles, a magnetic photon of small energy must interact extremely weakly with substance, and its free path in a medium must be about two dozen orders of magnitude greater than that of a normal photon [7]. Thus, Maxwell's equations say that the radiation of the free electron at β-decay should generate in vacuum a pure magnetic excitation, similar to a photon but weakly interacting with substance.
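The claim that an abrupt, Heaviside-like change of the magnetic moment produces a δ-shaped burst can be illustrated numerically: if the step is smoothed over a time τ and differentiated, the resulting pulse shrinks in proportion to τ. A sketch under stated assumptions (the tanh smoothing is my choice of regularization, not taken from the paper):

```python
import numpy as np

def burst_width(tau, n=200001, t_max=1.0):
    """Duration (FWHM) of the dm/dt pulse for a step m(t) smoothed over time tau."""
    t = np.linspace(-t_max, t_max, n)
    m = 0.5 * (1.0 + np.tanh(t / tau))  # smoothed Heaviside step, m(0) = 1/2
    m_dot = np.gradient(m, t)           # the radiated burst is driven by dm/dt
    half = m_dot >= 0.5 * m_dot.max()   # points above half of the peak value
    return t[half][-1] - t[half][0]

for tau in (0.1, 0.01):
    print(tau, burst_width(tau))  # burst duration shrinks in proportion to tau
```

As τ → 0 the pulse tends to a δ-function: an arbitrarily sharp jump of the moment radiates an arbitrarily short magnetic burst.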

Neutrino and Antineutrino
According to the electromagnetic model of the neutron, the generalized angular momentum of the relativistic electron, which forms a neutron together with a proton, is zero [10]. Therefore, the electron's own magnetic moment is not observed.
In neutron β-decay the electron acquires freedom, and with it spin and magnetic moment. Given that the emitted electron has a speed close to the speed of light, this process should occur abruptly.
As a result, a δ-shaped burst of magnetic field is generated, which is commonly called the antineutrino.
Since in the initial bound state (as part of the neutron) the electron's generalized angular momentum was equal to zero [10], and in the final free state its spin is ℏ/2, then, taking into account the law of conservation of angular momentum, the magnetic γ-quantum must carry away an angular momentum equal to −ℏ/2.
The other realization of the magnetic γ-quantum must occur in the reverse process, K-capture. In this process the electron, which originally formed part of the shell of the atom and had its own magnetic moment and spin, is at some moment captured by a proton of the nucleus and forms a neutron with it. This process can be described by the inverse of the Heaviside function. In it, a magnetic γ-quantum with the opposite direction of the field with respect to its propagation vector R should arise (Figure 10).

Results Shortly
The concept of neutrinos as magnetic excitations of the aether [11] explains all their basic properties. In addition, this concept opens a new page in the study of mesons, quantitatively predicting their masses.
How the tau-neutrino relates to this concept remains unclear.
Since neutrino radiation is a purely electromagnetic process, there is no need to introduce a fundamental weak (or electro-weak) interaction of Nature; it should be attributed to the category of speculations.

Neutron Properties
It is commonly thought that Bohr's atom is the only possible construction that can be built from a proton and an electron. This is true if the electron is non-relativistic. In this case the equilibrium state between proton and electron is established by the mutual attraction of their charges, at a distance of the order of the Bohr radius. However, the situation changes radically at distances of the order of 10⁻¹³ cm.
If the electron orbit has a radius of this order, the magnetic field at the orbit becomes enormous, and the magnetic interaction between the particles becomes essential. Detailed calculations [10] show that the equilibrium radius of such an orbit is approximately equal to 10⁻¹³ cm, and the mass of the electron, taking the relativistic effect into account, is approximately equal to 370 mₑ.
However, almost all of this weighting of the electron is compensated by the mass defect arising from the binding energy of the electron to the proton, so that the total mass of the neutron only slightly exceeds the mass of the proton. The result is a correct prediction of the neutron decay energy.
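The target value of that prediction follows from the measured particle masses. A quick check, using standard handbook mass values (assumed here, not given in this text):

```python
# Measured rest masses in MeV/c^2 (standard handbook values, assumed here)
m_neutron = 939.565
m_proton = 938.272
m_electron = 0.511

# Energy released in beta decay n -> p + e + antineutrino
Q = m_neutron - m_proton - m_electron
print(f"neutron decay energy Q = {Q:.3f} MeV")
```

The result, about 0.78 MeV, is the small excess that any composite model of the neutron must reproduce after the mass defect is taken into account.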
It is remarkable that the neutron magnetic moment can be calculated in this way, and the calculated value coincides with the measured one to within 10⁻⁴.
Thus, all the measured properties of neutron (except its lifetime) in this theory find a quantitative explanation. The calculation of the neutron lifetime should be carried out taking into account additional factors.
The most important consequence of this consideration is that the neutron is not an elementary particle; it is a kind of structure, like a hydrogen atom but with a relativistic electron. This completely discredits Gell-Mann's quark model.

Quantum-Mechanical Nature of Nuclear Forces
The rapid development of nuclear technology in the twentieth century made understanding the nature of nuclear forces a most important task of theoretical physics.
By the 1930s, experimenters had found that nuclei consist of protons and neutrons, and that neutrons decay with the emission of electrons. Apparently, I.E. Tamm [12] was the first to draw attention to the possibility of explaining nuclear forces on the basis of the electron exchange effect. However, later the predominant model in nuclear physics became the exchange of π-mesons, and then the exchange of gluons.
The reason for this is clear: to explain the magnitude and the radius of action of nuclear forces, one needs a particle with a small characteristic wavelength. A non-relativistic electron is not suitable for this.
Because of this, the assumption of the existence of a special strong interaction of Nature arose. However, the models of π-meson or gluon exchange were not productive either: they could not give a sufficiently accurate quantitative explanation of the binding energy of even light nuclei.
It turns out that this explanation can be obtained by solving the corresponding quantum mechanical problem. At the same time, to explain the nature of nuclear forces, the hypothesis of the existence of a strong interaction can be abandoned.
In 1927, a quantum mechanical description of the simplest molecule, the molecular ion of hydrogen, was published. Its authors, W. Heitler and F. London [13], calculated the attraction that occurs between two protons at electron exchange. This exchange is a quantum mechanical effect and does not exist in classical physics. (Some details of this calculation are given in [10] [14].)
The main conclusion of this calculation is that the binding energy between two protons, which occurs due to the electron exchange, is in order of magnitude close to the binding energy of proton and electron (the electron energy in the first Bohr orbit). This conclusion agrees satisfactorily with the measured data, which differ from the estimate by less than a factor of two. The calculation method developed by Heitler and London can be applied to the binding energy of two protons that exchange a relativistic electron, which is part of a neutron. The energy obtained from this calculation is in quite satisfactory agreement with the experimentally measured binding energy of the deuteron [10].
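The flavor of such an exchange calculation can be reproduced with the simplest LCAO treatment of the hydrogen molecular ion (one electron shared by two protons), using the standard closed-form overlap, Coulomb, and exchange integrals in atomic units. This is a textbook sketch, not the calculation of [10] [13]:

```python
import math

def binding_au(R):
    """LCAO ground-state energy of H2+ relative to H + p, in hartree.

    S: overlap integral, J: Coulomb integral, K: exchange integral
    for two 1s orbitals separated by R (atomic units).
    """
    S = math.exp(-R) * (1 + R + R**2 / 3)
    J = -1 / R + math.exp(-2 * R) * (1 + 1 / R)
    K = -math.exp(-R) * (1 + R)
    # bonding combination; 1/R is the proton-proton repulsion
    return 1 / R + (J + K) / (1 + S)

# scan for the equilibrium separation
R_values = [r / 100 for r in range(100, 601)]
R_min = min(R_values, key=binding_au)
D_e = -binding_au(R_min) * 27.211  # hartree -> eV
print(f"R_e ~ {R_min:.2f} a0, binding ~ {D_e:.2f} eV")
```

The sketch gives a binding of roughly 1.8 eV at about 2.5 Bohr radii: the same order of magnitude as the 13.6 eV Bohr-orbit energy, illustrating the scale argument made above.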
Extending the results of this calculation to light nuclei allows one to obtain values of their binding energies consistent with the measurement data. Thus, the results show that in order to explain the nature of nuclear forces there is no need to invent a fundamental strong interaction of Nature. At least in the case of light nuclei, nuclear forces are explained in a quantum-mechanical way.

Astrophysics
The physics of stars stands apart from the other physical sciences. Until the last decades of the twentieth century, almost nothing was known with certainty about the internal structure of stars. In recent decades, however, astronomers have measured a number of dependencies between stellar parameters; to date there are about a dozen of them. These are the interdependencies of temperature, radius, luminosity, and mass of close binary stars, the spectra of seismic oscillations of the Sun, the distribution of stars by mass, the magnetic fields of stars, etc. All these dependencies are determined by phenomena occurring inside stars. Therefore, the construction of a theory of the internal structure of stars should rely on these quantitative data as boundary conditions. However, modern astrophysics prefers a more speculative approach: qualitative theories of stars are developed in detail but are not brought to quantitative estimates that could be compared with astronomical data.
Of course, the existence of the dependencies of stellar parameters measured by astronomers is known to the astrophysical community. However, in modern astrophysics it is accepted, instead of finding an explanation, to refer them to the category of empirical facts and to believe that they need no explanation at all.
To bring theory into agreement with the available astronomical measurement data, it is necessary to abandon some astrophysical constructions that are generally accepted today. First of all, we need to change the approach to describing the equilibrium of matter inside stars. The interior of a star is plasma, an electrically polarized medium. Therefore, the equilibrium equation of the intrastellar substance should take into account the role of gravitationally induced electric polarization (GIEP). Taking the GIEP of intrastellar plasma into account allows us to construct a model of a star in which all the main parameters, i.e. the mass of a star, its temperature, radius, and luminosity, are expressed by certain combinations of world constants, and the individuality of stars is determined by only two parameters: the mass and charge numbers of the atomic nuclei of which the plasma of these stars is composed. In this way it is possible to explain quantitatively and with satisfactory accuracy all the dependencies measured by astronomers (Figure 11, Figure 12) [15].
Taking into account the gravitationally-induced polarization of the Sun's core, it is possible to calculate the spectrum of its seismic oscillations [15]. This spectrum is in good agreement with the measurement data obtained in recent decades ( Figure 13).
Taking into account the gravitationally-induced polarization, it is possible to construct the theory of magnetic fields of stars, consistent with the observational data ( Figure 14).
In general, taking the GIEP effect into account allows one to obtain an explanation of all the data of astronomical measurements.
An important characteristic feature of the model of a star built with the GIEP taken into account is the absence of collapse at the final stage of stellar evolution, and hence the absence in nature of "black holes" resulting from such collapse.

Thermomagnetic Effect in Metals
The theoretical explanation of the thermomagnetic effect (TME) in metals stands out among the theories discussed above, since no such theory existed in the twentieth century. Previously, it was believed that this effect simply does not exist.
By the middle of the XX century a number of thermomagnetic effects in semiconductors had been discovered, studied and theoretically explained.

Nature of Magnetic Field of Earth
In the twentieth century, as before, it was believed that the most important experimental fact which a model of the Earth's magnetic field must satisfy is the magnitude of the field near the poles. Apparently, the first such model was created by W. Gilbert (see Figure 15). He assumed that inside the Earth there is a region filled with a magnetized ferromagnet (to use the modern term). More recent studies have shown that the temperature in the central region of the Earth is high, above the Curie temperature of ferromagnets. Therefore, the Earth's core cannot be magnetized.
Later, many different models of the Earth's magnetic field were proposed; in particular, several models based on the thermoelectric effect. In the 1940s the hydrodynamic model was developed [18], and it won the recognition of experts.
It should be noted that the operation of such a mechanism requires the presence of a certain initial field, which can then be amplified. In the presence of only the cosmic field (~10⁻⁷ Oe), the workability of this model is highly questionable.
In the following decades, doubts about the workability of the hydrodynamic model arose among many scientists, and for this reason new models of this phenomenon have kept appearing until recently. One of them is the hypothesis of Blackett [19] (see Figure 16).
He suggested that a magnetic field is generated not only by a moving electric charge but also by any moving neutral mass. Later it began to be assumed that this may be a consequence of the electric charges of the electron and the proton not being exactly equal to each other. It was estimated that their difference could be very small, only about 10⁻¹⁸ e. However, even such a negligible difference would be enough for all cosmic bodies, owing to their rotation about their own axes, to have a magnetic field of about the magnitude obtained from measurements.
Naturally, in this approach there must be a connection between the magnetic moment of a cosmic body μ and its angular momentum L. Blackett showed that the ratio of these values (the gyromagnetic ratio) depends only on world constants: μ/L ≈ √G/c, where G is the gravitational constant and c is the velocity of light.
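This ratio can be checked against the Earth using standard handbook values (which are assumptions of this sketch, not given in the text): a magnetic dipole moment of about 8×10²⁵ G·cm³ and a moment of inertia of about 8×10⁴⁴ g·cm²:

```python
import math

# CGS handbook values (assumed for this sketch, not from the paper)
G = 6.674e-8       # gravitational constant, cm^3 g^-1 s^-2
c = 2.998e10       # speed of light, cm/s
mu_earth = 8.0e25  # Earth's magnetic dipole moment, G*cm^3
I_earth = 8.0e44   # Earth's moment of inertia, g*cm^2
omega = 7.29e-5    # Earth's rotation rate, rad/s

L_earth = I_earth * omega      # angular momentum, g*cm^2/s
measured = mu_earth / L_earth  # gyromagnetic ratio of the Earth
blackett = math.sqrt(G) / c    # the ratio of world constants

print(f"mu/L = {measured:.2e}, sqrt(G)/c = {blackett:.2e}")
```

With these inputs the two quantities agree to order of magnitude (within a factor of about ten), which is the level of agreement seen in Figure 14 across cosmic bodies.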
However, Blackett's hypothesis was rejected, despite its beauty and attractiveness; Blackett himself abandoned it. High-precision experiments conducted by Blackett, as well as by other experimenters, showed that electrically neutral massive bodies do not create magnetic fields of the required intensity under laboratory conditions.

The Measurement Data of Magnetic Fields of Cosmic Bodies
In the first half of the twentieth century, many geophysicists (see Figure 17) were involved in the problem of terrestrial magnetism. They saw as their primary task the explanation of the origin of the Earth's field. This phenomenon is quite simple to explain, given the phenomenon of GIEP in the plasma of all large cosmic bodies [20].
However, there is a peculiarity in the formation of terrestrial magnetism. The pressure and temperature inside the Earth are not as high as in stars. While stars consist of electron-nuclear plasma, in the central region of the Earth only electron-ion plasma can exist. This requires careful consideration for a successful theoretical description of the Earth's magnetism [20].

Conclusions
The development of physics in the twentieth century led to the appearance of many new branches. At first, many of these discoveries gave the impression of a certain mystery. Thus, for several decades after its discovery, many scientists called superconductivity the most mysterious phenomenon in the physics of condensed matter. The penetrating power of neutrinos is still often called mysterious. To explain mysterious phenomena, new concepts were often introduced in the twentieth century: for example, the strong and weak fundamental interactions, gluons, quarks with fractional charge, etc. This method of constructing theories is valid only under one condition: the construction must be carried out in full accordance with Gilbert's postulate. It is obvious that without full confirmation by measurement data, theories constructed in this way turn out to be speculations.
In some cases, when a theory of this type was presented with the help of a complex mathematical apparatus, it seemed that the conclusions following from it had found mathematical confirmation, and that this was enough to recognize its correctness.
However, such mathematical confirmation, like confirmation by means of systematization and the construction of tables, should not replace experimental verification. At the same time, due to the great intricacy of some theories, it becomes important how successfully these theories explain ALL the properties of the object under study, or at least ALL its MAIN properties.

Conflicts of Interest
The author declares no conflicts of interest regarding the publication of this paper.