Elementary Particles Result from Space-Time Quantization
A. Meessen
UCLouvain, Louvain-la-Neuve, Belgium.
DOI: 10.4236/jmp.2021.1211094

Abstract

We justify and extend the standard model of elementary particle physics by generalizing the theory of relativity and quantum mechanics. The usual assumption that space and time are continuous implies, indeed, that it should be possible to measure arbitrarily small intervals of space and time, but we do not know whether that is true. It is thus more realistic to consider an extremely small “quantum of length” of yet unknown value a. It is only required to be a universal constant for all inertial frames, like c and h. This yields a logically consistent theory and accounts for elementary particles by means of four new quantum numbers. They define “particle states” in terms of modulations of wave functions at the smallest possible scale in space-time. The resulting classification of elementary particles also accounts for dark matter. Antiparticles are redefined without needing negative energy states, and recently observed “anomalies” can be explained.


1. Introduction

Many types of elementary particles have been discovered and characterized by means of empirically defined quantum numbers. The resulting standard model (SM) describes known facts, but the existence and properties of these quantum numbers could not be explained. There are even new experimental results that do not agree with predictions based on the SM in its present form. They may be called “anomalies”, as if they were minor exceptions, but it becomes increasingly obvious that some kind of new physics is needed. A science writer summarized the present situation by stating that the SM is “brilliant, but not perfect” and perhaps even “completely flawed” [1]. It is at least partially adequate, but requires justifications and extensions. Actually, we are living through a typical “crisis”, like those that could only be solved by abandoning some basic assumption [2]. We thus have to begin by recalling why this was necessary and how it led to the development of the theory of Special Relativity (SR) and Quantum Mechanics (QM). They are even the stepping stones that will allow us to go further in the same direction.

Classical mechanics was based on so-called “principles”, since they could not be proven, but seemed to be certain. Actually, they were suggested by extrapolating observations. It seemed obvious, for instance, that measurements could be made ideally precise and that possible results can vary in a continuous way. Newton postulated also the existence of a motionless “absolute space” and a steady “flow of time”, to justify the special status of inertial reference frames. They are those frames where free particles remain in the same state of motion, unless they are perturbed by an applied force. All these frames would have a special status, if they were moving at some constant velocity with respect to absolute space in absolute time. The resulting theory was very successful, but there appeared some unexpected facts, requiring fundamental modifications.

The discovery of interference phenomena proved that there are “light waves”. They were assumed to be similar to sound waves and should thus result from propagating vibrations. Since it had been stated long ago that even apparently empty space is filled by a peculiar substance, called “aether”, this medium seemed to be adequate. Being globally at rest, it would even justify Newton’s concept of an absolute space. However, when Michelson tried to determine the velocity of the Earth with respect to this aether, he found no evidence of any relative motion. This result led Voigt to propose that aether is a special medium, since light waves always propagate there at the same velocity c for any inertial reference frame [3]. Even Maxwell’s theory of EM waves treated vacuum as if it were a medium that is electrically and magnetically polarizable. These properties then determined the value of the velocity c. Lorentz adopted this point of view, but attributed the constancy of c in this aether to a more general transformation law for space-time coordinates [4].

Albert Einstein saw that these interpretations of Michelson’s results are inconsistent with Galilean relativity and thus raise a fundamental problem. He solved it in 1905, by introducing a radically new idea: measurements of space-time coordinates in inertial reference frames can only yield results that always lead to the same value c for the velocity of EM waves in vacuum. This value is thus a universal constant, but not because of a peculiar medium. It results from a restriction that Nature imposes on measurements of lengths and durations in inertial reference frames. Space and time are thus not substance-like, but merely defined by what is measurable. This idea led to the development of the theory of SR. One of its major consequences was that the total energy E of any material body that is freely moving in an inertial frame at a velocity v is determined by

$E^2 = E_o^2 + (pc)^2$ (1)

Here, $E_o = m_o c^2$ and the momentum $p = m_o v$, where $m_o$ is the rest-mass of this body. The development of QM resulted from other unexpected experimental results. By analyzing observed properties of EM waves inside cavities at various temperatures, Planck discovered in 1900 that light is emitted and absorbed by the walls in the form of energy quanta $E = h\nu$ for any particular frequency ν. Einstein related this fact to (1) and concluded that light is composed of particles. Since EM waves propagate in vacuum so that their frequency ν and wavelength λ are related by $\nu = c/\lambda$, the rest-mass of these light quanta is $m_o = 0$. Their energy E and momentum p are thus defined by

$E = h\nu$ and $p = h/\lambda$ (2)
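
As a quick numerical illustration of (2), the following sketch evaluates $E = h\nu$ and $p = h/\lambda$ for a hypothetical photon of wavelength 500 nm (an arbitrary example value, not taken from the text):

```python
# Photon energy and momentum from relations (2), for a hypothetical
# wavelength of 500 nm (arbitrary example value).
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light in vacuum, m/s

lam = 500e-9         # wavelength, m
nu = c / lam         # frequency, since nu = c / lambda
E = h * nu           # E = h*nu, about 4.0e-19 J (roughly 2.5 eV)
p = h / lam          # p = h/lambda, about 1.3e-27 kg m/s
print(E, E / 1.602176634e-19, p)
```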

Planck had also discovered that energy exchanges are possible with vibrating electrically charged particles when their motions are restricted by a “quantization rule”. Bohr applied it to circular motions of electrons inside atoms and found excellent results. This quantization rule related the positions and velocities along a closed trajectory to Planck’s constant h. It was empirically justified, but unexplained until Louis de Broglie introduced another revolutionary idea: even electrons are associated with a wave by means of (2). Closed orbits have then to allow for stationary waves.

Since the relation p = h/λ is only valid for free particles, while inside atoms the electrons are constantly subjected to forces, Schrödinger generalized this relation in 1925. He did this by defining local values of the components of the momentum vector p and quasi-instantaneous values of the energy E in terms of first-order partial derivatives of a wave function ψ(r,t). Born showed that this function defines the probability amplitude for the electron to be present at the space-time point designated by r and t. Heisenberg explained in 1927 that physical laws have to include Planck’s constant h, since it limits simultaneous knowledge of pairs of conjugate observables, like x and px or t and E. There are thus two universal constants, c and h, imposing universal restrictions on some types of measurements.

These facts are well-known, but did we realize that a similar change is now required? Actually, we continued to believe that space and time have to be continuous. This idea was suggested by another extrapolation. Since differential equations for ψ(r,t) are valid at atomic and nuclear scales, it was assumed that they should even remain valid at arbitrarily small scales. We wondered therefore in the 1960s if Nature could not impose a third restriction. Can we really exclude the existence of an ultimate limit for the smallest measurable length? To answer this question, we considered that the value a of this “quantum of length” is unknown, but has to be a universal constant for all inertial reference frames, like c and h. This condition was sufficient to construct a theory of space-time quantization (STQ). It showed that SR and QM have to be generalized for wavelengths λ → 2a and energies E → hc/2a when a ≠ 0, but that this does not lead to internal contradictions [5]. It is only necessary to modify some habits of thought. Once this was established, we still had to find out if it is physically true that a ≠ 0, though this value may be extremely small. Otherwise, we would have proven that space-time is really continuous.

We thus verified whether STQ could account for known properties of elementary particles [6]. It had been possible, indeed, to construct the SM in the 1970s, by introducing empirically defined quantum numbers, but their existence remained unexplained. The basic problem was that elementary particles have no parts and thus no structure, but can be distinguished from one another by means of quantum numbers. They had thus to result somehow from properties of their wave function (for a single particle) or field (for any number of particles of the same type). It turned out that this is true, because of STQ. It accounts for the mysterious quantum numbers and justifies the SM [7].

The purpose of this article is to present the results of further investigations. Chapter 2 briefly recalls the basic concepts of STQ and defines “particle states” by possible variations of wave functions at the smallest possible scale in space and time. It is also necessary to reexamine some related concepts. Chapter 3 provides natural extensions of the SM of elementary particle physics. Chapter 4 considers recently discovered anomalies and shows that they can be explained by STQ. Chapter 5 summarizes the background and main results of this exploration. We also endeavor to remain understandable to non-specialists, since STQ concerns our view of reality and could thus be relevant for all humans.

2. Basic Concepts

2.1. Generalized Definition of States of Motion

First of all, we have to know why space has been assumed to be continuous. Some philosophers of Greek Antiquity thought that everything is constituted of indivisible entities, though they are too small to be observed. However, the Pythagorean theorem was sufficient to prove that this is not true for lengths. If the sides of a square are equal to an atom of length a, the diagonal of this square is $a\sqrt{2}$ instead of a multiple of a. The sides and the diagonal of a square are not commensurable. STQ discards this objection, since it concerns only possible results of independent measurements, performed along different reference axes in an arbitrarily chosen inertial frame. All other lengths can then be calculated.

The existence of a finite, universally constant quantum of length a would thus imply a quantization of possible values of space-time coordinates (x, y, z, ct). It might be objected that this is incompatible with the Lorentz transformation, but this law was based on the assumption that the spectrum of possible values of space-time coordinates is continuous. Let us consider this problem in more detail. A single elementary particle and the center of mass of any object are points that could be precisely localized by measuring their coordinates. It is then possible to define states of motion of any particle that is freely moving along the x-axis of an inertial reference frame by means of its wave function

$\psi(x,t) = A\, e^{i(kx - \omega t)}$ where $p = \hbar k$ and $E = \hbar\omega$ (3)

The amplitude A is constant, while $k = 2\pi/\lambda$, $\omega = 2\pi\nu$ and $\hbar = h/2\pi$. This accounts for (2) and also for Einstein’s relation (1) when $\psi(x,t)$ satisfies the Klein-Gordon equation

$\partial_x^2 \psi - \partial_{ct}^2 \psi = (m_o c/\hbar)^2\, \psi$ (4)

This differential equation has to be generalized when a ≠ 0. It is sufficient to modify the definition of the local values of the momentum p at the smallest possible scale, so that

$\partial_x^2 \psi \;\to\; D_x^2 \phi(x) = \dfrac{\phi(x+a) + \phi(x-a) - 2\phi(x)}{a^2}$ (5)

It follows that

$D_x^2\, e^{ikx} = \dfrac{e^{ika} + e^{-ika} - 2}{a^2}\, e^{ikx} = -\dfrac{\sin^2(ka/2)}{(a/2)^2}\, e^{ikx}$
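
This eigenvalue relation is easily checked numerically. The following Python sketch (with arbitrary, hypothetical values of a, k and x) applies the finite-difference operator of (5) to a plane wave and compares the result with $-\sin^2(ka/2)/(a/2)^2$ and with the continuum limit $-k^2$:

```python
import numpy as np

# Action of the finite-difference operator D_x^2 of (5) on exp(ikx):
# the eigenvalue is -sin^2(ka/2)/(a/2)^2, which tends to -k^2 when a -> 0.
a = 0.1   # hypothetical quantum of length (arbitrary units)
k = 2.3   # hypothetical wave number
x = 0.7   # arbitrary evaluation point

phi = lambda y: np.exp(1j * k * y)
D2 = (phi(x + a) + phi(x - a) - 2 * phi(x)) / a**2   # D_x^2 phi(x)
eigenvalue = -np.sin(k * a / 2)**2 / (a / 2)**2      # predicted eigenvalue
print(D2 / phi(x), eigenvalue, -k**2)                # first two agree; -k^2 is the a -> 0 limit
```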

Since these considerations apply also to possible values of E/c when ct is quantized, Einstein’s energy-momentum relation (1) becomes

$\sin^2\!\left(\dfrac{aE}{2\hbar c}\right) - \sin^2\!\left(\dfrac{ap}{2\hbar}\right) = \sin^2\!\left(\dfrac{aE_o}{2\hbar c}\right)$ (6)

This relation is more general, but implies the existence of a finite limit for the highest possible energy of free particles in inertial reference frames. Indeed, the energy E is only real when

$\dfrac{aE}{2\hbar c} \le \dfrac{\pi}{2}$ or $E \le E_u = \dfrac{hc}{2a}$

Initially, we thought that $E_u$ defines the total energy content of our Universe, but it is more appropriate to say that it is the energy of a photon of smallest possible wavelength λ = 2a. We can even identify this photon with the single particle that constituted the initial state of our Universe. Usual theories have to assume an infinite energy, which yields an unphysical singularity. The Lorentz transformation seemed to exclude a finite quantum of length, but that is not true. The Lorentz transformation can indeed be generalized by requiring the invariance of (6). We even get a deterministic law, since the spectrum of possible values of E and p remains continuous when space-time is quantized. We have thus to conclude that it is possible to account for the invariance of c, h and a for all inertial frames. The proposed theory of STQ is logically consistent.
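
As a consistency check, the following sketch verifies numerically that (6) reduces to Einstein’s relation (1) when a is very small, and it evaluates the upper bound $E_u = hc/2a$. The value of a is unknown; the Planck length is used here only as a hypothetical example.

```python
import numpy as np

hbar = 1.054571817e-34      # J s
h = 2 * np.pi * hbar
c = 2.99792458e8            # m/s

def lhs(E, p, a):
    # left-hand side of relation (6)
    return np.sin(a * E / (2 * hbar * c))**2 - np.sin(a * p / (2 * hbar))**2

Eo = 8.187e-14              # electron rest energy, J (about 0.511 MeV)
p = 2.0e-22                 # hypothetical momentum, kg m/s
E1 = np.sqrt(Eo**2 + (p * c)**2)     # energy given by relation (1)

a = 1.6e-35                 # hypothetical value: a = Planck length, m
print(lhs(E1, p, a), np.sin(a * Eo / (2 * hbar * c))**2)  # nearly equal, so (6) -> (1)
print(h * c / (2 * a))      # E_u = h*c/(2a), about 6e9 J for this choice of a
```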

2.2. Definition of Particle States

Several decades of impressive experimental investigations and theoretical research revealed that elementary particles are characterized by a set of quantum numbers, because of conservation laws that are similar to those of QM. Why these quantum numbers exist remained unexplained, but we expected that they could be related to properties of wave functions. They can only be defined, indeed, for those points of space-time where elementary particles could be localized by means of ideally precise measurements. This depends on the value of a, but the origin and orientation of the x-axis can be freely chosen. It follows that when x is a possible result, it is sufficient to reverse the orientation of the chosen axis to see that −x is also one. Since their separation has also to be measurable, 2x = na, where n is an integer that can be even or odd. This condition yields two possible spectra:

$x = 0, \pm a, \pm 2a, \ldots$ and $x = \pm\dfrac{a}{2}, \pm\dfrac{3a}{2}, \ldots$ (7)

Figure 1. The existence of a finite quantum of length a yields two spectra for possible results of ideally precise measurements of the coordinate x. This allows for modulations of the function ϕ ( x ) at the smallest possible scale.

The normally expected lattice includes x = 0, but there is also a symmetrically intercalated one. In general, the position of an elementary particle is not precisely known, but the probability distribution $|\phi(x)|^2$ is positive and single-valued for every measurable value of x. This yields a degree of freedom, since the function $\phi(x)$ can vary along the x-axis as indicated in Figure 1.

The wave function or field $\phi(x)$ can thus have the same sign or the opposite sign at neighboring points. Since $\phi(x)$ is a complex function, we can even define the modulation on the intercalated lattice by

$e^{iu_x\pi} = \pm 1$ where $u_x = 0, \pm 2, \ldots$ or $u_x = \pm 1, \pm 3, \ldots$ (8)

Figure 2 accounts for (8), since the arrow can only point upward or downward. Its actual orientation is defined by the value of the quantum number ux.

Since it is only required that ux is an integer, this allows for transitions by means of quantum jumps. They correspond to sudden rotations of the arrow by one or several half-turns toward the left or the right. The same reasoning is valid for the four reference axes that are used to measure space-time coordinates (x, y, z, ct) in any freely chosen inertial frame. Particle states are thus unambiguously defined by four quantum numbers (ux, uy, uz, uct) in any inertial reference frame. Since they are associated with another set (−ux, −uy, −uz, −uct), they respectively define particle and antiparticle states. Particles were defined as being those entities that are more numerous in our Universe.

Figure 2. Particle and antiparticle states are defined by the quantum number ux.

It should be noted that the four reference axes need not be orthogonal to one another, and that by inverting the orientation of the x-axis in Figure 1, we automatically reverse rotations in every plane that is perpendicular to this axis. A left one becomes a right one and vice-versa. The definition of particle and antiparticle states is thus intimately related to space and time. The transformation x → −x implies that ux → −ux. The parity operator P inverts the orientation of the 3 chosen reference axes and thus also the sign of the spatial quantum numbers (ux, uy, uz). The time inversion operator T changes the sign of uct and C = PT transforms a particle into its antiparticle.

For historical reasons, C was called the “charge conjugation” operator, but even electrically neutral particles have antiparticles. The charge q = Qe is actually defined by means of the energy-momentum relation (6), when p is replaced by p + qA and E by E + qΦ, where A and Φ are the vector and scalar potentials of the EM field. They can vary in space and time, but all potentials involve an arbitrariness, since only their derivatives are physically relevant. This arbitrariness can be removed by imposing for instance that Φ = 0. The charge number Q is then defined by

$Q = \dfrac{u_x + u_y + u_z}{3}$ (9)

The average value of the spatial quantum numbers is sufficient, since Q could also be defined by the electric field alone. The theorem that CPT = I is a direct consequence of STQ, though it is also valid in general and thus even for the usual theories, treating space-time as if it were continuous.

2.3. The Standard Model and Dark Matter

The first great success of the SM of elementary particle physics was to account for the fact that nucleons are composed of 3 elementary particles. They were called up and down quarks, by analogy with possible orientations of the spin vector of electrons along any given z-axis. It became thus customary to speak of u and d quarks. They are usually considered as physically real entities, but are merely two possible particle states. Since protons and neutrons respectively carry the charges Q = 1 and Q = 0, they have to contain (uud) and (udd) quarks, when Q = +2/3 for u quarks and Q = −1/3 for d quarks. Even (uuu) and (ddd) baryons could be produced by means of accelerators, but quarks are spin 1/2 fermions. It was thus necessary to assume that quarks have another property that yields 3 distinct states. By analogy with trichromatic color perception, they were called red, green and blue (R, G, B) color states. This terminology indicates that the reason for the existence of this property was unknown.

According to STQ, the simplest particle and antiparticle states would be defined by (ux, uy, uz) when these quantum numbers are equal to 0 and ±1, while uct = 0. Using orthogonal axes, we get a cubic lattice, but in Figure 3 we consider only those lattice points that form two adjacent cubic cells with a common diagonal. The u and d quarks are then defined by (1, 1, 0), (1, 0, 1), (0, 1, 1) and (0, 0, −1), (0, −1, 0), (−1, 0, 0). These states correspond to the vertices of the red and green triangles. Because of (9), the u and d quarks carry the required charges +2/3 and −1/3. They have three possible color states because of possible permutations of their spatial u-quantum numbers. Moreover, we see that there are 3 and only 3 possible color states, since space is three-dimensional. Particle states are represented in Figure 3 by black dots and antiparticle states by white ones. The red and green triangles are equilateral, while leptons (e, νe) and their antiparticles are represented by points that are situated on the Q-axis.

We know that our Universe contains about 5 times more dark matter (DM) than ordinary matter, but the SM did not account for elementary DM particles, since they could not be detected at CERN or other accelerators. However, STQ implies that (ux, uy, uz) = (1, −1, 0) is also possible when uct = 0. This yields 6 possible permutations, corresponding to the vertices of the blue hexagon in Figure 4. It is situated in the plane Q = 0, which is parallel to those of quarks and antiquarks in Figure 3.
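
A minimal enumeration sketch (in Python) makes this bookkeeping explicit: it lists all states with spatial quantum numbers in {−1, 0, 1} and uct = 0, and groups them by the charge number of relation (9). The grouping reproduces the pattern of Figure 3 and Figure 4.

```python
from itertools import product
from fractions import Fraction

# Enumerate the simplest particle states of STQ: (ux, uy, uz) with entries
# in {-1, 0, 1} and uct = 0, grouped by the charge number Q of relation (9).
# One finds 3 u-type and 3 d-type quark states, their antiquark states,
# the lepton-like states on the Q axis, and the 6 nark states of the hexagon
# (the point (0, 0, 0) with Q = 0 corresponds to the neutrino).
states = {}
for u in product((-1, 0, 1), repeat=3):
    Q = Fraction(sum(u), 3)
    states.setdefault(Q, []).append(u)

for Q in sorted(states, reverse=True):
    print(f"Q = {str(Q):>4}: {len(states[Q])} states  {states[Q]}")
```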

We will show in the next section that elementary DM particles are analogous

Figure 3. STQ accounts for common types of quarks and leptons.

Figure 4. Dark matter particle states correspond to the hexagon and its center. Excited states d* of d quarks and their antiparticles are also represented.

to u and d quarks. These DM particles can also be bound to one another by exchanging gluons. They are thus neutral quarks. For simplicity, we called them “narks”. They constitute various types of composite DM particles. We can even specify the composition of simple ones [7]. Gravitational effects were not sufficient to determine the nature and rest-masses of DM particles, but they are analogous to nucleons. Thus, we called them “neutralons”. Figure 4 shows that there are 3 types of narks (ne) and antinarks. The center of the hexagon corresponds to another nark (no) and its antiparticle. The corresponding dots coincide, like those of the electron neutrino (νe) and its antiparticle in Figure 3.

Figure 4 also accounts for excited states of d quarks, represented by the vertices of the green triangle. These d* particles are of type (1, −1, −1), with three possible permutations. Their charge number is Q = −1/3, as for d quarks. Their antiparticle states are also represented. We can even expect the existence of u* states, characterized by (0, 0, 2) with three possible permutations and Q = 2/3. The rest-energies of these particles are too high for producing them at CERN, but they might be discovered later on by increasing the energy of the colliding particles.

Color states of quarks and narks are specified by the convention of Figure 5. It corresponds to Figure 3 and Figure 4, when we look along the Q-axis. The triangles for u and d quarks are then superposed and allow us to attribute identical (R, G, B) colors to both of them. Particles are represented again by black dots and antiparticles by white ones. They are opposite with respect to the center and have anticolors. The color states of ne narks are defined by associating a color with a different anticolor, to get for instance $R\bar{G}$, and $\bar{R}G$ for the antinark. The colorless no nark and antinark states can be viewed as resulting from two different superpositions of the $R\bar{R}$, $G\bar{G}$ and $B\bar{B}$ color states.

2.4. Conservation Laws for u-Quantum Numbers

The construction of the SM resulted from conservation laws, characterizing

Figure 5. Definition of color and anticolor states of quarks and narks. The central nark and antinark states are colorless.

physical systems that can be subjected to transformations without leaving any detectable trace. The conservation laws for the energy E and the components of the momentum vector p result from translations along the chosen reference axes or modifications of their origin. STQ even accounts for the invariance of the energy $E_u = hc/2a$ for all inertial frames. In QM, the orbital angular momentum vector L and its component Lz also yield conservation laws. Victor Weisskopf, who was Director-General of CERN between 1961 and 1965, thus regarded elementary particles as possible states that yield a spectroscopy comparable to those of atomic and nuclear physics [8]. Nevertheless, it is different and STQ tells us why.

Figure 1 makes it even intuitively clear that transitions between different states are possible when they yield identical wave functions for the initial and final states. This condition is equivalent to a vector addition law. It applies also to the constitution of composite particles or their dissociation. Color states of fermions can be modified by interactions with bosons. They are characterized by the same u-quantum numbers, but are different because of their spin. We use thus round brackets for fermions and square brackets for bosons. The following relations account for the fact that quarks are bound to one another inside nucleons by exchanging gluons. This is also true for narks in neutralons.

$(0, 0, 1) + [1, 0, -1] = (1, 0, 0)$ or $G + R\bar{G} \to R$

$(1, 0, -1) + [0, -1, 1] = (1, -1, 0)$ or $R\bar{G} + \bar{B}G \to R\bar{B}$
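
These relations can be checked mechanically: the u-quantum numbers of the initial fermion and of the absorbed boson must add up, component by component, to those of the final fermion. A small Python sketch performs this bookkeeping:

```python
# Vector addition law for u-quantum numbers in the two relations above.
def absorb(fermion, boson):
    return tuple(f + b for f, b in zip(fermion, boson))

# (0, 0, 1) + [1, 0, -1] = (1, 0, 0): G absorbs an R anti-G gluon and becomes R
assert absorb((0, 0, 1), (1, 0, -1)) == (1, 0, 0)
# (1, 0, -1) + [0, -1, 1] = (1, -1, 0): R anti-G absorbs an anti-B G gluon and becomes R anti-B
assert absorb((1, 0, -1), (0, -1, 1)) == (1, -1, 0)
print("u-quantum numbers are conserved in both relations")
```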

2.5. Dirac’s Concept of Antiparticles

Dirac wanted in 1928 to combine QM with SR in a new way, since Einstein’s relation (1) leads to the Klein-Gordon Equation (4). It contains a second-order time derivative, while Schrödinger’s equation involves only a first-order time derivative. The total probability that an electron is somewhere in space is then constant and equal to 1, but this is not true for the Klein-Gordon equation. Actually, this results from the fact that SR defines a finite rest-energy, allowing for creation and annihilation of electrons. However, the Klein-Gordon equation could be replaced by a set of first-order differential equations. Dirac thought therefore that permanent existence should even be possible for relativistic electrons. Being profoundly interested in clarifying the basic principles of QM, Dirac had been impressed by Pauli’s elegant method to account for the spin of electrons. He adapted this theory to solve the problem of relativistic electrons.

Since Dirac’s theory had important consequences for elementary particle physics, we recall its essential ingredients. Electrons were initially assumed to be small spinning balls, but Pauli allowed them to be single points. The spin vector S of an electron had then to be considered as being analogous to the angular momentum vector $\mathbf{L} = \mathbf{r} \times \mathbf{p}$. The vector product means that

$L_x = y\, p_z - z\, p_y$ while $\hat{L}_x = y\, \hat{p}_z - z\, \hat{p}_y$

defines the corresponding operator. Since $\hat{p}_x \psi = -i\hbar\, \partial_x \psi$, the commutator

$[x, \hat{p}_x] = x\hat{p}_x - \hat{p}_x x = i\hbar$ and $[\hat{L}_x, \hat{L}_y] = i\hbar\, \hat{L}_z$

This implies that only one component of the angular momentum vector L is precisely measurable. The spin vector S of the electron has the same properties, but allows only for $S_z = \pm\hbar/2$. Pauli thus defined the operators for the 3 components of S by expressions like

$\hat{S}_x = \dfrac{\hbar}{2}\sigma_x$ and $[\sigma_x, \sigma_y] = 2i\sigma_z$

These conditions are satisfied by 2 × 2 matrices when

$\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$

It follows that $\sigma_x^2 = 1$ and $\sigma_x\sigma_y + \sigma_y\sigma_x = 0$, for instance. Pauli published this theory in 1927 and Dirac realized in 1928 [9] that the operator $\hat{p}$ for the magnitude p of the momentum vector p is then

$\hat{p} = \sigma_x \hat{p}_x + \sigma_y \hat{p}_y + \sigma_z \hat{p}_z$ and $\hat{p}^2 = \hat{p}_x^2 + \hat{p}_y^2 + \hat{p}_z^2$ (10)

He could thus account for Einstein’s relation (1), when the energy operator is

$\hat{E} = c\sum_k \alpha_k \hat{p}_k + \beta E_o$ and $\hat{E}^2 = E_o^2 + c^2\sum_k \hat{p}_k^2$ (11)

The index k = x, y, z and both conditions are satisfied when α k and β are 4 × 4 matrices, constructed by means of Pauli’s matrices and 2 × 2 unit and zero matrices, so that

$\alpha_x = \begin{pmatrix} 0 & \sigma_x \\ \sigma_x & 0 \end{pmatrix}, \quad \alpha_y = \begin{pmatrix} 0 & \sigma_y \\ \sigma_y & 0 \end{pmatrix}, \quad \alpha_z = \begin{pmatrix} 0 & \sigma_z \\ \sigma_z & 0 \end{pmatrix}$ and $\beta = \begin{pmatrix} I & 0 \\ 0 & -I \end{pmatrix}$
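
A short numerical check with numpy (using arbitrary, hypothetical values for Eo and the momentum components) confirms that these matrices anticommute as required and therefore satisfy both conditions of (11):

```python
import numpy as np

# Check that the alpha_k and beta matrices built from Pauli's matrices
# give E^2 = Eo^2 + c^2 (px^2 + py^2 + pz^2), as required by (11).
I2, Z = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

alphas = [np.block([[Z, s], [s, Z]]) for s in (sx, sy, sz)]
beta = np.block([[I2, Z], [Z, -I2]])

c, Eo = 1.0, 0.5                   # hypothetical values (natural units)
p = np.array([0.3, -0.7, 1.1])     # hypothetical momentum components
E_op = c * sum(a * pk for a, pk in zip(alphas, p)) + beta * Eo
print(np.allclose(E_op @ E_op, (Eo**2 + c**2 * p @ p) * np.eye(4)))   # True
```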

These matrices can be applied to the wave function $\psi = (A)\, e^{i(\mathbf{k}\cdot\mathbf{r} - \omega t)}$ of a freely moving particle when (A) is a 4-component column matrix. It can be decomposed into 2-component (A±) matrices, accounting for spin up and spin down states, while the electron can have positive and negative energies. Dirac’s theory was severely criticized by Heisenberg and Pauli, since it implies that an electron could drop from its normal positive energy state to a negative energy state by emitting a photon. Its energy would then even drop to −∞. All electrons would thus be unstable, which is not true. Since Einstein’s relation (1) corresponds to a hyperbola, it allows for positive and negative energy states, but the non-relativistic approximation would then yield

$E = \pm E_o\left(1 + \dfrac{(pc)^2}{E_o^2}\right)^{1/2} \approx \pm\left(E_o + \dfrac{m_o v^2}{2}\right)$

Only positive values are acceptable, since the kinetic energy is defined by the work of the force, which had to be applied to increase the velocity from 0 to v. It happens even quite often that a mathematically possible solution has to be rejected for physical reasons. Dirac saw the validity of these objections, but found a method to justify the existence of negative energy states. Since electrons are spin 1/2 particles, they are subjected to Pauli’s exclusion principle. Dirac proposed therefore in 1930 that all negative energy states are occupied by electrons. When one of these electrons is excited to acquire a positive energy, it creates a “hole” that behaves as if all remaining particles were equivalent to a single positive particle. This is known for semiconductors, where thermally excited electrons obey Fermi-Dirac statistics. Moreover, the existence of positive electrons was discovered in 1932.

Dirac was thus awarded the Nobel Prize in Physics in 1933 for predicting and explaining the existence of the positron. In his lecture [10], he stated that the theory of electrons and positrons is self-consistent and “fits the experimental facts so far as is yet known.” He did not exclude possible changes. They are even necessary, since Dirac solved one problem by creating a bigger one: he assumed indeed the existence of a new aether, corresponding to an infinite number of electrons. They fill the whole “Dirac sea”, though it is bottomless when a = 0. This would even apply to all fermions, but is that necessary?

2.6. Antiparticles without Negative Energies

STQ accounts for antiparticles by means of the sign of all u-quantum numbers. Their existence was already known in continuum theories, but even there, they did not require negative energy states. To prepare the proof, we note that Einstein’s relation (1) can also be written in the following form:

$E^2 - E_o^2 = (E - E_o)(E + E_o) = p^2$ when $c = 1$

It is then sufficient to use the 2 × 2 Pauli matrices and Dirac’s definitions (10) and (11) of the operators $\hat{E}$ and $\hat{p}$, to get two coupled equations:

$(\hat{E} - E_o)\psi_+ = \hat{p}\,\psi_-$ and $(\hat{E} + E_o)\psi_- = \hat{p}\,\psi_+$

They yield $E = \pm E_o$ when p = 0, without needing 4 × 4 matrices, but (1) is also equivalent to

$E^2 = (E_o + ip)(E_o - ip)$ when $c = 1$

This expression yields two first-order differential equations:

$i\hat{E}\psi = (E_o + i\hat{p})\psi^*$ and $-i\hat{E}\psi^* = (E_o - i\hat{p})\psi$ (12)

There are thus two possible states, defined by

$\psi = (\uparrow)\, e^{i(kx - \omega t)}$ and $\psi^* = (\downarrow)\, e^{-i(kx - \omega t)}$

Taking the complex conjugate is equivalent to reversing the orientation of the x- and t-axes. This yields an antiparticle state. The 2-component spinors (↑) and (↓) indicate that when the orientation of the spin is well-defined for the particle state, time reversal inverts its orientation. This results from defining the spin vector S as being analogous to the angular momentum vector $\mathbf{L} = \mathbf{r} \times \mathbf{p}$. Time inversion changes the sign of p and thus also of L and S.

The young Italian physicist Ettore Majorana expressed in 1932 strong opposition to Dirac’s concept of negative energies [11]. He belonged to Fermi’s group of physicists, but did not like to publish his brilliant ideas. Eventually, because of Fermi’s insistence, he agreed to publish his proof that the very notion of negative energy states can be avoided [12]. Majorana derived his equation from a very general variational principle, but it is equivalent to (12) and can thus be established in a more direct way.

2.7. Feynman’s Concept of Antiparticles and Space-Time

Richard Feynman admired Dirac’s work, but had been struggling since 1947 with Dirac’s weird concept of antiparticles. His preference for concrete representations led him to use graphs to represent possible transformations of elementary particles. Since electrons have a finite rest-mass, they can only move at velocities v < c, but also backward in time. This is merely a matter of reference frames. Feynman thus said in 1986 that a positron is an electron that is moving backward in time [13]. Since this refers to time-inversion, he anticipated an essential result of STQ, though it was not yet known that a ≠ 0.

Feynman was also preoccupied by another fundamental problem. In quantum electrodynamics, every electron is accompanied by virtual photons, which are constantly emitted and reabsorbed, but the energy of these virtual photons is $hc/\lambda$. When a bare electron is dressed by its cloud of virtual photons, its mass and electric charge become infinite when λ → 0. Virtual photons can even create virtual electron-positron pairs, aggravating the divergence problem. However, it is sufficient to replace the infinite mass and electric charge of the electron by the observed ones, to get finite values for other calculated quantities. This “renormalization procedure” was developed by several physicists, including Feynman. In his Nobel Prize lecture of 1965, he mentioned how he proceeded, but added that renormalization merely “sweeps the difficulties under the rug” [14]. He had mentioned the reason one year earlier [15] in a public lecture, devoted to “seeking new laws”:

“It turns out that it is possible to sweep the difficulties under the rug by a certain crude skill, and temporarily we are able to keep on calculating… We are in some trouble… In the past it has always turned out that some deeply held idea has to be thrown away… I believe that the theory that space is continuous is wrong, because we get these infinities… Here, of course, I am only making a hole, and not telling you what to substitute. If I did, I should finish this lecture with a new law… The problem is not only what might be wrong but what, precisely, might be substituted in place of it… Suppose that space consists of a series of dots, and that the space between them does not mean anything, and that the dots are in a cubic array. Then we can prove immediately that this is wrong.”

This statement locates the difficulty: it is not possible to assume the existence of a rigid space-time lattice. We defined therefore the quantum of length a in terms of restrictions imposed on possible results of measurements. Actually, it was already clear at the time of Lorentz that a “cut-off” would be needed for the spectrum of possible wavelengths [16]. However, this idea was strongly opposed, since such a cut-off would be incompatible with the Lorentz transformation. Lattice theories have been developed, but only as a mathematical trick for calculating. STQ is fundamentally different.

2.8. Parity Non-Conservation for Weak Interactions

Weak interactions were discovered through beta decay. Lee and Yang analyzed this process in 1956 in terms of symmetry operators. Figure 6(a) recalls that the parity operator P inverts the orientation of the 3 spatial reference axes. The definition of the spin by a vector product implies then that the orientation of the spin (indicated in red) is inverted like the z-axis. However, the orientation of the spin can also be defined by considering small spinning balls. When we consider reflections by a mirror, Figure 6(b) shows that the z-axis can then be inverted without modifying the orientation of the spin. Does this mean that this orientation is not related to space and time?

Lee and Yang proposed to find out by an adequate experiment. It was performed in 1957 by C. S. Wu. She opted for beta decay of Co-60 nuclei, but a well-defined orientation of the spin of nuclei by means of a magnetic field requires a temperature of about 0.01 K. The measurements had thus to be performed at the National Bureau of Standards. It turned out that “a large beta

Figure 6. Does the parity operator modify the orientation of the spin vector along a given axis or not?

asymmetry was observed” [17]. It appeared even [18] that the probability distribution W(θ) for electron emission with respect to the initial orientation of the spin is

$W(\theta) = 1 - B\cos\theta$ where $B \approx 1/3$
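
A small numerical illustration (Python, with B = 1/3 as quoted above) shows how strongly this distribution favors emission opposite to the nuclear spin:

```python
import numpy as np

# Integrate W(theta) = 1 - B*cos(theta) over the two hemispheres,
# including the solid-angle weight sin(theta).
B = 1 / 3
th_f = np.linspace(0, np.pi / 2, 2001)       # along the spin direction
th_b = np.linspace(np.pi / 2, np.pi, 2001)   # opposite to the spin direction
W = lambda th: (1 - B * np.cos(th)) * np.sin(th)

forward = np.trapz(W(th_f), th_f)    # = 1 - B/2, about 0.833
backward = np.trapz(W(th_b), th_b)   # = 1 + B/2, about 1.167
print(forward, backward, (backward - forward) / (backward + forward))   # asymmetry ~ 1/6
```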

Lee and Yang concluded that parity conservation is broken and that neutrinos should be characterized by two-component spinors [19]. They earned the Nobel Prize in Physics of 1957 “for their penetrating investigation of the so-called parity laws, which had led to important discoveries regarding the elementary particles”. It is noteworthy that Yang wondered in his Nobel Lecture “whether in the description of such phenomena the usual concept of space and time is adequate” [20]. Lee added that “hidden properties are usually revealed only through a fundamental change of our basic concepts” [21].

Parity non-conservation was rapidly confirmed for other systems, involving weak interactions, but nine years later, Lee wrote in the abstract of a review article [22]: “The more we learn about space inversion, time reversal and particle‐antiparticle conjugation, the less we seem to understand them… Still very little is known about the true nature of these discrete symmetries.” Forty years later, Lee insisted that “the concept of particles and antiparticle rests on the combined CPT invariance” [23]. He had been impressed by Pauli’s proof of the CPT theorem by means of the Lorentz transformation. This theorem revealed, indeed, that properties of elementary particles are somehow related to space and time. STQ explains this connection in a more direct and obvious way.

3. Extensions of the Standard Model

3.1. Different Generations of Fermions and Bosons

The initial version of the SM was represented in Figure 3. Since Figure 4 includes narks and some excited states, it corresponds already to an extension of the SM when uct = 0. Other types of elementary particles were also discovered and found to have the same family structure as those of Figure 3. They were thus said to belong to other generations, distinguished from one another by a property that was called “flavor”. This refers again to sensory perceptions, since the real cause was unknown. However, STQ yields 4 quantum numbers (ux, uy, uz, uct) for fermions and [ux, uy, uz, uct] for bosons, where $u_{ct} = 0, \pm 1, \pm 2, \ldots$ Since different generations were distinguishable from one another by greater rest-masses, it is sufficient to designate the first, second and third generation by $|u_{ct}| = 0, 1, 2$. This yields Table 1.

The first column specifies typical values of (ux, uy, uz), since they determine the charge number Q and the number of possible permutations, defining color states. The upper row uses a concise notation that specifies elementary particles by means of their charge number Q and their generation number | u c t | . The blue symbols designate elementary particles that are not yet known, but can be expected. We indicate only particle states, though there are always antiparticle states.

Table 1. Extended classification of spin 1/2 fermions.

Since charged leptons and antileptons of different generations were designated by the symbols e±, µ± and τ±, neutrinos got corresponding indexes. Up and down (u,d) quarks were associated with charmed and strange (c,s) or top and bottom (t,b) quarks. We expect thus also three possible generations of narks. They are designated by indexes as for neutrinos. Only the colorless no nark has no added index.

Table 2 accounts for spin-1 bosons. Photons are quanta of EM fields. W and Z bosons account for weak interactions, coupling quarks and narks to leptons. Colored and colorless gluons mediate strong interactions between quarks or narks. We add some types of spin-1 bosons that have not yet been produced and identified, but have also to be expected.

Table 3 provides experimentally determined rest-energies for different types of fermions and bosons. The SM did not account for different rest-masses, but their values can be strongly increased for higher generations of some types of elementary particles. The possible existence of “heavy photons” has not yet been excluded, but it is more probable that their rest-mass mo = 0. The rest-energy of gluons is very small [24], even for different generations. This allows for transformations by resonance phenomena, but the recently determined lower limit for the rest-energy of $W_\tau^\pm$ bosons [25] is enormous.

Gravitons are quanta of gravitational fields. Since they are defined by the space-time metric, which yields a tensor, gravitons are spin-2 bosons, but their rest-energy is zero, as for photons. Higgs bosons are quanta of scalar fields. Their existence was predicted in 1964 by Higgs [26], Brout and Englert [27], Guralnik, Hagen and Kibble [28]. These bosons account for the rest-energies of elementary particles and are therefore so important that the next section presents at least the basic idea.

3.2. Spontaneous Symmetry Breaking

It is known in solid state physics that permanent magnets require the existence of an average magnetic field. It results from the orientation of neighboring magnetic dipoles. Collective oscillations of electrons with respect to positive charges are due to the resulting electric field. These facts suggested the existence

Table 2. Extended classification of spin 1 bosons.

Table 3. Measured values of rest-energies.

of a scalar Higgs field, defined by its amplitude X, phase factor θ and potential energy V ( X ) :

$\phi = X e^{i\theta}$ while $V(X) = \mu^2 X^2 + \lambda X^4$

If µ2 were positive, this would simply be the potential of a harmonic oscillator, perturbed by nonlinear effects for greater amplitudes of oscillation. For Higgs bosons, V(X) displays a minimum when

$X^2 = -\dfrac{\mu^2}{2\lambda} = v^2$ since $\mu^2 < 0$ and $\lambda > 0$.

The Higgs field will thus tend to be in this ground state, but allows also for excited states when

$X = v + h$ and $V(h) = V_o + 4\lambda v^2 h^2 + \ldots$

Since this is the potential energy of a harmonic oscillator, it accounts for quantized excitations and the rest-mass of Higgs bosons. In a similar way, it is possible to determine the masses of W± and Z bosons, when they interact with the Higgs field by means of specific coupling constants. The existence of Higgs bosons was experimentally confirmed in 2012 at CERN. Higgs and Englert thus received in 2013 the Nobel Prize “for the theoretical discovery of a mechanism that contributes to our understanding of the origin of mass of subatomic particles”. Brout had died in 2011, and the article of Higgs was published first. Since Higgs bosons could be produced by mutual annihilation of $t\bar{t}$ pairs, they are characterized by [0, 0, 0, 0], but their rest-energy is 124 GeV.
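
The expansion used above can be verified symbolically. The following sketch (with sympy) inserts µ² = −2λv², so that the minimum lies at X² = v², and expands V(v + h); the quadratic term 4λv²h² is the harmonic-oscillator potential referred to in the text.

```python
import sympy as sp

# Expand the Higgs potential V(X) = mu2*X**2 + lam*X**4 around its minimum,
# with mu2 = -2*lam*v**2 and X = v + h.
lam, v, h = sp.symbols('lam v h', positive=True)
mu2 = -2 * lam * v**2
V = sp.expand(mu2 * (v + h)**2 + lam * (v + h)**4)
print(V)   # contains the terms -lam*v**4 + 4*lam*v**2*h**2 + 4*lam*v*h**3 + lam*h**4
```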

3.3. The Hypothesis of Sub-Elementary Particles

Since atoms, molecules, nuclei and nucleons are composed of smaller entities, Harari assumed in 1979 that this could also be the case for quarks. His terminology was derived from the Torah [29], where the creation of the Universe was presented as being progressive (Gen. 1:2). The first step led to the existence of Tohu and Vohu, interpreted as being something that is formless, besides nothingness. Actually, Harari assumed the existence of two types of primeval particles, designated by the letters T and V. However, they are spin 1/2 fermions with 3 possible “hypercolors” and thus very similar to u and d quarks. Only their charges are different: +1/3 for T and 0 for V. Since $\bar{T}$ antiparticles would carry the charge −1/3, it was possible to propose a new classification of known elementary particles of the first generation. A similar model was proposed by Shupe [30].

Robson generalized this model in 2012, to account for the existence of 3 generations [31]. It was sufficient to add one particle, called U. He compared the progress to replacing Ptolemaic epicycles by the heliocentric model [32]. However, when Harari presented his “search of the ultimate constituents of matter” in 1983, he insisted on the fact that the proposed model is only a conjecture [33]. He added that the correct theory could emerge from “some totally new idea. In the words of Niels Bohr, it may be that our present ideas are not sufficiently crazy to be correct.” This applies quite well to STQ, since space-time was firmly believed to be continuous and it is very difficult to modify deeply rooted convictions. However, new and more detailed experimental results could impose it.

4. Anomalies and Puzzles

4.1. The B Meson Decay

Figure 7 shows that the decay of B into K mesons implies the transformation $b \to s\, l^+ l^-$, where the lepton l = e, µ or τ. The process raises a problem, since the change of generation number requires a [0, 1] boson, but the creation of the lepton-antilepton pairs needs [0, 0] bosons. The problem is even aggravated by the fact that the SM predicts “lepton-flavor universality”. This means that the three types of lepton-antilepton pairs should be created with equal probabilities, but the measured values are different. This appeared in 2018 and was confirmed in 2020, after an “intense study” concerning different methods to determine probability ratios. The difference is small, but well established. This result was presented as evidence for CP symmetry breaking [34].

Figure 7. The decay of B mesons leads to a problem.

Figure 8. Proposed solution of the problem of B meson decays.

STQ offers an explanation, presented in Figure 8. It follows from Table 2 that the [0, 1] boson could be a Zµ or g boson, allowing for [0, 1] = (0, 1) + [0, 0]. The (0, 1) fermion could be a νµ neutrino or an n nark. The required [0, 0] boson would then produce different types of lepton-antilepton pairs, but νµ neutrinos and n narks have different rest-energies. The global energy and momentum conservation laws will thus yield different mixing ratios and probabilities for the observed decays. Perhaps it is even possible to estimate the rest-energy of the n nark by analyzing available or future data at CERN.

4.2. Direct Detection of Galactic DM Particles

Evidence of the existence of DM particles resulted from astrophysical observations. They were initially thought to be heavy neutrinos [35]. Since that was not confirmed, they were called weakly interacting massive particles (WIMPs). This terminology was fuzzy and suggested relatively great rest-masses, though their possible values are unknown. Searching for astrophysical data, we became aware of direct detection of galactic DM particles by means of thallium-activated NaI scintillators [36]. This method had been adopted by the DAMA (dark matter) collaboration in Italy, performing measurements at about 1400 m below the surface of the Gran Sasso Mountain.

Their experiment led already in 1996 to very remarkable results, since the detection of galactic DM particles was confirmed by its annual modulation. It results from the fact that the Earth is orbiting around the Sun, which is moving around the galactic center. The velocity of the Earth with respect to the galactic DM halo thus varies by about ±30 km/s around 230 km/s. The flux of intercepted DM particles should even vary like a cosine function with a period of one year. Moreover, the probability of detection should be maximal at about June 2 and minimal at about December 2. The detected modulation satisfied both conditions during 7 cycles. Passage of DM particles through any one of the 25 adjacent cylindrical scintillators was detected by means of two photomultiplier tubes (PMTs), situated at opposite sides. They had to respond in coincidence, but in anticoincidence with neighboring ones to reduce the chance of spurious signals.
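
The expected signature can be sketched as follows (Python, with an arbitrary, hypothetical mean rate and modulation amplitude): a cosine with a one-year period, maximal around June 2 and minimal around December 2.

```python
import numpy as np

# Expected annual modulation of the DM detection rate (arbitrary units).
mean_rate = 1.0        # hypothetical average rate
amplitude = 0.02       # hypothetical modulation amplitude
t = np.arange(0, 365)  # day of the year
t_max = 153            # day 153 is about June 2

rate = mean_rate + amplitude * np.cos(2 * np.pi * (t - t_max) / 365.25)
print(int(t[np.argmax(rate)]), int(t[np.argmin(rate)]))   # ~ day 153 (June 2) and ~ day 336 (Dec 2)
```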

In 2008, the DAMA/LIBRA team published new results, obtained by significant improvements of their equipment [37]. The initial measurements had been performed with 87.3 kg of NaI:Tl crystals. They were replaced by new ones, increasing the total mass of scintillating detectors to about 266% of its initial value. The PMTs had also been improved, but the annual modulation remained coherent with the previously detected ones. Together, the 12 cycles raised the statistical confidence level to about 9 σ. Nevertheless, other teams continued to doubt what they called a “claim”, since they had performed similar measurements, without clear evidence of annual modulations. This is also an anomaly, calling for an explanation.

The basic problem resulted from the fact that it was generally believed that WIMPs should be detected by producing nuclear recoils. Since iodine nuclei contain 53 protons and 74 neutrons, while sodium nuclei contain only 11 protons and 12 neutrons, these atoms do not react in the same way [38]. The calculated chance to detect DM particles would be maximal for iodine if WIMPs had a rest-energy of about 70 GeV, and for sodium at about 10 GeV. Possible interactions with electrons were not considered, though the DAMA/LIBRA collaboration had found that the intercepted DM particles acted on NaI:Tl scintillators like gamma rays of about 2 keV. They create energetic electrons, exciting electrons from the valence band to the conduction band of NaI crystals. Many excited electrons are then trapped in Tl luminescence centers and produce a pulse of photons. Since the efficiency of DM detection was found to increase for lower excitation energies, it was decided to reduce the threshold.

New results [39], published in 2019, were obtained by means of improved electronics, highly radio-pure NaI:Tl crystals and better PMTs. However, the same annual variations were found, covering now 14 cycles. The confidence level was raised to 12.9 σ. After publication of these results, it was recognized that the amplitude of the annual modulation might be detectable, but the belief in nuclear recoils was not abandoned [40]. It was merely stated that the COSINE-100 experiment, installed in an underground laboratory in South Korea, should “allow for a powerful test of the WIMP dark matter hypothesis.” This experiment also uses NaI:Tl crystals. Their total mass is only 106 kg, but they are protected by a surrounding liquid scintillator and anticoincidence. First results, obtained after 1.7 years of data collection, did not exclude an annual modulation [41]. Moreover, it was announced that the threshold of detection would be lowered to improve on the DAMA measurements.

These facts raise an experimental and a theoretical problem. Since direct detection of galactic DM particles by means of scintillators leads to stronger effects for low excitation energies of electrons, NaI:Tl is favored with respect to CsI:Tl scintillators [35]. The width of the forbidden band is indeed 5.8 eV for NaI and 6.3 eV for CsI. We recommend therefore to use LuI3:Ce scintillators. They have “surprisingly good characteristics” for gamma and X-ray detection, a rapid initial decay and a spectrum of the emitted light that is favorable for usual PMTs. Their bandgap is merely 4.5 eV [42].

The theoretical problem concerns the mechanism of DM detection by means of scintillators. Is it due to excitation of electrons or to nuclear recoils? STQ can help in this regard, since Figure 9 shows that narks of the first generation can

Figure 9. Two possible interactions of narks.

interact with electrons by exchanging Z bosons, but also with quarks inside nucleons by exchanging $g_o$ gluons. They have to be colorless, since the constant color changes of u and d quarks inside nucleons cannot be perturbed by interacting also with colored gluons of external origin. Nuclear recoils are thus not excluded, but electronic excitations are sufficient for direct detection of DM. Moreover, the required colorless narks $n_o$ are present in galactic DM [7].

Another system for direct detection of DM particles is also installed in the Gran Sasso Mountain. It consists of 3.2 tons of very pure, super-cooled xenon, acting as a liquid scintillator. DM particles produce there light flashes that are immediately detected by high-quality PMTs at the bottom of the cylindrical vessel. Other PMTs, installed above the liquid, detect liberated electrons or ions that are pushed upwards by an electric field. They can thus discriminate between nuclear recoils and electronic excitations. The big liquid scintillator is surrounded by ultrapure water to reduce spurious signals. This system has been operational since 2016 and first results, presented in 2020, indicated an 18% excess of electron detection [43]. Direct detection of galactic DM particles is so important that a Large Underground Xenon (LUX) system has been installed in a South Dakota mine (USA). It confirmed the presence of electronic signals [44]. A 6 ton xenon scintillator (PandaX) in China [45] and a 7 ton one (LUX-ZEPLIN) in the USA [46] will soon become operational.

4.3. Unexpected Decays of Nuclear Excited States

Krasznahorkay and his team in Hungary analyzed the decay of an excited state of beryllium nuclei, resulting from 7Li(p,γ)8Be reactions. It was known that this leads to γ emission with internal pair creation, but angular correlation measurements revealed that the electron-positron pair can form a greater angle. The [0, 1] boson, which is required in Figure 10, is electrically neutral and its rest-energy could be determined by energy and momentum conservation. It was found to be close to 17 MeV. The mysterious boson was thus called “X17” and the results were published in 2015 [47]. They led to debates and skepticism.

However, Krasznahorkay and his collaborators could confirm the reality of this boson by means of 3H(p,γ)4He reactions [48]. The excited alpha particle produces an electron-positron pair, but angular correlation measurements yielded another peak. Dynamical analysis proved that the required [0, 0] boson is identical to the X17 boson. A review article [49] examined several hypothetical extensions of the SM, but it is difficult to guess what the mysterious X17 boson

Figure 10. Two possible nuclear processes.

might be without any theoretical guide.

According to STQ, the [0, 0] boson could be a Z or a $g_o$ particle. They can create an undetected νe neutrino or no nark, accompanied by a photon, which creates the observed electron-positron pair. The global energy-momentum balance would then be modified, of course. It was not expected that DM particles might even be involved in nuclear physics at relatively low energies. If a nark really does appear, it can perhaps be detected by means of LuI3:Ce scintillators.

4.4. Dark Matter Signals Emerging from the Earth

The IceCube Neutrino Observatory was installed by the NSF in Antarctica and has been operating since 2008. This detector consists of strings of 60 digital optical modules each, deployed in 1 km3 of transparent ice. The depth ranges from 1.4 to 2.4 km. This observatory detects energetic ντ neutrinos, coming from very far away. They might help to solve problems concerning the Big Bang. Actually,

$\nu_\tau \to \tau^- W^+$ and $W^+ \to e^+ \nu_e$

The τ lepton produces Cherenkov radiation in the crystal-clear ice and is detected by the PMTs of IceCube. Since cosmic ντ neutrinos cannot traverse dense matter inside the Earth, they have to arrive at grazing incidence or from above. To determine the ratio, it was decided to detect them by means of an array of microwave antennas. They are carried by a balloon, meandering during several weeks around the South Pole at an altitude of about 37 km [50]. When cosmic neutrinos are detected by this Antarctic Impulsive Transient Antenna (ANITA) and then inside IceCube, they come from above.

However, data collected during the first 3 years [51] revealed that some ντ neutrinos did come from below [52]. This should not be possible and was very puzzling. Eventually the idea emerged that DM particles might be involved. STQ is more specific, since $n_{o\tau} \to g_o\, \nu_\tau$. The detected ντ neutrinos could thus be created inside the Earth, but only there and not in other astronomical objects, because of the very short lifetime (4 × 10⁻¹³ s) of ντ neutrinos.

4.5. The Anomalous Magnetic Moment of Muons

The gyromagnetic ratio of electrons is $g_e = 2(1 + \alpha_e)$, where the small correction αe results from virtual photons. This could be proven by quantum electrodynamics, but the measured value $g_\mu = 2(1 + \alpha_\mu)$ for muons was too great to be attributed only to virtual photons. This was established in 2004 by the E821 experiment at the Brookhaven National Laboratory [53]. It requires that the boson in Figure 11 is not merely a virtual photon.
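
For the sake of scale, the leading QED contribution of virtual photons to αe is Schwinger’s well-known α/2π term; the snippet below simply evaluates it (this is standard QED, not a result of the present paper):

```python
import math

# Leading QED (virtual photon) contribution to alpha_e: Schwinger's alpha/(2*pi).
alpha = 1 / 137.035999     # fine-structure constant
a_schwinger = alpha / (2 * math.pi)
print(a_schwinger, 2 * (1 + a_schwinger))   # ~0.0011614, so g ~ 2.00232
```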

Figure 11. Contributions to the muon magnetic moment.

Comprehensive calculations have recently been performed by the Muon g-2 Theory Initiative [54]. The result provides the best possible theoretical evaluation of the muon magnetic moment, based on the SM in its present form. However, high precision measurements, recently performed at Fermilab [55], confirm the existence of a discrepancy. The difference is small, but established with a statistical significance of 4.2 σ. This fact increases the so-called “tension” between measurements and SM predictions. The precision attained now, on the experimental as well as on the theoretical side, is very advantageous for the search of new physics. According to STQ, the boson in Figure 11 could be a photon, but also a Z or Zµ boson and a $g_o$ or $g_{o\mu}$ gluon. They can even produce virtual nark-antinark pairs, which were not considered.

4.6. Anomalous D⁰ Meson Decays

Figure 12 shows that D⁰ mesons have two possible decay channels: $D^0 \to \pi^+\pi^-$ and $D^0 \to K^+K^-$. This means that $c\bar{u} \to u\bar{d} + d\bar{u}$ or $c\bar{u} \to u\bar{s} + s\bar{u}$. In both cases, the c quark is converted into a u quark, but experimental results and their analysis, published in 2012, revealed an unexpected difference between the two channels: the asymmetry between D⁰ and $\bar{D}^0$ decays is not the same for π pairs and K pairs [56]. This was confirmed in 2019 and suggested that CP symmetry is broken [57]. STQ implies that $(Q,1) = (Q,0) + [0,1]$, where the [0, 1] boson allows for $Z_\mu \to \bar{\nu}_\mu \nu_e$ or $g^o_\mu \to \bar{n}^o_\mu n^o$. These processes allow for a mixture of two possibilities, but the rest-energies of neutrinos and narks are different. The energy-momentum balance and the relative probabilities would then be modified, of course.
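The quantity measured in [56] and [57] is the difference of CP asymmetries between the two channels. A minimal bookkeeping sketch, with purely hypothetical event counts (the real analysis applies careful corrections for production and detection asymmetries), is:

```python
def cp_asymmetry(n_d0, n_d0bar):
    """Raw asymmetry A = (N(D0 -> f) - N(D0bar -> fbar)) / (N + Nbar)."""
    return (n_d0 - n_d0bar) / (n_d0 + n_d0bar)

# Hypothetical event counts, for illustration only.
a_kk = cp_asymmetry(n_d0=1_000_000, n_d0bar=1_001_600)  # D0 -> K+ K-
a_pipi = cp_asymmetry(n_d0=400_000, n_d0bar=399_700)    # D0 -> pi+ pi-

delta_a_cp = a_kk - a_pipi
print(f"A_CP(K+K-)   = {a_kk:+.5f}")
print(f"A_CP(pi+pi-) = {a_pipi:+.5f}")
print(f"Delta A_CP   = {delta_a_cp:+.5f}")
```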

Figure 13 shows two other examples of apparent CP violation. These processes involve Z bosons or colorless gluons of different generations. It is again necessary to include narks, but the SM ignores their existence. Perhaps it is even possible to estimate their rest-mass by further analysis of data already collected at CERN or by continued measurements. This prospect deserves attention; it would constitute a breakthrough.

4.7. The Matter-Antimatter Asymmetry

The Big Bang produced pairs of matter and antimatter particles, but all these particles should have annihilated one another. There would then merely remain radiation. Obviously, that is not true. Only about one particle of matter survived for every billion particle-antiparticle pairs [58]. Sakharov stated in 1967 that the prevalence of matter

Figure 12. Two different decay modes of D⁰ mesons.

Figure 13. Two decay modes of B mesons.

over antimatter in our Universe could be attributed to the disruption of the initial thermal equilibrium and/or to a violation of CP symmetry [59]. Nevertheless, this asymmetry remains unexplained and even constitutes a major mystery for cosmology and elementary particle physics. Processes like

$$\bar{u}\,\bar{d} \to e^- u \qquad \text{or} \qquad e^+\bar{u} \to u\,d$$

are possible, but not sufficient. All conservation laws are indeed symmetric, even for decreasing generation numbers. Another process thus had to perturb this reversibility. It has to be related somehow to time inversion, since cosmic evolution implies an “arrow of time”. However, we tend to believe that, on average, everything was always as it is now. This happened even to Albert Einstein, who created the theory of general relativity (GR).
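As reconstructed above (the antiquark bars in the second process are inferred from charge conservation), both reactions conserve electric charge and B − L while changing baryon and lepton number by one unit, and each is the conjugate of the other. A small bookkeeping sketch makes this explicit:

```python
from fractions import Fraction as F

# Quantum numbers (electric charge Q, baryon number B, lepton number L).
QNUM = {
    "u":    (F(2, 3),  F(1, 3),  0), "ubar": (F(-2, 3), F(-1, 3),  0),
    "d":    (F(-1, 3), F(1, 3),  0), "dbar": (F(1, 3),  F(-1, 3),  0),
    "e-":   (F(-1),    F(0),     1), "e+":   (F(1),     F(0),     -1),
}

def totals(particles):
    q = sum(QNUM[p][0] for p in particles)
    b = sum(QNUM[p][1] for p in particles)
    l = sum(QNUM[p][2] for p in particles)
    return q, b, l

processes = [
    (["ubar", "dbar"], ["e-", "u"]),  # first displayed process
    (["e+", "ubar"],   ["u", "d"]),   # its conjugate, as reconstructed
]

for initial, final in processes:
    qi, bi, li = totals(initial)
    qf, bf, lf = totals(final)
    print(" + ".join(initial), "->", " + ".join(final))
    print("  dQ =", qf - qi, " dB =", bf - bi,
          " dL =", lf - li, " d(B-L) =", (bf - lf) - (bi - li))
```

Since a process and its conjugate proceed symmetrically as long as CP is respected, such reactions alone cannot generate a net matter excess; this is the sense in which they are “not sufficient”.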

He had realized that the effects of gravitational forces are identical to those of accelerated reference frames. This allowed him to develop a radically new theory of gravity, where Newton’s concept of a force acting at a distance was replaced by a field that transmits this action. This field was defined by the metric of space and time. Though space and time were assumed to be continuous, it was necessary to define the square ds² of small space-time intervals in terms of possible results of measurements. The resulting theory of GR related this metric to local mass distributions. Usual graphical representations of this fact can suggest that it results from a property of the fabric of space and time, but it is only due to a restriction that Nature imposes on space-time measurements in the presence of masses. This does not require that the quantum of length a ceases to be a universal constant, since space-time measurements allow for a curvature of geodesics by a juxtaposition of more quanta of length than in flat space.

Einstein applied this theory to the whole Universe. Since it contains huge quantities of particles that have masses and attract one another, the Universe would necessarily collapse, unless gravitational forces are opposed by repelling ones. Einstein thus introduced a cosmological constant Λ that opposes the effects of Newton’s gravitational constant G, but he assumed that the value of Λ is precisely tuned to ensure the stability of our Universe. Georges Lemaître realized that this hypothesis contradicts the fact that complex physical systems are subjected to irreversible changes. Molecules of a perfume, for instance, get more and more dispersed in air because of random collisions, and these collisions cannot be expected to bring the molecules back to their initial, concentrated state. This is true for any physical system where the number N of possible states is very large, and it led to characterizing the state of complex systems by means of the entropy S = log N. Statistically, dS ≥ 0. Even if the number of particles in our Universe were constant, they would tend to occupy the greatest possible volume.

The Universe should thus be expanding, and this process had to begin a finite time ago. In 1927, Lemaître developed a theory where the values of G and Λ allowed for three different periods of expansion [60]. There had to be an initial velocity, which decreased because of gravity, until the increasing effects of Λ became sufficiently strong: expansion then remained slow during a limited time, but was followed by constantly accelerated expansion. Lemaître knew about astronomical measurements that allowed him to evaluate the present rate of expansion. It was close to Hubble’s law, published two years later.

Hubble had very carefully measured the redshift of receding galaxies in our neighborhood, but interpreted it as a Doppler effect without explaining its cause. Lemaître related it to the expansion of space, resulting from his generalization of Einstein’s theory of gravity. The concept of an initial “Big Bang” seemed unbelievable and was ridiculed, but it was confirmed in 1965 by the discovery of the cosmic microwave background. An accelerated expansion of our Universe was even more unbelievable, but it was established in 1998 by supernova observations. Lemaître’s scientific achievements have been described by the cosmologist Jean-Pierre Luminet [61].

The matter-antimatter asymmetry in our present Universe has to result from the cosmological arrow of time, but it also requires nonlinear processes. This fact can be illustrated by the phenomenon of ball lightning.

Sakharov noted that the very luminous plasma ball created by nuclear explosions is rapidly extinguished and wondered why the lifetime of ball lightning is much longer. This phenomenon requires a special mechanism. It results from the fact that ions and free electrons are confined in a spherical membrane. This allows, indeed, for radial oscillations of the electrons with respect to the heavier ions, and thus for an alternating attraction of electrons and ions that are present in the ambient medium [62]. Losses of charged particles by recombination and energy losses by light emission are thereby compensated. Ball lightning is even spontaneously attracted towards higher densities of charged particles in the ambient medium. It can thus move around to “feed” itself, like a living organism. However, the essential point is that the variation of the density of charged particles inside the membrane is governed by nonlinear equations. They account for irreversibility and the existence of an arrow of time. The lifetime is much longer than for random processes, but limited. The luminous ball will eventually disappear by silent extinction or by an explosion that can even be very violent. This depends merely on the ionization density of the ambient medium.

The matter-antimatter asymmetry in our Universe is not simply due to the arrow of time, defined by its expansion. It requires also nonlinear processes. It is highly probable that they occurred only during the initial, extremely rapid inflation period, which implied a sudden and gigantic increase of the number of particles and antiparticles. This initiated not only the cosmic expansion, but also led to an enormous density of particles and antiparticles. The usual conservation laws were modified, since basic transformation processes involved more than three particles. There were trident processes, for instance. The detailed mechanism has still to be elucidated, by identifying the relevant multi-particle processes. How could reaction kinetics, involving nonlinear processes, favor the survival of more particles than antiparticles? This problem remains unsolved, but its reformulation does already reduce the mystery.

4.8. Quantum Gravity

Since the theories of GR and QM are both valid, it should be possible to combine them. Lemaître already recognized that the expansion of our Universe had to begin with a quantum effect. Since the lowest possible entropy (S = 0) corresponds to N = 1, our Universe initially had to be a single particle. Lemaître called it the “primeval atom”. This word may seem inappropriate, because of historical connotations, but Lemaître meant only that quantum effects had to be involved [63]. Since the theory of STQ led to the concept of a highest possible energy hc/2a, we can attribute it to a single photon, confined in the smallest possible sphere of radius a.

Trying to represent physical processes as simply as possible, we considered that our expanding Universe is (on average) a hypersphere of radius R in a four-dimensional space [64]. Since the surface of this hypersphere constitutes the familiar 3-dimensional space and since R is increasing, this space is expanding. It did not start, however, as a single point. We do not know why this photon existed and led to the Big Bang, but we know that only about 5% of the total energy content of our Universe is due to ordinary matter and antimatter. About 25% corresponds to DM particles and about 70% to dark energy (DE). Its nature is unknown, but it has to be the energy that is driving the accelerated cosmic expansion. We attributed it to a transformation of DM particles [61]. These processes would thus also contribute to the arrow of time at cosmic scales.

Quantum gravity is still relevant today, since it accounts for the occasional creation of gravitational waves. Einstein could predict their existence in the sense of propagating ripples of the space-time metric, but their detection required sophisticated laser interferometry. It was announced in 2016 that it succeeded and that the emission of gravitational waves resulted from the coalescence of binary systems [65]. The theoretical treatment in the framework of the conventional theory of GR raises difficult problems [66] [67], but they should not obscure our view of the underlying quantum-mechanical processes. This can be shown in a simple way, by transposing Bohr’s semi-classical model. Two equal masses M that attract one another by Newton’s gravitational force are orbiting around their common mass center. For a circular trajectory of radius r, dynamical equilibrium and the quantization rule require that

$$\frac{GM^2}{(2r)^2} = \frac{MV^2}{r} \qquad \text{and} \qquad 2\pi r = n\lambda = \frac{nh}{MV}$$

The orbital velocity V can be so great that it is necessary to account for Einstein’s relation (1). The total energy of this system also includes the potential energy:

$$E = 2E_o\left(1+\beta^2\right)^{1/2} + U \qquad \text{where} \qquad U = -\frac{GM^2}{2r} = -2\beta^2 E_o$$

$$\text{and} \qquad \beta = \frac{V}{c} = \frac{\Gamma}{n} \qquad \text{when} \qquad \Gamma = \frac{GM^2}{4\hbar c}$$

The energy E is progressively reduced, because of successive quantum-mechanical transitions, where the quantum number n is decreased by one unit. The energy of the resulting gravitons is then

$$h\nu = E_o\left(\beta_{n-1}^2 - \beta_n^2\right) + E_o\left(\beta_{n-1}^4 - \beta_n^4\right)/4 + \cdots$$

$$\text{Since} \quad \beta_{n-1}^2 \pm \beta_n^2 = \frac{\Gamma^2}{n^2}\left(1 + 2x^2 + 3x^4 + \cdots \pm 1\right) \quad \text{where} \quad x = \frac{1}{n} \to 0,$$

$$h\nu = E_o\left[\Gamma^2 x^2\left(2x^2 + 3x^4 + \cdots\right) + 4\Gamma^4 x^6 + \cdots\right]$$

This model yields only an approximation, but shows that the frequency increases when n decreases and produces the typical “chirp”.
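A toy numerical check of this behaviour can be obtained with the exact relativistic expression rather than the series expansion. The dimensionless value of Γ used below is purely illustrative (for astrophysical masses Γ, and hence n, would be enormously larger); the sketch only shows that the emitted quanta become more energetic as n decreases.

```python
import math

GAMMA = 1.0e3   # illustrative dimensionless value of Gamma = G*M^2/(4*hbar*c)

def energy_over_Eo(n):
    """Total energy of the binary in units of E_o = M*c^2:
    E/E_o = 2*sqrt(1 + beta^2) - 2*beta^2, with beta = Gamma/n."""
    beta = GAMMA / n
    return 2.0 * math.sqrt(1.0 + beta**2) - 2.0 * beta**2

# Graviton energies h*nu = E(n) - E(n-1) for decreasing quantum number n.
previous = None
for n in (10000, 8000, 6000, 4000, 2000):
    h_nu = energy_over_Eo(n) - energy_over_Eo(n - 1)
    print(f"n = {n:6d}: h*nu / E_o = {h_nu:.3e}")
    if previous is not None:
        assert h_nu > previous  # the emitted frequency keeps increasing: the "chirp"
    previous = h_nu
```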

5. Summary and Conclusions

At the outset, we wanted only to find out whether there could exist an ultimate limit for the smallest measurable distance. Instead of assuming that space-time has a crystal-like lattice structure, we required that the value a of this quantum of length has to be a universal constant for all inertial frames, like c and h. It was tempting, of course, to assume that a is a combination of already known universal constants. It is even customary to consider c, h and G. They yield the Planck length $l_P = 1.616 \times 10^{-35}$ m, but this choice would arbitrarily favor a particular type of interaction. We thus left the value of a undetermined. We then had to construct a theory of STQ, to see if it leads to logical inconsistencies when a is finite. It turned out that we have to change some familiar ideas, as already happened for SR and QM, but continuum theories can be generalized. When a ≠ 0, the highest possible energy of individual particles would be hc/2a. That is acceptable, since the Lorentz transformation merely has to be generalized to account for the invariance of c, h and a.
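For orientation only, the sketch below evaluates the Planck length from c, h and G and the bound hc/2a one would obtain if a were set equal to it. This choice of a is a hypothetical illustration, not an assumption of the theory, which deliberately leaves the value of a undetermined.

```python
import math

H = 6.62607015e-34    # Planck constant, J s
HBAR = H / (2.0 * math.pi)
C = 2.99792458e8      # speed of light, m/s
G = 6.67430e-11       # Newton constant, m^3 kg^-1 s^-2
EV = 1.602176634e-19  # J per eV

l_planck = math.sqrt(HBAR * G / C**3)
print(f"Planck length l_P = {l_planck:.3e} m")   # about 1.616e-35 m

# Hypothetical choice a = l_P, used here only to give a number;
# the text explicitly leaves the value of a undetermined.
e_max = H * C / (2.0 * l_planck)
print(f"hc/(2 l_P) = {e_max:.3e} J = {e_max / EV / 1e9:.2e} GeV")
```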

The foundations of physics could thus be enlarged by using three pillars instead of two, but the logical consistency of STQ did not yet prove that the length a is finite in our Universe. We thus had to confront STQ with reality, by considering the results of unexplained measurements. This applied to elementary particle physics, which has proven the existence of quantum numbers without elucidating their physical origin and actual meaning. Since elementary particles are single points, we thought that these quantum numbers could describe properties of their wave functions at extremely small scales in space and time. These functions or fields can only be defined, indeed, for those points where elementary particles could be localized by means of ideally precise measurements. However, we were surprised when we discovered that the existence of a finite quantum of length implies that there are two intercalated lattices for every space-time coordinate in any arbitrarily chosen inertial frame.

This was the key that opened the black box of elementary particle physics. An enormous effort, requiring sophisticated instruments and great theoretical perspicacity, had revealed that there is something in this box. It behaves in a remarkable way, but the content remained hidden. Suddenly, it seemed to become mentally transparent, since it was apparently only necessary to modify some ideas to see what is in this box. Compared to complicated and rather speculative attempts to understand the messages of Nature that we were able to receive, STQ is quite simple. It is in conformity with Occam’s razor, requiring parsimony. It accounts also for elementary DM particles. This kind of matter was known to exist, but escaped closer scrutiny. Even the puzzle of the recently discovered “anomalies”, which could not be explained by the SM in its present form, can now be solved.

Further theoretical and experimental investigations are necessary, especially to determine the values of coupling constants that are required to perform calculations. Their results will also have to be tested by other measurements, but this interlacing of two complementary types of investigations is essential for scientific progress.

Basically, we are realizing that physics is not directly concerned with reality, but with possible knowledge about reality. That is different, since this knowledge results from measurements that can be subjected to universal restrictions. Though this fact was already indicated by the development of SR and QM, we were not yet sufficiently aware of its fundamental importance. It even appears that these restrictions always involve space and time. In SR and QM, they concerned combinations of positions, velocities, energies and masses, while STQ shows that restrictions are imposed even on measurements of space-time coordinates alone. This is confirmed by elementary particle physics, astrophysics and Big Bang processes.

Acknowledgements

The author wishes to express his gratitude to his teacher Georges Lemaître. He dared to look beyond frontiers and to construct something new with rigor and steadiness. He loved to laugh, but abhorred arbitrariness. With regard to Fred Hoyle, who attacked him in order to promote the concept of continuous creation, he merely said that imagination has to be controlled.

We also want to thank Werner Heisenberg. We met him in 1973 in Brussels [68] and he kindly agreed to evaluate all the articles that we had published concerning the logical possibility of STQ. He was known to be critical, but he was aware of the fact that Nature can impose restrictions on measurements. He had himself even considered the possible existence of an “elementary length”. His approach was different, but he answered after some time without raising any objection. On the contrary, he advised us to search for contact with reality. At that time, we had already started to examine elementary particle physics, and we remain very grateful for his encouraging advice.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Hobbs, B. (2017) The Standard Model of Particle Physics Is Brilliant and Completely Flawed. ABC.
https://www.abc.net.au/news/science/2017-07-15/the-standard-model-of-particle-physics-explained/7670338#
[2] Kuhn, T.S. (1962) The Structure of Scientific Revolutions. 2nd Edition, University Chicago Press, Chicago.
https://folk.ntnu.no/krill/bioko-references/Kuhn%201962.pdf
[3] Voigt, W. (1887) Über das Doppler’sche Prinzip. Nachr. Königl. Gesellschaft Göttingen.
https://en.wikisource.org/wiki/Translation:On_the_Principle_of_Doppler
[4] Lorentz, H.A. (1904) Proceedings of the Academy of Sciences Amsterdam, 6, 809-831.
[5] Meessen, A. (1978) Foundations of Physics, 8, 399-415.
http://www.meessen.net/AMeessen/STQ1978.pdf
https://doi.org/10.1007/BF00708571
[6] Meessen, A. (1999) Foundations of Physics, 29, 281-316.
http://www.meessen.net/AMeessen/STQ/STQ.pdf
https://doi.org/10.1023/A:1018829823687
[7] Meessen, A. (2011) Journal of Modern Physics, 8, 35-56.
http://file.scirp.org/pdf/JMP_2017011015364095.pdf
https://doi.org/10.4236/jmp.2017.81004
[8] Weisskopf, V.F. (1968) Scientific American, 218, 15-29.
https://doi.org/10.1038/scientificamerican0568-15
[9] Dirac, P.A.M. (1928) Proceedings of the Royal Society A, Part I, 117, 610-624.
https://royalsocietypublishing.org/doi/pdf/10.1098/rspa.1928.0023
https://doi.org/10.1098/rspa.1928.0023
[10] Dirac, P.A.M. (1933).
https://www.nobelprize.org/uploads/2018/06/dirac-lecture.pdf
[11] Majorana, E. (1932) Nuovo Cimento, 9, 335-344.
https://doi.org/10.1007/BF02959557
[12] Majorana, E. (1937) Translation of Il Nuovo Cimento, 14, 171-184.
http://www.physics.umanitoba.ca/u/tapash/Majorana_1937.pdf
https://doi.org/10.1007/BF02961314
[13] Feynman, R. (1988) The Reason for Antiparticles, Dirac Memorial Lecture. Cambridge University Press, Cambridge.
https://doi.org/10.1017/CBO9781107590076.002
https://www.youtube.com/watch?v=MDZaM-Bi-kI
http://www.nucleares.unam.mx/~alberto/apuntes/feynman.pdf
[14] Feynman, R. (1965) Nobel Prize Lecture.
https://www.nobelprize.org/prizes/physics/1965/feynman/lecture
[15] Feynman, R. (1964) The Character of Physical Law, Part 7. Penguin Press Science.
https://www.youtube.com/watch?v=-2NnquxdWFk
[16] Huang, K. (2013) A Critical History of Renormalization.
https://arxiv.org/ftp/arxiv/papers/1310/1310.5533.pdf
[17] Wu, C.S., et al. (1957) Physical Review, 105, 1413-1415.
https://doi.org/10.1103/PhysRev.105.1413
[18] Wu, C.S. (2008) Lecture Notes in Physics, 746, 43-69.
https://inspirehep.net/files/379628938c84966afd7bf0849baefb09
[19] Lee, T.D. and Yang, C.N. (1957) Physical Review, 105, 1671.
https://pdfs.semanticscholar.org/fa77/6334905b693c0c45b082d6b45d60c0de5287.pdf
[20] Yang, C.N. (1957) The Law of Parity Conservation and Other Symmetry Laws in Physics. Nobel Lecture.
https://www.nobelprize.org/uploads/2018/06/yang-lecture.pdf
[21] Lee, T.D. (1957) Weak Interaction and Nonconservation of Parity. Nobel Lecture.
https://www.nobelprize.org/uploads/2018/06/lee-lecture.pdf
[22] Lee, T.D. (1966) Physics Today, 19, 23-31.
https://doi.org/10.1063/1.3048099
[23] Lee, T.D. (2006) New Insights to Old Problems.
https://arxiv.org/abs/hep-ph/0605017
[24] Yndurain, F.J. (1995) Physics Letters B, 345, 524-526.
https://doi.org/10.1016/0370-2693(94)01677-5
[25] CMS Collaboration, CERN (2019) Search for a W’ Boson Decaying to a τ Lepton and a Neutrino in Proton-Proton Collisions at √s = 13 TeV.
https://arxiv.org/pdf/1807.11421.pdf
[26] Higgs, P.W. (1964) Physics Letters, 12, 132-133.
https://doi.org/10.1016/0031-9163(64)91136-9
[27] Englert, F. and Brout, R. (1964) Physical Review Letters, 13, 321-323.
https://doi.org/10.1103/PhysRevLett.13.321
[28] Guralnik, G.S., et al. (1964) Physical Review Letters, 13, 585-587.
https://doi.org/10.1103/PhysRevLett.13.585
[29] Harari, H. (1979) Physics Letters B, 86, 83-86.
https://inspirehep.net/files/ba8aff83b7524d346172cc5801dd9046
https://doi.org/10.1016/0370-2693(79)90626-9
[30] Shupe, M.A. (1979) Physics Letters B, 86, 87-92.
https://doi.org/10.1016/0370-2693(79)90627-0
[31] Robson, B. (2012) The Generation Model of Particle Physics. In: Kennedy, E., Ed., Particle Physics, InTech, Rijeka, 1-28.
https://doi.org/10.5772/35071
http://www.issp.ac.ru/ebooks/books/open/Particle_Physics.pdf
[32] Robson, B. (2013) Advances in High Energy Physics, 2013, Article ID: 341738.
http://downloads.hindawi.com/journals/ahep/2013/341738.pdf
https://doi.org/10.1155/2013/341738
[33] Harari, H. (1983) The Structure of Quarks and Leptons. Scientific American, 56-67.
https://doi.org/10.1038/scientificamerican0483-56
[34] LHCb Collab. CERN (2020) Measurement of CP-Averaged Observables in the B0 → K*0μ+μ- Decay.
https://cds.cern.ch/record/2712641/files/LHCb_PAPER_2020_002.pdf
[35] Lee, B.W. and Weinberg, S. (1977) Physical Review Letters, 39, 165-168.
https://pdfs.semanticscholar.org/216a/42e3c22075dee51caa0e58bcf185a4e8e07f.pdf
[36] Meessen, A. (2017) Journal of Modern Physics, 8, 268-298.
https://www.scirp.org/pdf/JMP_2017022811385764.pdf
https://doi.org/10.4236/jmp.2017.82018
[37] Bernabei, R., et al. (2008) The European Physical Journal C, 56, 333-355.
https://inspirehep.net/files/5ea3e04c7f9a1617afc488dd4f20fcef
https://doi.org/10.1140/epjc/s10052-008-0662-y
[38] Kelso, C., et al. (2013) Lowering the Threshold in the DAMA Dark Matter Search.
https://arxiv.org/abs/1306.1858
https://doi.org/10.1088/1475-7516/2013/09/022
[39] Bernabei, R., et al. (2019) Nuclear Physics and Atomic Energy, 19, 307-325.
https://arxiv.org/abs/1805.10486
https://doi.org/10.15407/jnpae2018.04.307
[40] Baum, S., et al. (2019) Physics Letters B, 789, 262-269.
https://arxiv.org/pdf/1804.01231.pdf
https://doi.org/10.1016/j.physletb.2018.12.036
[41] Lee, H.S. (2019) Dark Matter Search with NaI (Tl) Crystals COSINE-100 Experiment.
https://www.lowbg.org/ugnd/workshop/sympo_all/201903_Sendai/slides/8am/8am_4.pdf
[42] Dorenbos, P. (2010) IEEE Transactions on Nuclear Science, 57, 1162-1167.
https://doi.org/10.1109/TNS.2009.2031140
[43] Aprile, E., et al. (2020) Physical Review D, 102, Article ID: 072004.
https://arxiv.org/pdf/2006.09721.pd
[44] Johnston, H. (2016) World’s Most Sensitive Dark-Matter Search Comes up Empty Handed. Physics World.
[45] PandaX.
https://en.wikipedia.org/wiki/PandaX
[46] LUX-ZEPLIN.
https://www.sanfordlab.org/experiment/lux-zeplin
[47] Krasznahorkay, A.J., et al. (2015) Physical Review Letters, 116, Article ID: 042501.
https://arxiv.org/abs/1504.01527
[48] Krasznahorkay, A.J., et al. (2019) New Evidence Supporting the Existence of the Hypothetic X17 Particle.
https://arxiv.org/pdf/1910.10459.pdf
https://doi.org/10.1103/PhysRevLett.116.042501
[49] Rose, L.D., et al. (2019) Frontiers in Physics, 7, 73.
https://www.frontiersin.org/articles/10.3389/fphy.2019.00073/full
[50] Hoover, S. (2007) Journal of Physics: Conference Series, 81, Article ID: 012009.
https://iopscience.iop.org/article/10.1088/1742-6596/81/1/012009/pdf
https://doi.org/10.1088/1742-6596/81/1/012009
[51] Aartsen, M.G., et al. (2014) Physical Review Letters, 113, Article ID: 101101.
https://arxiv.org/abs/1405.5303
[52] Gorham, P.W., et al. (2016) Physical Review Letters, 117, Article ID: 071101.
https://arxiv.org/pdf/1603.05218.pdf
[53] Bennett, G.W. (2004) Measurement of the Negative Muon Anomalous Magnetic Moment at 0.7 ppm.
https://www.g-2.bnl.gov/hepex0401008.pdf
https://doi.org/10.1103/PhysRevLett.92.161802
[54] Aoyama, T., et al. (2020) Physics Reports, 887, 1-166.
https://arxiv.org/abs/2006.04822
[55] Abi, B., et al. (2021) Physical Review Letters, 126, Article ID: 141801.
https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.126.141801
[56] LHCb Collaboration (2012) Physical Review Letters, 108, Article ID: 111602.
https://arxiv.org/abs/1112.0938
[57] Aaij, R., et al. (LHCb Collaboration) (2019) Physical Review Letters, 122, Article ID: 211803.
https://arxiv.org/pdf/1903.08726.pdf
[58] CERN (2014) The Matter-Antimatter Asymmetry Problem.
https://cds.cern.ch/record/1998489?ln=en
[59] Sakharov, A.D. (1967) JETP Letters, 5, 24-27.
http://www.jetpletters.ac.ru/cgi-bin/articles/download.cgi/1643/article_25089.pdf
https://doi.org/10.1070/PU1991v034n05ABEH002497
[60] Lemaître, G. (1927) Annales de la Société scientifique de Bruxelles, A47, 49-59, 83-89.
http://www-cosmosaf.iap.fr/LEMAITRE.pdf
[61] Luminet, J.P. (2014) Lemaître’s Big Bang. Frontiers of Fundamental Physics 14, Marseille, 15-18 July 2014, 10 p.
https://doi.org/10.22323/1.224.0214
https://arxiv.org/ftp/arxiv/papers/1503/1503.08304.pdf
[62] Meessen, A. (2010) Journal of Unconventional Electromagnetics and Plasmas (UEP, India), 4, 163-179.
https://www.meessen.net/AMeessen/Ball-Lightning-Theory.pdf
[63] Luminet, J.P. (2011) General Relativity and Gravitation, 43, 2911-2928.
https://arxiv.org/pdf/1105.6271.pdf
https://doi.org/10.1007/s10714-011-1213-7
[64] Meessen, A. (2017) Journal of Modern Physics, 8, 251-267.
http://file.scirp.org/pdf/JMP_2017022811200542.pdf
https://doi.org/10.4236/jmp.2017.82017
[65] Abbot, B.P., et al. (2016) Physical Review Letters, 116, Article ID: 241103.
https://arxiv.org/ftp/arxiv/papers/1606/1606.04855.pdf
[66] Fryer, C.L. and New, K.C.B. (2011) Living Reviews in Relativity, 14, 1.
[67] Kotake, K. and Kuroda, T. (2017) Gravitational Waves from Core-Collapse Supernovae. In: Handbook of Supernovae, Springer, Berlin, 1671-1698.
https://doi.org/10.1007/978-3-319-21846-5_9
[68] Meessen, A. (2011) Space-Time Quantization, Elementary Particles and Dark Matter.
https://arxiv.org/ftp/arxiv/papers/1108/1108.4883.pdf
