Law of Probability Equilibrium (LPE) and Bounded Randomness: Light in the Universe, Photon Emission and Black Holes
1. Introduction
The treatment of randomness and determinism has been a constant concern throughout human history, encompassing philosophical, mathematical and physical dimensions. From antiquity to the present, these concepts have been the subject of intense debates and in-depth studies. For Aristotle, the possibility of identifying the cause of an event did not exclude that it occurred by chance. Galileo (1632) [1] stated that whenever a firm and constant alteration is observed in an effect, there must be a firm and constant alteration in the cause. Spinoza (1677) [2] argued that freedom is the cause of what happens but also the cause of its determination. According to Laplace (1819) [3], chance is a measure of our ignorance regarding the true causes. Every phenomenon, no matter how small, has a cause in the laws of nature (Poincaré, 1918) [4].
Planck (1900) [5] recognized that the quantum hypothesis challenged classical physics and that the discontinuity in thermal radiation led to accepting the statistical nature of emission. Later, in 1914, Planck [6] proposed that quantum indeterminacy is, in fact, an indeterminacy of knowledge: while the system is fully determined, we do not know the causes behind chance. Einstein, Podolsky, and Rosen (1935) [7] argued that quantum mechanics offered an incomplete description of the physical world, as it predicted phenomena such as action at a distance without allowing deterministic predictions. In contrast, Born (1956) [8] asserted that the determinism of classical physics was an illusion based on the overestimation of logical-mathematical concepts. Later, Born (1960) [9] acknowledged that quantum physics implies renouncing the full prediction of future events, which also means giving up the total determinability of the position and time of a particle whose momentum and energy are known.
By the mid-20th century, classical determinism had begun to decline, giving way to the necessity of statistical and probabilistic approaches. Thus, causality and randomness became complementary concepts in explaining physical phenomena. For example, the inequality derived from Bell’s experiments states that two laboratories measuring the same phenomenon may differ in their values with a probability of at least 5/9 (Cucchietti, Paz, Zurek, 2005) [10].
In recent decades, Pearl (2000) [11] noted that probability introduces uncertainty into cause-and-effect relationships, emphasizing that a high numerical correlation between two variables does not necessarily imply causation (Pearl, 2001) [12]. This idea is based on Kolmogorov’s (1933) [13] axiomatic framework, which formalizes probability theory and enables the analysis of random behavior in various contexts. More recently, chaotic systems, which incorporate both stochastic and deterministic components, have gained relevance (Lucio, Caballero, 2005) [14]. This approach integrates macroscopic classical physics with probabilistic and subatomic quantum physics.
From a quantum perspective, light, whose detection is based on probabilities, remains a fascinating enigma that invites us to explore beyond the visible. In 1823, Olbers wondered why the sky was not brighter. In an infinite and static universe, the high density of stars in every line of sight should make the night sky luminous. However, we observe both dark and bright regions in the universe, raising fundamental questions about the existence of light and our ability to detect it.
In this context, this research seeks to establish a probability law that defines the boundary between conditioning and freedom, certainty and chance, impossibility and possibility, as well as between causality and randomness. In other words, to formalize a mathematical tool to calculate the probability of random phenomena in the universe, photon emission, and light in the cosmos.
2. Method
Based on the axiomatic and empirical-frequentist approaches to probability, the Law of Probability Equilibrium (LPE) and its associated concept, the Probability Potential (|ψp|), were formulated and demonstrated. To support this work, the probability theories of Bernoulli, Gauss, Chebyshev, Borel, and Kolmogorov were integrated. The LPE was experimentally verified using random functions in an electronic spreadsheet, which employs mathematical formulas to generate sequences of pseudo-random numbers. Additionally, to validate the obtained results, numerical, tabular, and graphical statistical methods typical of descriptive statistics (Evans, Rosenthal, 2010) [15] were used.
Subsequently, the LPE was applied to estimate the probability of detecting photons from a light source, as well as the probability of detecting stars in the universe. For this purpose, six different approaches were employed: analog, probabilistic, wave physics, corpuscular physics, astronomical, and cosmological. Complementarily, inductive statistical methods were implemented to adjust regression models in computational calculation programs. Finally, the LPE was validated in the context of energy balance and photon emission.
3. Law of Probability Equilibrium (LPE)
3.1. Statement
Let Ω be a sample space that satisfies Kolmogorov’s axioms (1956) [16], and let A and A′ be two mutually exclusive events with strictly positive probabilities, such that:
P(A) = p, P(A′) = 1 − p, with 0 < p < 1
Then, the cumulative distribution functions of both events satisfy F(A) = F(A′) = ½ at their respective medians, and for every n ∈ ℕ the equality F(A) = F(A′) holds at equilibrium. The functions are mutually constrained, cannot accumulate beyond that point, and define a state of probabilistic equilibrium and entanglement, regardless of the initial asymmetry in the probability values of the two events.
3.2. Demonstration
Consider a sequence of independent trials in which, in each trial, event A occurs with probability p and event A′ occurs with probability 1 − p. We define the random variables:
N: number of trials needed for A to occur (geometric distribution with parameter p).
N′: number of trials needed for A′ to occur (geometric distribution with parameter 1 − p).
The corresponding cumulative distribution functions (CDFs) are:
F_N(n) = P(N ≤ n) = 1 − (1 − p)^n (1)
F_N′(n) = P(N′ ≤ n) = 1 − p^n (2)
The median of a discrete variable is defined as the smallest integer m that satisfies F(m) ≥ ½. Let:
m: median of N
m′: median of N′
We say that the system is in a state of equilibrium (in the sense that the “typical” times for the occurrence of A and A′ are equal) if and only if m = m′. The LPE states that m = m′ if and only if p = ½. That is, equilibrium in occurrence (measured through the medians of the waiting times in the geometric distribution) is reached if and only if both events occur with the same probability, p = 1 − p = ½.
The median of N (the number of trials until A occurs) is the smallest integer m whose CDF satisfies:
F_N(m) = 1 − (1 − p)^m ≥ ½ (3)
Solving this inequality:
(1 − p)^m ≤ ½ (4)
Taking logarithms (using the natural logarithm):
m ln(1 − p) ≤ ln(½) (5)
from where it is obtained:
m = ⌈ln(½)/ln(1 − p)⌉ (6)
Observation: since 0 < p < 1, both ln(½) < 0 and ln(1 − p) < 0, so m is positive.
The median of N′ (the trials until event A′ occurs) is similarly defined, and the CDF of N′ is F_N′(n) = 1 − p^n. The median m′ is defined as the smallest n such that:
1 − p^m′ ≥ ½ (7)
which implies:
p^m′ ≤ ½ (8)
Taking logarithms:
m′ ln(p) ≤ ln(½) (9)
Solving for m′:
m′ = ⌈ln(½)/ln(p)⌉ (10)
3.3. Consequence: Equilibrium Condition
When m = m′, the system is in equilibrium. Therefore, ln(½)/ln(1 − p) = ln(½)/ln(p). Since ln(½) is a negative but non-zero number, it can be canceled:
ln(1 − p) = ln(p) (11)
Since the logarithmic function is strictly increasing (and therefore injective), we have:
1 − p = p (12)
Solving for p, we obtain:
p = ½ (13)
Since A and A′ are complementary events in the same sample space Ω, if p = ½ then 1 − p = ½. Substituting in equations (1) and (2), both CDFs become F(n) = 1 − (½)^n, so the equality F_N(n) = F_N′(n) holds for every n, therefore:
F_N(m) = F_N′(m′) = ½
The above equality implies that if one of the distribution functions has accumulated a probability of ½, the other necessarily has done so as well, simultaneously in the same trial. Therefore, both event A and event A′ have occurred at least once, and an infinite number of failed attempts is not required. This demonstrates that events with positive and mutually exclusive probabilities are entangled, as are their associated random variables. The probability system prevents either of the cumulative distribution functions from reaching a value of one, since F(A) = F(A′) = ½. This property redefines the traditional concept of a distribution function: instead of each function individually reaching the value of one, that value is shared between complementary functions that each converge to ½. The LPE thus encompasses the entire probabilistic field, since events with probabilities of zero or one correspond to deterministic cases. In this way, it reflects a hidden determinism within randomness: although the process is random, the equilibrium reached guarantees with certainty that both events will occur if they initially have a probability greater than zero.
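This equilibrium condition can be checked numerically. Below is a minimal Python sketch (using only the geometric-distribution medians of equations (6) and (10); the probe values of p are illustrative choices):

```python
import math

def geometric_median(p):
    """Smallest integer m with 1 - (1 - p)**m >= 1/2, per equations (3)-(6)."""
    return math.ceil(math.log(0.5) / math.log(1 - p))

for p in (0.1, 0.2, 0.5, 0.8, 0.9):
    m = geometric_median(p)            # median waiting time for A
    m_prime = geometric_median(1 - p)  # median waiting time for A'
    print(p, m, m_prime, m == m_prime)
# Only p = 0.5 yields m = m' (both equal 1), as the demonstration concludes.
```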
4. Probability Potential ∣ψp∣
4.1. Observation
The LPE holds in all cases of complementary events within the sample space Ω, as long as the empirical probabilities of the events are different from zero (0) and one (1). If event A has a probability of one (1), it would be a certain event, and its complement A′ would be an impossible event with a probability of zero (0). The same applies in the opposite case. These initial conditions would be deterministic rather than probabilistic, and are therefore excluded from the probabilistic domain and from the LPE. If both events have positive probabilities, then due to the LPE, they will have absolute frequencies strictly greater than zero (0) and strictly less than n. The observed absolute frequencies of event A would be 0 < fi < n, while those of the complementary event A′ would be f′i = n − fi, with fi + f′i = n in each trial. Consequently, the probability of an event transitions toward its frequentist or empirical version, which is obtained from the ratio between the observed frequencies and the number of trials. As a direct empirical derivation of the LPE, the probability potential |ψp| is defined.
4.2. Definition of Probability Potential ∣ψp∣
Let A be an event with a probability strictly greater than zero (p > 0), and let A′ be its complementary event in the sample space Ω, also with positive probability. Let F(A) be the cumulative probability distribution function of A, and let F(A′) be the cumulative probability distribution function of A′, such that F(A) → ½ and F(A′) → ½ (as a consequence of the LPE), where (1 − p) is the probability of occurrence of the complementary event A′. The probability potential |ψp| is defined as:
|ψp| = |F(A) − F(A′)| (14)
4.2.1. Case when ∣ψp∣ = F(A) − F(A′) = 0 (median delay)
By the axiom of certainty, P(Ω) = 1, which means the probability potential is zero (|ψp| = 0) when F(A) = F(A′), and both functions are equal to ½. Starting from F(A) = 1 − (1 − p)^n, we have F(A) = ½ (by LPE). Equating both expressions: 1 − (1 − p)^n = ½. This leads to the equation: (1 − p)^n = ½. Let q = 1 − p, then q^n = ½, so q = (½)^(1/n). In turn, q is the probability of occurrence of A′. Therefore, if by the probability axiom we have that p + q = 1, then p = 1 − q. Solving for the probability of A, we get: p = 1 − (½)^(1/n). Then, in trial n, a probability of ½ has been accumulated, which is the value of F(A) as already demonstrated. Therefore, we define this trial as the median frequency (mfA) of event A accordingly:
mfA = log(1−p)(½) = ln(½)/ln(1 − p) (15)
where:
mfA is the median frequency of event A (delay), that is, the number of trials required to reach a cumulative probability of ½.
4.2.2. Case when ∣ψp∣ = F(A′) = 1/3 (average delay)
If the probability potential is equal to the improbability F(A′), the equation is:
|ψp| = F(A) − F(A′) = 1 − 2F(A′) = F(A′) (16)
The equation implies: F(A′) = 1/3 and F(A) = 2/3. If n is the average delay 1/p, then F(A′) = (1 − p)^(1/p) ≈ 1/3, therefore 1 − p = (1/3)^p. Defining the probability of the complement as q = (1/3)^p and applying the total probability condition p + q = 1:
p(A) = 1 − (1/3)^p (17)
Since we are dealing with random experiments, p(A) represents the relative frequency of event A, denoted (hi), the relative frequency in trial i. The probability is self-adjusting: if p(A) increases, (1/3)^p decreases, maintaining the balance. The probability never reaches 1, since (1/3)^p is always positive, meaning p(A) will never exactly equal 1. This self-regulation occurs because the probability of success in A dynamically adjusts according to the number of experiments conducted.
4.2.3. Case when |ψp| = γ (maximum delay) (Euler-Mascheroni constant)
Thus, in empirical functions, if |ψp| = 0, by the statement it is given that p = 1 − (½)^(1/n). Additionally, if we assume p = 1/n, we arrive at p = 1 − (½)^p. From equation (17), we know that p = 1 − (1/3)^p. By equating both expressions, we obtain: (½)^p = 1 − (1/3)^p ⇒ 1 = (½)^p + (1/3)^p. The numerical solution to this equation is p ≈ 0.788605. Therefore, 1 − p = 0.21140. Thus, the maximum accumulable probability of the system is 0.788605, giving an improbability of 0.21140. Finally, |ψp| = 0.788605 − 0.21140 = 0.57721, a value that corresponds to the Euler-Mascheroni constant (γ). At this point, it must be that F(A) = 0.788605 and F(A′) = 0.21140. In turn, at this point: |ψp| = F(A) − F(A′) = γ.
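A short bisection sketch (Python) for the equation above; it converges to a root near 0.788, and the difference between the root and its complement lands in the vicinity of γ ≈ 0.5772:

```python
def f(p):
    # residual of (1/2)**p + (1/3)**p = 1
    return 0.5 ** p + (1 / 3) ** p - 1

lo, hi = 0.5, 1.0        # f(0.5) > 0 and f(1.0) < 0, so a root lies between
for _ in range(60):      # plain bisection
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)

p = (lo + hi) / 2
print(p, 1 - p, p - (1 - p))   # ~0.788, ~0.212, ~0.576
```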
If |ψp| = F(A), then F(A′) = 0, since |ψp| = |F(A) − F(A′)|. According to the definition of the probability potential, F(A′) = 0 can only occur if event A′ never accumulates probability. If we assume that F(A) → 1 as n → ∞, this occurs when event A repeats in a sequence of n trials, leading to the conclusion that the probability of A′ is zero. This implies that event A′ is impossible and does not fit within the LPE, which only considers events with positive probabilities and probabilistic cases.
In an infinite number of trials, the probability of success may tend to 1, according to Borel-Cantelli’s lemma (Borel, 1909) [17], under specific conditions such as when the events are independent and the sum of their probabilities is infinite. This differs from what is stated in the LPE.
4.2.4. Implications of LPE and the Probability Potential ∣ψp∣
The appearance of values like 2/3 (≈0.666) or 0.788, despite the equilibrium being at ½, stems from differentiating observation levels in the LPE model. The median represents the system’s regulatory center, where F(A) = F(A′) = ½, a point of symmetry that balances the internal tensions between event A and its complement. The average shifts towards 2/3 due to the asymmetry of the geometric distribution, as event A requires several trials to occur. The maximum accumulable value of 0.788 marks the upper limit tolerated by the system, corresponding to the corrected average potential |ψp| ≈ 0.577, associated with the Euler constant. Although the structural equilibrium is at ½, the observed average can exceed this value due to the asymmetric nature of the distribution, and the system can accumulate up to 0.788 without breaking the equilibrium.
When both events have positive probabilities, the LPE ensures that randomness remains within a bounded interval, preventing event A from occurring indefinitely with the permanent failure of its complement A′, and vice versa. That is, neither cumulative function tends asymptotically to 1; instead, both converge to ½. This eliminates the absolute certainty of either event occurring alone and affirms, instead, the certainty that both occur, establishing bounded randomness and a shared determinism within a probabilistic framework.
Example 1. Let p > 0 be the probability associated with an event A, with p = 0.01, and let (1 − p) > 0 be the probability associated with the event A′, with (1 − p) = 0.99. Then: (0.01)^(log₀.₀₁ ½) = (0.99)^(log₀.₉₉ ½) → (0.01)^0.1505 = (0.99)^68.9676 → ½ = ½. In this case the probabilities p and 1 − p differ from ½; nevertheless, the LPE is fulfilled because the accumulated value of ½ is reached by both functions.
Example 2. Programmed verification: for row 10000 of the spreadsheet, the following formulas were proposed in successive columns: =A9999+1; =RANDBETWEEN(1;10); =IF(B10000=2;1;0); =COUNTIF(C$1:C10000;1); =E10000/A10000; =1−POWER(1/3;1−POWER(½;1/LOG(½;1−F10000))); =1−POWER(1/3;F10000). The programming indicates that in row 10000 a random number between 1 and 10 was drawn, counted as a success when it equaled two (2), while the accumulated successes were counted from the first row. The results were: 9999; 6; 964; 0.10; 0.10; 0.10. The three final values verify the LPE.
Example 3. Let the cumulative probability function of an event A with a positive probability (p = 0.10) be determined by the simple expression F(A) = 1 − (0.9)^n. It is cumulative because it forces permanent failure, with an increasing exponent that marks the number of trials. The accumulation of probability necessarily comes at the expense of the de-accumulation of the probability of event A′, which in this case follows the function F(A′) = (0.9)^n, called the de-accumulative or improbability function (Figure 1).
Figure 1. Probability potential |ψp|, cumulative distribution function of A (probability), and de-cumulative distribution of A′ (improbability).
If |ψp| = 0, the graph shows how F(A) and F(A′) intersect at ½, when the probability potential |ψp| = 0, which implies the probabilistic equilibrium marked by the LPE. It can be observed that the intersection point occurs at the median frequency of A given by the expression log₀.₉(½) = 6.5788. This value represents the number of trials required to reach a probability of ½ of occurrence of event A, designated in the LPE as the median delay of event A. In contrast, the median delay of event A′ is log₀.₁(½) = 0.3010. The average delays of A and A′ are respectively 1/0.1 = 10 and 1/0.9 = 1.1111. The probability of event A is given by p(A) = (½)^(1/0.30103) = 0.10, and the probability of event A′ is given by p(A′) = (½)^(1/6.5788) = 0.90, both derived from the LPE.
If
, also according to the LPE, when
, the expression (q) = (1/3) (1/ⁿ) holds, with 1/n = p. It is satisfied because (1/3) (1/10) = 0.9, which is precisely
. Thus, p(A)= 1 − (1/3)(1/ⁿ) = 0.1. At this point, it can be seen that F(A) = 2/3 and
, when it occurs in repetition 10.5 (continuity correction). When the delay for random reasons exceeds the median value and approaches the average value, the probability (𝑝) self-regulates by decreasing its value. Therefore, the probability at the median is higher than the probability at the average: p (median) > p (mean). At the median, we have: 𝑝 = 1 − (½)¹/ⁿ/2, and at the average, we have: 𝑝=1 − (1/3)¹/ⁿ. Given that 2 ln(2) ≈ 2 (0.693) = 1.386 and ln(3) ≈1.099, therefore 2 ln(2) > ln(3).
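The quantities discussed in this example can be reproduced with a few lines of Python (values exactly as in the text; the 10.5 is the continuity-corrected average delay):

```python
import math

p = 0.10                              # probability of event A in Example 3

median_A = math.log(0.5, 1 - p)       # log_0.9(1/2)  = 6.5788 trials
median_Ac = math.log(0.5, p)          # log_0.1(1/2)  = 0.3010 trials
avg_A, avg_Ac = 1 / p, 1 / (1 - p)    # 10 and 1.1111 trials

# probabilities recovered from the median delays, as in the text
p_A = 0.5 ** (1 / median_Ac)          # = 0.10
p_Ac = 0.5 ** (1 / median_A)          # = 0.90

# cumulative and de-cumulative functions at the corrected average delay
n = avg_A + 0.5                       # 10.5
F_A = 1 - (1 - p) ** n                # ~ 2/3
F_Ac = (1 - p) ** n                   # ~ 1/3
print(median_A, median_Ac, p_A, p_Ac, F_A, F_Ac)
```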
5. Validity of the LPE in Discrete Random Variables
Let X be a random variable with a binomial probability distribution such that X ~ Bin(n, hi), where n is the number of trials and hi is the probability of success of event A, strictly greater than zero (0), associated with the trials and defined as hi = fi/n, where fi is the absolute frequency of the successes of the experiments. The domain of X is the set of possible numbers of successes in n trials, i.e., {0, 1, 2, …, n}, and the codomain of X is the interval [0, 1], corresponding to the relative frequencies hi of event A. According to this law, the probability of the event and its complement converge to ½ at equilibrium, implying that the system maintains balance in its probability structure. We take the relative frequency (hi) and the average absolute frequency (n·hi) and define the probability distribution function of the random variable X from the statement, based on the expression of the binomial model according to:
F(X) = P(X ≤ x) = Σ (k = 0 to x) C(n, k) (hi)^k (1 − hi)^(n−k) (19)
Thus, for the same Bernoulli trials, we define another binomial random variable Y such that Y ~ Bin(n, 1 − hi), where n is the number of trials and (1 − hi) is the probability of success of event A′, strictly greater than zero (0), associated with the trials and defined as 1 − hi = (n − fi)/n. The domain of Y is the set of possible numbers of failures in n trials, i.e., {0, 1, 2, …, n}, and the codomain of Y is the interval [0, 1], corresponding to the relative frequencies (1 − hi) of event A′. The probability distribution function F(Y) can be expressed from the binomial model according to:
F(Y) = P(Y ≤ y) = Σ (k = 0 to y) C(n, k) (1 − hi)^k (hi)^(n−k) (20)
Equations 19 and 20 present complementary combinations, so they can be equated according to:
F(X) = F(Y) (21)
Each member of the previous equality can be referred to the number of trials by dividing the frequencies by n: fi/n and (n − fi)/n. Then the respective probabilities of the events A and A′ are substituted: hi = fi/n and 1 − hi = (n − fi)/n. Furthermore, if the domain of X is defined by absolute frequencies 0 < fi < n, then there is at least one value of the random variable X, that is, an fi, that satisfies the equality; likewise, if the domain of Y is defined by absolute frequencies 0 < n − fi < n, then there is at least one value of the random variable Y, that is, an (n − fi), that satisfies it. Therefore, the equality can be rewritten as F(fi) = F(n − fi). Substituting the relative frequency in each member, we have F(hi) = F(1 − hi). This is a consequence of the symmetry of the problem, since both probabilities refer to complementary events that converge to equal values at the equilibrium point. Then we simplify as:
hi = 1 − hi
However, hi and (1 − hi) represent complementary probabilities in the model. Within the framework of the LPE, the symmetry of equilibrium implies that the only possible solution is hi = 1 − hi, since the accumulation and reduction of probability must be equal to maintain equilibrium. Moreover, hi and (1 − hi) are the only probabilities defined for the sample space omega (Ω). Therefore, by the probability theorem for mutually exclusive events (empty intersection), we have hi + (1 − hi) = 1 and P(A ∩ A′) = 0. Similarly, we have 2hi = 1. Thus, we can assert that necessarily:
hi = 1 − hi = ½ (22)
At equilibrium, the probability mass is distributed symmetrically, leading to the stabilization of individual probabilities at ½, ensuring that F(X) = F(Y). Given that the random variable X can only take two mutually exclusive kinds of outcome, we evaluate F(X) for a number of successes of A equal to zero (0), that is, permanent failure, according to:
F(X = 0) = (1 − hi)^n (23)
When performing repeated Bernoulli experiments for the random variable X in which we obtain only successive failures, we will concomitantly have streaks of permanent successes in the random variable Y, which can be evaluated as:
P(Y = n) = (1 − hi)^n (24)
Given that by equation 21 we had F(X) = F(Y), we can then equate equations 23 and 24, according to:
F(X = 0) = P(Y = n) (25)
By equation 22, we found that hi = 1 − hi = ½, so we can substitute in the previous expression according to:
F(X = 0) = P(Y = n) = (½)^n (26)
Since hi and (1 − hi) are the only probabilities in Ω, they must sum to 1. At equilibrium, their values stabilize at ½, and the cumulative distribution functions of A and A′ are in equilibrium according to the LPE: F(X) = ½ = F(Y). As n → ∞, the probabilities converge to their expected values, and the streaks of permanent failure or permanent success in equation (26) vanish. The symmetry of the distribution then implies that the cumulative probabilities of X and Y must be equal at equilibrium. This confirms that F(X) = ½ = F(Y) is not just a property of the finite case but an intrinsic characteristic of probabilistic equilibrium.
6. Validity of the LPE in Continuous Random Variables
By the law of large numbers (LLN) (Chebyshev, 1867) [18], it follows that as the number of trials n tends to infinity (∞), the relative frequency of X (hi) converges to the theoretical probability p, and likewise the relative frequency of Y (1 − hi) converges to the theoretical probability 1 − p, according to:
lim (n → ∞) P(|hi − p| < ε) = 1 (27)
lim (n → ∞) P(|(1 − hi) − (1 − p)| < ε) = 1 (28)
Both expressions confirm that equations 27 and 28 hold in limit cases, and the discrete random variables X and Y converge to random variables with continuous approximation. In these equations, ε represents a small margin of error used to account for minor deviations in the system; it is introduced to ensure validity despite small discrepancies, not as a random variable but as an arbitrarily chosen error threshold. The random variable X is binomial with mean µ = np and variance σ² = np(1 − p), and by the De Moivre-Laplace central limit theorem, when n is large, the standardized variable Z = (X − µ)/σ = (X − np)/√(np(1 − p)) is approximately distributed as a standard normal, Z ~ N(0, 1). If the substitution is made for the relative frequencies (hi = fi/n):
Z = (hi − p)/√(p(1 − p)/n) (29)
The relative frequencies hi are defined as the ratio of the observed frequency fi to the total number of trials n, i.e., hi = fi/n, which allows the substitution fi = n·hi in the equation. When n is large enough, the normal approximation is valid, justifying the use of the normal distribution for Z. This substitution leads to a standardized normal variable centered at Z = 0, confirming that the expected value of Z is zero:
lim (n → ∞) Z = (p − p)/√(p(1 − p)/n) = 0 (30)
This expression Z = 0 arises because, as n → ∞, the relative frequency hi converges to the true probability p, and the difference between the observed frequency and the expected value vanishes. The expected value of the standardized normal variable is 0, but this does not imply that Z = 0 is the only possible value: Z is centered at 0, meaning it follows a normal distribution with mean zero, Z ~ N(0, 1), and can take positive and negative values around 0. The numerator of the previous expression is zero (0), and by definition, the random variable Z ~ N(0, 1) evaluated at Z = 0 has an accumulated probability equal to ½ according to:
F(0) = P(Z ≤ 0) = ½ (31)
Due to the symmetry property of the Gaussian model, when evaluating the probability of Z assuming values greater than zero (0), an accumulated probability of ½ is also obtained, thus satisfying:
P(Z ≥ 0) = ½ (32)
These equations represent the cumulative distribution function F(z) of the standard normal variable Z. Due to the symmetry of the normal distribution, the probability that z ≤ 0 is ½, and the same applies for z ≥ 0. This symmetry ensures that the cumulative probability at Z = 0 is ½, with the probabilities for Z < 0 and Z > 0 each summing to ½, maintaining balance. This follows from the law of total probability, which asserts that the sum of mutually exclusive events must equal 1. In this case, the probabilities for Z being less than and greater than 0 sum to 1, satisfying the probability axiom. The stationary equilibrium of both cumulative probability distributions is established according to the LPE, consistent with the PGT (Probability Gradient Theorem) (Traversa, 2021) [19].
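As an empirical sketch of equations (29)-(32), the following Python snippet standardizes simulated relative frequencies and checks that the probability mass splits into ½ and ½ around Z = 0 (the values of p, n, and the number of repetitions are illustrative assumptions, not values from the text):

```python
import math, random

random.seed(1)
p, n, reps = 0.2, 1000, 2000

zs = []
for _ in range(reps):
    f = sum(random.random() < p for _ in range(n))   # one binomial draw
    h = f / n                                        # relative frequency
    zs.append((h - p) / math.sqrt(p * (1 - p) / n))  # equation (29)

left = sum(z <= 0 for z in zs) / reps
print(left)                          # ~ 0.5, matching equation (31)
print(0.5 * (1 + math.erf(0)))       # Phi(0) = 0.5 exactly
```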
7. Results and Discussion
7.1. Empirical Verification of LPE
A spreadsheet setup is proposed as follows: first column: n (iteration number); second column: random (random number between 1 and 5); third column: result (success 1, failure 0); fourth column: delay of event A; fifth column: delay of event A′; sixth column: absolute frequency of A; seventh column: absolute frequency of A′; eighth column: relative frequency of A; ninth column: relative frequency of A′; tenth column: cumulative binomial distribution of A; eleventh column: cumulative binomial distribution of A′; twelfth column: probability potential |ψp|.
For example, for a random variable X with random results of natural numbers 1, 2, 3, 4, and 5, a conditional function designates event A as a success if we obtain a two (2). The initial a priori probability of A is p = 1/5. For the random variable Y of the complementary event A′, the a priori probability is 1 − p = 4/5. All according to the mathematical syntax presented in Table 1.
Table 1. Syntax in spreadsheet for row number ten (10).
| Variable | Excel function |
| --- | --- |
| Random | =RANDBETWEEN(1;5) |
| Success | =IF(B10=2;1;0) |
| Delay (A) | =IF(C10=1;0;D9+1) |
| Delay (A′) | =IF(C10=0;0;E9+1) |
| Absolute frequency (A) | =COUNTIF(C$2:C10;1) |
| Absolute frequency (A′) | =COUNTIF(C$2:C10;0) |
| Relative frequency (A) | =F10/A10 |
| Relative frequency (A′) | =G10/A10 |
| Cumulative Binom. Distr. (A) | =1−BINOMDIST(F10;A10;H10;TRUE) |
| Cumulative Binom. Distr. (A′) | =1−BINOMDIST(G10;A10;I10;TRUE) |
| Probability potential ∣ψp∣ | =ABS(J10−K10) |
The results of the 1999 iterations in Table 2 and Figure 2 demonstrate that both binomial cumulative distributions stabilize at respective values of ½ for both events A and A′. Neither function reaches a value of 1, but their sum does, defining the cumulative distribution function of the experiment, which always converges to 1 as the number of iterations increases. This confirms the equilibrium state of the LPE.
Table 2. Verification of the LPE in binomial cumulative probability distributions for two events (A) and (A′) in 1999 iterations, both with positive probabilities [p(A) = 1/5; p(A′) = 4/5].
| | A | B | C | D | E | F | G | H | I | J | K | L |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | n | Random | Success | Delay (A) | Delay (A′) | Abs. Freq. (A) | Abs. Freq. (A′) | Relat. Freq. (A) | Relat. Freq. (A′) | Cumulat. binomial distrib. (A) | Cumulat. binomial distrib. (A′) | Probab. potent. ∣ψp∣ |
| 2 | 1 | 4 | 0 | 1 | 1 | 0 | 1 | 0.000 | 1.000 | 0.00000 | 0.00000 | 0.0000 |
| 3 | 2 | 3 | 0 | 2 | 0 | 0 | 2 | 0.000 | 1.000 | 0.00000 | 0.00000 | 0.0000 |
| 4 | 3 | 4 | 0 | 3 | 0 | 0 | 3 | 0.000 | 1.000 | 0.00000 | 0.00000 | 0.0000 |
| 5 | 4 | 1 | 0 | 4 | 0 | 0 | 4 | 0.000 | 1.000 | 0.00000 | 0.00000 | 0.0000 |
| 6 | 5 | 2 | 1 | 0 | 1 | 1 | 4 | 0.200 | 0.800 | 0.26272 | 0.32768 | 0.0650 |
| 7 | 6 | 1 | 0 | 1 | 0 | 1 | 5 | 0.167 | 0.714 | 0.26322 | 0.33490 | 0.0717 |
| 8 | 7 | 2 | 1 | 0 | 1 | 2 | 5 | 0.286 | 0.750 | 0.32077 | 0.36049 | 0.0397 |
| 9 | 8 | 4 | 0 | 1 | 0 | 2 | 6 | 0.250 | 0.778 | 0.32146 | 0.36708 | 0.0456 |
| 10 | 9 | 4 | 0 | 2 | 0 | 2 | 7 | 0.222 | 0.800 | 0.32190 | 0.37200 | 0.0501 |
| 11 | 10 | 5 | 0 | 3 | 0 | 2 | 8 | 0.200 | 0.794 | 0.32220 | 0.37581 | 0.0536 |
| … | … | … | … | … | … | … | … | … | … | … | … | … |
| 1990 | 1989 | 3 | 0 | 1 | 0 | 410 | 1579 | 0.206 | 0.794 | 0.48678 | 0.49111 | 0.0043 |
| 1991 | 1990 | 1 | 0 | 2 | 0 | 410 | 1580 | 0.206 | 0.794 | 0.48678 | 0.49111 | 0.0043 |
| 1992 | 1991 | 5 | 0 | 3 | 0 | 410 | 1581 | 0.206 | 0.794 | 0.48678 | 0.49111 | 0.0043 |
| 1993 | 1992 | 1 | 0 | 4 | 0 | 410 | 1582 | 0.206 | 0.794 | 0.48678 | 0.49111 | 0.0043 |
| 1994 | 1993 | 3 | 0 | 5 | 0 | 410 | 1583 | 0.206 | 0.794 | 0.48678 | 0.49112 | 0.0043 |
| 1995 | 1994 | 3 | 0 | 6 | 0 | 410 | 1584 | 0.206 | 0.794 | 0.48678 | 0.49112 | 0.0043 |
| 1996 | 1995 | 2 | 1 | 0 | 1 | 411 | 1584 | 0.206 | 0.794 | 0.48680 | 0.49112 | 0.0043 |
| 1997 | 1996 | 1 | 0 | 1 | 0 | 411 | 1585 | 0.206 | 0.794 | 0.48680 | 0.49112 | 0.0043 |
| 1998 | 1997 | 4 | 0 | 2 | 0 | 411 | 1586 | 0.206 | 0.794 | 0.48680 | 0.49113 | 0.0043 |
| 1999 | 1998 | 2 | 1 | 0 | 1 | 412 | 1586 | 0.206 | 0.794 | 0.48681 | 0.49113 | 0.0043 |
| 2000 | 1999 | 2 | 1 | 0 | 2 | 413 | 1586 | 0.207 | 0.793 | 0.48683 | 0.49114 | 0.0043 |
In the same experiment, a continuity correction can be made for the binomial approximation to the Gaussian normal distribution using the following equations: =NORM.DIST(H2000;H1999;SQRT(F1999*H1999*I1999);1) and =NORM.DIST(I2000;I1999;SQRT(F1999*H1999*I1999);1). Both also converge to (0.49).
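The same experiment can be reproduced outside the spreadsheet. The sketch below (Python; the random seed, and hence the exact figures, are illustrative and will not match Table 2 digit for digit) rebuilds the two survival-style binomial accumulations of Table 1 and the probability potential:

```python
import random

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Bin(n, p), via a numerically safe running term."""
    term = (1 - p) ** n                 # P(X = 0)
    total = term
    for j in range(1, k + 1):
        term *= (n - j + 1) / j * p / (1 - p)
        total += term
    return min(total, 1.0)

random.seed(7)
n_trials, successes = 1999, 0
for _ in range(n_trials):
    successes += random.randint(1, 5) == 2   # event A: drawing a two
h = successes / n_trials                     # empirical relative frequency

# Survival-style accumulations, as in Table 1 (=1-BINOMDIST(...)); the second
# uses the identity 1 - P(Y <= n - f) = P(X <= f - 1) to avoid underflow.
F_A = 1 - binom_cdf(successes, n_trials, h)
F_Ac = binom_cdf(successes - 1, n_trials, h)
print(h, F_A, F_Ac, abs(F_A - F_Ac))         # both near 0.49; |psi_p| small
```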
Figure 2. Verification of the LPE in binomial cumulative probability distributions for two events (A) and (A′) in 1999 iterations, both with positive probabilities [p(A) = 1/5; p(A′) = 4/5].
7.2. Probabilistic Duality: Corpuscular and Wave-Like Behavior
In Figure 3, an electronic computational random experiment was simulated based on 1999 iterations of observations. Two complementary events are defined. Initially, probabilities were assigned in the experiment: p = 0.2 for event A and 1 − p = 0.8 for its complement A′. However, after running the model, it was observed that the system reaches a stationary state where the cumulative functions of these events balance at F(X) = 0.5 and F(Y) = 0.5. This indicates that, although A and A′ are opposite events, their effects cancel each other out, establishing a probabilistic equilibrium. In this state:
(1) A is an accumulative process: the more frequently it occurs, the greater F(X) becomes.
(2) A′ is a de-accumulative process: as it occurs, F(Y) decreases.
Both functions are entangled at ½. The particle-like behavior appears when the system is measured: only one of the two possible states is observed, collapsing the wave function into a definite outcome (Schrödinger, 1935) [20]. This is associated with the probability density function f(x), which describes the discrete occurrence of events. The functions f(x) and f(y) represent the likelihood of observing 1 or 0 (success or failure) when measuring A or A′. The interference is destructive: if A occurs, A′ cannot occur. In Figure 3, each peak in f(x) corresponds to a valley in f(y), and vice versa.
Figure 3. Complete verification of the LPE in binomial cumulative probability distributions for two complementary random variables (X) and (Y) in 1999 iterations, both with positive probabilities [p(x) = 1/5; p(y) = 4/5].
The wave-like behavior is reflected in the overlap of F(X) and F(Y) at the value 0.5. This aligns with principles of quantum mechanics and Bell’s experiments, which show a ½ probability that two observers obtain different results (Nielsen & Chuang, 2000) [21], and a complementary ½ probability of obtaining the same results. Each trial of the experiment involves both events A and A′, with dual probabilities:
(1) A: 0.2 as a particle, 0.5 as a cumulative wave.
(2) A′: 0.8 as a particle, 0.5 as a cumulative wave.
If A occurs with a probability p = 0.2 in each trial, we can calculate how many trials are needed for the cumulative probability of observing A to reach 50%. Solving 1 − (0.8)ⁿ = 0.5 yields approximately four trials. Thus, event A displays probabilistic duality: f(x) = 0.2 as a point value, and F(X) = 0.5 as an accumulated probability. According to the uncertainty principle (Heisenberg, 1927) [22], it is impossible to predict the outcome of each trial, emphasizing the probabilistic nature of the system.
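A one-line check of that waiting time (Python):

```python
import math

p = 0.2
n_exact = math.log(0.5) / math.log(1 - p)    # ~ 3.11 trials for 1-(0.8)**n = 0.5
n_trial = math.ceil(n_exact)                 # 4: first whole trial at or past 1/2
print(n_exact, n_trial, 1 - 0.8 ** n_trial)  # 3.106..., 4, 0.5904
```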
The random variables X and Y are entangled by the design of the experiment. According to the LPE, two complementary events with positive and opposing probabilities inevitably occur together. This intrinsic causal and random interconnection reflects how probabilities organize themselves coherently. The maximum probability superposition is (Ω) = 1, while destructive interference cancels out at the minimum probability potential |ψp| = 0. This is analogous to the median delay: the number of trials needed to reach a significant cumulative probability.
Conceptually, this duality between point probability [f(x), f(y)] and cumulative probability [F(X), F(Y)] expresses the principle of complementarity (Bohr, 1928) [23], where the description of a system depends on the context of observation. The probability system directly reflects the LPE, which states that the maximum accumulation of probability is equivalent to the total energy of the system Ω (Figure 3). The probability of each event is distributed in such a way that the system reaches a stable equilibrium, with the probabilities of A and A′ balancing at 50%. The probability potential |ψp| = 0 marks this equilibrium point (Figure 3). In this model, the phenomenon is governed by causality and bounded randomness (Traversa, 2021) [19]. This behavior can be summarized in the following equations:
f(x) + f(y) = fi/n + (n − fi)/n = 1 (33)
F(X) = F(Y) = ½ (34)
Equation (33) defines that the probabilities of complementary events sum to 1, meaning f(x) + f(y) = 1, where fi and n − fi are the absolute frequencies of two mutually exclusive events in Ω, and f(x) and f(y) are the probability density functions of two complementary events with probabilities greater than zero in Ω. Equation (34) expresses the equilibrium of their cumulative distribution functions at ½.
7.3. LPE and Calculation of the Probability of Light (pL)
Light in the universe is not infinite; we do not have a bright sky (Olbers’ paradox). Therefore, intercepting light is a possible event, but not an absolute one. In other words, the opposing events are the possibility of darkness, which we call event (pd), and the possibility of light, which we call event (pl). It is obvious that both have positive probabilities and therefore fall within the LPE.
7.3.1. Analog Method
It is calculated from the average delay derived from empirical probability. Although mathematics disregards the units of probability, probability in physics has units (it is not dimensionless). For example, when rolling a die, the probability of getting the number two (2) is one-sixth (1/6). However, the probability should be read as 1/6 (s/t), where s (success) counts hits and t (trials) counts trials. Similarly, for an event A with a success probability of 0.01, the probability should be read as p = 0.01 (s/t), meaning one hit is expected in every one hundred (100) experiments. Probability is therefore a measure of the success rate: 0.01 is a lower rate than 0.16 (die), which in turn is lower than 0.5 (coin), the maximum rate being 1 (sure event). The denominator of the probability, that is, the number of experiments needed to achieve a success, will from now on be referred to as the delay. In the example of a die, the average delay (Ad) is six (6), because in terms of mathematical expectation an average of six rolls is needed to achieve a success. This is directly correlated with time, as it takes longer to achieve success with a probability of 1/6 than with a probability of ½; therefore, the higher the probability, the less time needed and the higher the speed.
In the case of light, there is also a delay to achieve success; this delay is likewise linked to time and is one second per 3 × 10⁸ m. The median delay (Md) is the number of experiments required for the cumulative probability of achieving success to be ½. The units of probability are inverse to the units of delay. If the delay is trials (t) to obtain hits (s), the probability will be hits (s) per trial (t). In the case of light, the delay is measured in (m/s); therefore, the probability is dimensioned in (s/m). Kepler in the 17th century affirmed the infinity of the universe, and Halley, a century later, understood that darkness was due to unequal distribution. In the 19th century, Olbers presented the dilemma that if the universe is infinite, it must also contain an infinite number of luminous stars uniformly distributed, which would result in a bright sky without dark areas (Wesson, 1991) [24]. However, observing the sky, it is noticeable that there are regions of light and regions of darkness, related to respective probabilities of light and darkness (Table 3).
Table 3. Experimentation of three physical events and calculation of the respective median delay and average delay for obtaining success from LPE.
| Experiment | Probability | LPE | Median delay | LPE | Average delay |
| --- | --- | --- | --- | --- | --- |
| Success dice (A) | 0.1667 (s/t) | =LOG(½;0.8333) | 3.8018 (t/s) | =POWER(½;1/0.3869) | 6 |
| Failure dice (A′) | 0.8333 (s/t) | =LOG(½;0.1666) | 0.3869 (t/s) | =POWER(½;1/3.8018) | 1.2 |
| Event win (A) | 0.01 (s/t) | =LOG(½;0.99) | 68.968 (t/s) | =POWER(½;1/0.1505) | 100 |
| Event lose (A′) | 0.99 (s/t) | =LOG(½;0.01) | 0.1505 (t/s) | =POWER(½;1/68.968) | 1.01 |
| Detect light (A) | 3.33 × 10⁻⁹ (s/m) | =LOG(½;0.999999997) | 207944150 (m/s) | =POWER(½;1/0.0355) | 3 × 10⁸ |
| Detect darkness (A′) | 0.999999997 (s/m) | =LOG(½;3.33E-09) | 0.0355 (m/s) | =POWER(½;1/207944150) | 1.000000003 |
Although the analogy between rolling a die and detecting light helps illustrate the concept of probability inversely related to “delay”, it is important to recognize that, in reality, these two phenomena are different: while in the die the probability reflects a frequency of events, in light the “delay” is linked to the physical propagation of a wave, and the probability of detection depends on additional factors that go beyond a simple inverse relationship with speed.
In this way, by comparative analogy, if the probability in rolling the die is the inverse of Ad (1/6), then the probability of detecting a light source (pL) is the inverse of AdL = 3 × 10⁸ m/s, which is also a constant, and the average of a constant is the constant itself. The inverse of 3 × 10⁸ is 1/(3 × 10⁸) = 3.33 × 10⁻⁹ s/m, that is, the inverse of its speed (c), according to:
pL = 1/AdL = 1/(3 × 10⁸) = 3.33 × 10⁻⁹ (35)
where:
pL is the probability of a light source (s/m)
AdL is the average delay of light (m/s)
This value represents the probability of detecting light per unit distance traveled, measured in seconds per meter (s/m). This implies that the probability of detecting light is measured based on the time it takes for light to travel a certain distance. In physical terms, light travels at c = 3 × 10⁸ m/s. If we invert this value, we find that each meter traveled by light is equivalent to a “fraction of a second” in probabilistic terms, 3.33 × 10⁻⁹ s/m. The tangible interpretation does not directly indicate the probability of detecting a photon at a given moment but tells us how that probability is distributed spatially. If we consider a distance of 1 meter, the probability of detecting light in that path is 3.33 × 10⁻⁹ s/m. Therefore, the cumulative probability increases with the distance traveled by the light.
7.3.2. Probabilistic Method
It is calculated from the median delay derived from the LPE. It is necessary to obtain MdL, the median delay of light, which is computed from the probability of the event opposite to A; in the case of light, the opposite event is darkness. Substituting, we have: MdL = log₀.₉₉₉₉₉₉₉₉₇(½) = 207944151 (m/s), where 0.999999997 is the probability of darkness, obtained from Table 3. Additionally, it is necessary to apply equation (15) of the probability potential (|ψp|) at the median delay, which is p(A) = 1 − (½)^(1/n), thus obtaining:
pL = 1 − (½)^(1/207944151) = 3.33 × 10⁻⁹ (36)
where:
pL is the probability of a light source (s/m)
MdL is the median delay of light (m/s)
7.3.3. Wave Physical Method
This method is based on the wave nature of light. In classical probability, the likelihood of an event is measured as p = fi/n, where p is the probability, fi is the absolute frequency that describes the number of pulses passing through a given interval, and n is the length of the interval. If we extrapolate this concept to quantum probability, the expression that determines the probability (pL) of finding a wave like light is:
pL = (1/ν)/λ (37)
where:
ν is the frequency, describing the number of wavelengths passing through a point in space over a certain time.
λ is the wavelength.
The factor 1/ν in the numerator of the previous expression is the period (T), which is the time it takes for one wavelength to pass through a given point in space. Therefore, the probability (p) of a wave can also be written as p = T/λ. The units of the probability of a light source are seconds (s) in the numerator and meters (m) in the denominator. To calculate the probability of a 550 nm wave using this method, with ν = c/λ = 5.45 × 10¹⁴ Hz and T = 1/ν = 1.83 × 10⁻¹⁵ s:
pL = T/λ = (1.83 × 10⁻¹⁵ s)/(5.5 × 10⁻⁷ m) = 3.33 × 10⁻⁹ s/m (38)
Although the calculation presents an interesting relationship between frequency, period, and wavelength, the term “probability” should be used with caution, as the obtained expression has units of s/m, suggesting a density of events rather than a classical probability, which should be dimensionless and between 0 and 1.
7.3.4. Corpuscular Physical Method
This method is based on the particle nature of light and the consequent emission of photons. The mathematical expectation of the number of photons (number of successes) is given by E(N) = n · pf, where n is the sample size (number of repetitions of the experiment) and pf is the probability of a photon being emitted. The probability of a photon (pf) is then obtained from the photon counts per unit of distance and per unit of time.
To answer the question “how many photons does a 1-watt light source with a wavelength of 550 nm emit in 1 second?”, we first calculate the energy (E) of a single photon:
E = hc/λ = (6.626 × 10⁻³⁴ × 3 × 10⁸)/(5.5 × 10⁻⁷) = 3.62 × 10⁻¹⁹ J (39)
Next, we measure the number of photons emitted per second for a power of one watt (1 J/s). The number of photons (N) per second is: N = (1 J/s)/(3.62 × 10⁻¹⁹ J/photon) = 2.76 × 10¹⁸ photons/second. Since light travels at a speed c = 3 × 10⁸ m/s, the number of photons (Nm) per meter is:
Nm = (2.76 × 10¹⁸ photons/s)/(3 × 10⁸ m/s) = 9.2 × 10⁹ photons/m (40)
Now, we calculate the probability of a single photon (pf):
pf = Nm/N = (9.2 × 10⁹ photons/m)/(2.76 × 10¹⁸ photons/s) = 3.33 × 10⁻⁹ s/m (41)
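The four estimates of pL obtained so far (equations (35), (36), (37)-(38), and (39)-(41)) can be collected in a single Python sketch; numerically, each of them reduces to the inverse of c:

```python
c = 3e8          # speed of light (m/s)
h = 6.626e-34    # Planck constant (J*s)
lam = 550e-9     # wavelength (m)

p_analog = 1 / c                          # equation (35)
p_median = 1 - 0.5 ** (1 / 207944151)     # equation (36)
nu = c / lam                              # wave frequency
p_wave = (1 / nu) / lam                   # equation (37): T / lambda
E_photon = h * c / lam                    # equation (39)
N_s = 1.0 / E_photon                      # photons per second at 1 W
N_m = N_s / c                             # photons per meter, equation (40)
p_corpuscular = N_m / N_s                 # equation (41)

for v in (p_analog, p_median, p_wave, p_corpuscular):
    print(f"{v:.3e}")                     # all ~ 3.33e-09 s/m
```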
7.3.5. Astronomical Method
According to the Yale Bright Star Catalog (2024) [25], the number of cataloged stars is 9110. Of these, the magnitude values of 9096 stars are reported up to a value close to 6.5 in stellar brightness. The regression model between the accumulated stars by brightness had a determination coefficient of 0.99 (Figure 4). What human vision perceives is not a star distant by hundreds of light-years, but the photons of its emitted light, which arrive with a speed of c = 299,792,458 m/s, a measure of the theoretical delay of light in a vacuum (Hippke, 2018) [26]. Additionally, those photons are perceived on a visual convergence plane that cannot distinguish depth among the stars after that journey. The regression fit of brightness (magnitude) measurements of n = 26750 satellites from Hainaut and Williams (2020) [27], by distance to Earth, had a determination coefficient of 0.99 (Figure 5). On the other hand, according to the scientific literature, the detectable brightness (magnitude) limit for the human eye is six (6) (Kutner, 2003 [28]; Fujiwara and Yamaoka, 2004 [29]). The regression model between the brightness value of six (6) and the distance determined a radius value of 503 km.
The visual convergence is also consistent with Rodríguez and León (2003) [30], who mention that the human eye can observe distant lightning storms at 500 km, the same value found in this study (503 km). The determining variables of the convergence plane are: the intensity and refraction of stellar light signals, the actual distance to Earth, the presence of nebulae and interstellar dust along the travel of the light photons, particular atmospheric conditions (pressure, temperature, density, molecular ionization), and city light pollution (Figure 4). Additionally, there is scientific evidence that the model that crosses both regressions is satisfactory, because for the brightness of six (6) (Yale, 2024) [25] it adjusted a theoretical ocular convergence visualization plane that included 5136 stars (56% of the 9096), a value close to the approximately 5000 stars reported by the International Astronomical Union (IAU) (2024) [31] (Figure 5).
Figure 4. Cumulative probability of visible stars (9096) arranged by magnitude classes.
Figure 5. Regression fit of satellites (n = 26750), magnitude by distance to Earth in meters.
For the radius determined by the model (503 km), the convergence surface is a circular plane with an area of 7.95 × 10¹¹ m². On this convergence surface, the probability of starlight is determined as the number of favorable cases (number of stars = 5136) over the total number of cases (convergence plane surface = 7.95 × 10¹¹ m²):
pL = 5136/(7.95 × 10¹¹) = 6.46 × 10⁻⁹ (s/m²) (42)
Although a dimensional reduction might suggest taking the square root of the surface density to obtain a one-dimensional value, this is not appropriate in this case. Assuming a symmetrical visual field and a uniform distribution, the linear probability density is estimated by dividing the surface density by two:
pL = (6.46 × 10⁻⁹)/2 = 3.23 × 10⁻⁹ (s/m) (43)
The two-dimensional value, 6.46 × 10⁻⁹ s/m², and the one-dimensional value, 3.23 × 10⁻⁹ s/m, are similar to the values found by the analog, probabilistic, and physical models.
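A quick Python check of the surface and linear densities in equations (42) and (43), using the fitted radius of 503 km:

```python
import math

radius_m = 503e3                  # convergence radius fitted by the model (m)
stars = 5136                      # stars inside the magnitude-6 plane
area = math.pi * radius_m ** 2    # ~ 7.95e11 m^2
p_2d = stars / area               # ~ 6.46e-9 per m^2, equation (42)
p_1d = p_2d / 2                   # ~ 3.23e-9, the linear reduction (43)
print(f"{area:.3e} {p_2d:.3e} {p_1d:.3e}")
```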
The results are consistent in order of magnitude with Perrinet (2021) [32], who mentions that the probability of a star (light source) decreases if its magnitude is lower (the more luminous a star, the lower its magnitude). This probability converges to 10⁻⁹.
7.3.6. Cosmological Method
In classical probability, the probability of an event is measured as p = fi/n, where fi is the absolute frequency describing the number of impulses (here, the number of impulses corresponds to the duration of the observation time) and n is the length of the interval. Extrapolating this to the universe, the number of impulses is given by the age of the universe (t), 4.24 × 10¹⁷ s, and the length of the interval is given by the radius (R) of the visible universe, 1.28 × 10²⁶ m. This relationship approximates the inverse of the speed of light: t/R ≈ 1/c, where t is time (in seconds), R is the radius (in meters), and c is the speed of light (in meters per second). The quotient provides the probability of detecting the original photon, emitted approximately 400,000 years after the Big Bang, which occurred around 13.8 billion years ago, according to:
pL = t/R = (4.24 × 10¹⁷ s)/(1.28 × 10²⁶ m) = 3.31 × 10⁻⁹ s/m (44)
Thus, the value found is very close to the quotient between the Planck time (tP) and the Planck length (lP), tP/lP ≈ 3.335 × 10⁻⁹ s/m. It should be noted that this is an approximation, as the universe is not simply “traveling” with light but is expanding. The average light probability among the six applied methods is 3.33 × 10⁻⁹ s/m. With this cosmological constant, it is possible to estimate the number of stars in the visible universe. The mathematical expectation of the number of stars is E(n) = ΔS · pL, where ΔS is the space-time interval and pL is the calculated probability (3.33 × 10⁻⁹ s/m). Moreover, for the two-dimensional calculation, the surface cosmological constant 6.66 × 10⁻⁹ s/m² is applied over the corresponding space-time surface, according to:
E(n) ≈ 8.01 × 10²⁶ stars (45)
The estimated number of stars exceeds the calculations of ESA (2019) [33]. However, those calculations are based on an estimation of the stars in the Milky Way and an extrapolation of the number of galaxies in the universe. Kornreich, cited by Space (2024) [34], suggests that there may be current underestimations and that more detailed observation of the universe will reveal even more galaxies. Additionally, the Webb telescope discovered that the most distant galaxies in the universe are supermassive and could contain proportionally more stars than previously thought (Carniani et al., 2024) [35].
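As a numeric cross-check of equation (44) (Python; the Planck time and length are standard reference values, an assumption not stated in the text):

```python
t_universe = 4.24e17    # age of the universe (s), value used in the text
R_universe = 1.28e26    # radius of the visible universe (m), from the text
t_planck = 5.391e-44    # Planck time (s)
l_planck = 1.616e-35    # Planck length (m)

print(t_universe / R_universe)   # ~ 3.31e-9 s/m, equation (44)
print(t_planck / l_planck)       # ~ 3.34e-9 s/m
print(1 / 3e8)                   # ~ 3.33e-9 s/m, the inverse of c
```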
7.4. The LPE in Photon Emission and Black Holes
The emission of photons is governed by discrete processes, such as electron transitions between energy levels, and is quantized in terms of discrete energy packets associated with each transition. In the LPE model, these emissions are interpreted as a function of the number of possible iterations within a given time frame, which links them directly to the LPE demonstrated in this work. In particular, the number of emitted photons is determined by the number of possible iterations within a given time. The inverse of the frequency is the period, which represents the time required to complete one cycle of the wave. Within the LPE framework, this period can be interpreted as the delay between quantum iterations, namely the interval between two consecutive events, providing a probabilistic view of the transition time between quantum states. In quantum oscillations, it represents the time during which a photon is emitted, which is associated with the probability of the quantum process and the LPE.
In probability, the mathematical expectation is expressed as E(X) = n · p, where:
(1) (n) is the number of repetitions of the experiment,
(2) (p) is the probability of success.
Applying this concept to photon emission, and considering that the maximum emission of photons is determined by quantization, we have N = n · pL · λ, where:
(1) N is the mathematical expectation of the number of photons emitted per second.
(2) n represents the total number of possible iterations.
(3) pL is the probability factor for detecting an emitted photon, calculated based on the measured value of 3.33 × 10⁻⁹ s/m.
(4) λ is the wavelength of the emitted radiation (m).
The number of possible quantum events n for a power of one watt is determined by the quantization of energy levels, and is given by n = 1/h, where h is Planck’s constant (6.626 × 10⁻³⁴ J·s), representing the minimum unit of action. By taking the inverse of Planck’s constant, we obtain a quantity that indicates the number of possible quantum events per unit of energy and time. Therefore, n is not a pure, dimensionless number: it is expressed in units of J⁻¹·s⁻¹, reflecting the relationship between energy, time, and quantum events. This value represents the maximum possible number of emitted photons, although in practice this cannot occur physically, as it would mean the system generates photons with virtually zero energy due to the quantum nature of h. Thus, it can be interpreted as a measure of the fundamental iterations, where h represents the system’s minimum quantum of action.
The Planck constant can be understood as the minimum action ratio: the time required for a single quantum event, such as the emission of a photon. If N is the number of photons emitted in a time T, then the emitted energy is expressed as E = N·h·ν, so that h = E/(N·ν), the action associated with each individual quantum event. This shows that h is not a mysterious value but rather a fundamental relationship between time and the number of quantum events.
n = 1/h = 1/(6.626 × 10⁻³⁴) = 1.509 × 10³³ (J·s)⁻¹ (46)
Since the emission of photons in an electromagnetic wave is proportional to the wavelength λ, we obtain N = (1/h) · pL · λ. According to the LPE, the probability of detecting an emitted photon (pL) is estimated by the factor 1 − (½)^(1/207944151), which corresponds to the median delay, where the cumulative probability reaches ½. Thus, the expression for the number of photons emitted per second is:
N = (1/h) · [1 − (½)^(1/207944151)] · λ
By substituting the central factor with its value (3.33 × 10⁻⁹ s/m), we obtain:
N = 1.509 × 10³³ × 3.33 × 10⁻⁹ × λ
Multiplying the first two factors, we obtain the final expression for N, the mathematical expectation of iterations measured as the number of photons emitted per second for a constant power of 1 watt:
N = 5.0283 × 10²⁴ × λ (47)
For other power levels P (in watts), the applicable expression is N = 5.0283 × 10²⁴ × λ × P. Equation (47) is applicable in the cases considered, and the results are verified in Table 4 for a power of 1 watt, which shows examples of photon emission for different wavelengths. In this context, (J·m)⁻¹ is a measure that indicates the number of photons per unit of energy and length, meaning the rate or efficiency of photon emission.
Table 4. Examples of photon emission for different wavelengths.
| Wave types | Wavelength (m) | N (photons/s) |
| --- | --- | --- |
| X-rays | 1 × 10⁻⁹ | 5.03 × 10¹⁵ |
| Visible light | 500 × 10⁻⁹ | 2.52 × 10¹⁸ |
| Microwaves | 1 × 10⁻³ | 5.03 × 10²¹ |
These values verify the validity of the obtained expression and show how the LPE estimates photon emission as a probability function that quantifies the number of possible iterations.
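The figures in Table 4 follow directly from equation (47); a short Python sketch:

```python
h = 6.626e-34                 # Planck constant (J*s)
p_L = 3.33e-9                 # photon-detection probability (s/m)
coef = (1 / h) * p_L          # ~ 5.03e24, the factor in equation (47)

for name, lam in (("X-rays", 1e-9),
                  ("Visible light", 500e-9),
                  ("Microwaves", 1e-3)):
    print(name, coef * lam)   # matches Table 4 to rounding
```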
The emitted energy is given by the product of Planck’s constant, the wave frequency, and the number N of photons produced: E = h·ν·N. By substituting the value of N obtained from equation (46), N = (1/h)·pL·λ, and the value of ν solved from c = λ·ν, we obtain:
E = h·(c/λ)·(1/h)·pL·λ = c·pL
Simplifying:
E = c·pL = (3 × 10⁸)(3.33 × 10⁻⁹) ≈ 1 = Ω (48)
Energy was normalized in Ω, without dimensions, to reflect its analogy with probability.
Example 4. For a wavelength of λ = 550 nm, the associated frequency is ν = 5.45 × 10¹⁴ Hz. For a power of 1 watt, the number of photons per second (N) is calculated from equation (47) as N = 5.0283 × 10²⁴ × λ = 5.0283 × 10²⁴ × 5.5 × 10⁻⁷ = 2.77 × 10¹⁸ photons/s. Then, the waiting time for the emission of a photon, Δt, is Δt = 1/N = 1/(2.77 × 10¹⁸) = 3.61 × 10⁻¹⁹ s. This is the waiting time, or delay, required to have the probability (p) of photon emission. Furthermore, the energy of the photon at this wavelength is E = h·ν = 6.626 × 10⁻³⁴ × 5.45 × 10¹⁴ = 3.61 × 10⁻¹⁹ J.
As the energy is normalized to be dimensionless, it becomes evident that the energy E = 3.61 × 10⁻¹⁹ is numerically equal to the delay Δt = 3.61 × 10⁻¹⁹. This confirms the LPE: the waiting time and the energy are equivalent within the normalized framework, meaning that accumulating probability is equivalent to accumulating energy. It reflects a fundamental symmetry in the LPE model: both the accumulation of energy and the accumulation of probability follow the same delay structure. Under this normalization (Ω), energy and delay become numerically indistinguishable.
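The coincidence in Example 4 can be reproduced directly (Python, constants as in the text):

```python
h, c = 6.626e-34, 3e8
lam = 550e-9
nu = c / lam                    # ~ 5.45e14 Hz
N = (1 / h) * 3.33e-9 * lam     # equation (47): ~ 2.77e18 photons/s at 1 W
dt = 1 / N                      # waiting time per photon: ~ 3.61e-19 s
E = h * nu                      # photon energy: ~ 3.61e-19 J
print(dt, E)                    # numerically equal, as the example states
```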
The LPE models the emission event (A) with a probability p = ½ and the absorption event (A′) with q = ½, reflecting their corpuscular nature. These events sum to 1, and their functions f(x) and f(y) collapse upon measurement, yielding (1, 0). In contrast, F(X) and F(Y) are wave-like and entangled at the value ½. According to case 4.2.3, the maximum accumulable probability is 0.788605; at equilibrium, this yields 0.3943025 for each function, so that F(X) = F(Y) = 0.3943025. Finally, |ψp| = 0.788605 − 0.21140 = 0.57721 (the Euler-Mascheroni constant, γ). Therefore, 0.57721 is the probability that would represent the minimum energy required for the system’s state change. At the equilibrium point itself the probability potential is zero, |ψp| = 0, and the LPE explains photon emission from a probabilistic perspective, where the accumulation of probability is equivalent to the accumulation of energy.
With normalized, dimensionless energy in the range [0, 1], probability can be defined as the ratio between quantum energy and relativistic energy:
p = E(quantum)/E(relativistic) (49)
This expression applies to photon spheres near a black hole, where the photon’s frequency is affected by gravitational redshift, altering its energy. Although the photon has no mass, due to spacetime curvature it exists in an extreme gravitational environment.
Example 5. A red visible light photon with frequency ν = 6 × 10¹⁴ Hz has a quantum energy given by E = h·ν = 6.626 × 10⁻³⁴ × 6 × 10¹⁴ = 3.98 × 10⁻¹⁹ J. Near the photon orbit, it is influenced by the gravitational potential, and its relativistic energy is:
E(relativistic) = E/(1 − Rs/r)^(1/2) (50)
where Rs = 2GM/c² is the Schwarzschild radius of the black hole, and r is the radius of the photon sphere (1.5 × Rs).
If the black hole has a solar mass M = 2 × 10³⁰ kg, then Rs = (2 × 6.67 × 10⁻¹¹ × 2 × 10³⁰)/(3 × 10⁸)² = 2.95 × 10³ m; thus, r = 1.5 × 2.95 × 10³ = 4.425 × 10³ m. The gravitational factor is (1 − 2.95 × 10³/4.425 × 10³)^(1/2) = 0.577. Therefore, the relativistic energy of the photon is E(relativistic) = 3.98 × 10⁻¹⁹/0.577 ≈ 6.90 × 10⁻¹⁹ J. Finally, the relative probability is:
p = E/E(relativistic) = 3.98 × 10⁻¹⁹/6.90 × 10⁻¹⁹ ≈ 0.577
The value 0.577, which is γ, does not depend on the mass of the black hole. It arises from a particular geometric configuration: when the radius r is 1.5 times the Schwarzschild radius Rs, that is, at the photon sphere orbit. Given that Rs/r = 2/3, the gravitational factor in the photon’s relativistic energy is (1 − Rs/r)^(1/2) = (1 − 2/3)^(1/2) = (1/3)^(1/2) ≈ 0.577. Therefore, any black hole (regardless of its mass) will have this same correction factor. These values confirm the validity of the LPE (cases 4.2.2 and 4.2.3) and equilibrium at the maximum tensor value, which corresponds to the Euler-Mascheroni constant. Near a photon sphere, quantum effects remain significant even under extreme gravitational conditions. The fundamental symmetry proposed by the LPE connects quantum and relativistic laws. The energy difference between photons does not come from their corpuscular nature, which maintains the same speed in their travel, but from their wave structure: shorter wavelength implies higher energy, greater probability, and shorter delay, while greater redshift decreases energy and increases probability accumulation before emission, with centripetal-centrifugal equilibrium at γ ≈ 0.577.
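A Python sketch of Example 5 (constants as used in the text), showing that the photon-sphere factor is the mass-independent √(1/3):

```python
import math

G, c, h = 6.67e-11, 3e8, 6.626e-34
M = 2e30                           # one solar mass (kg)
Rs = 2 * G * M / c**2              # Schwarzschild radius (m)
r = 1.5 * Rs                       # photon sphere radius
factor = math.sqrt(1 - Rs / r)     # sqrt(1/3) ~ 0.577 for any mass

E = h * 6e14                       # quantum energy of the red photon (J)
E_rel = E / factor                 # relativistic energy ~ 6.9e-19 J
print(factor, E / E_rel)           # both ~ 0.577
```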
The LPE demonstrates a fundamental equivalence: under the normalization Ω, the energy potential |ψE| equals the probability potential |ψp|:
|ψE| = |ψp| (51)
8. Conclusions
The LPE establishes a theoretical framework that reconciles chance and determinism, showing that both coexist and self-limit in equilibrium, where chance gives way to determinism at the cumulative probability point of ½. These are not opposing forces but complementary aspects of a unified reality. Probability, understood as relative frequency, behaves like a wave that transmits information and exhibits quantum-like properties: complementarity, superposition, uncertainty, collapse, duality, interference, and entanglement. Complementarity is expressed in the fact that mutually exclusive events A and A′ are both necessary to describe probabilistic reality. Superposition occurs because both states coexist in the sample space before observation. Uncertainty implies that the exact moment of occurrence cannot be predicted, though the LPE ensures the event will inevitably happen. Collapse occurs upon measurement: only one state is detected, but superposition remains as probabilistic equilibrium.
There is a duality between a priori probability (f(x), f(y)) and accumulated probability (F(X), F(Y)), associated with particle-wave duality. Interference between density functions f(x), f(y) is destructive, while cumulative functions F(X), F(Y) remain balanced at ½. Probabilistic entanglement means that functions are not independent but self-adjust in dynamic equilibrium. Relative frequency, as an empirical measure of probability, converges to itself. In this model, the maximum value of the cumulative function is ½, signaling the point of occurrence. The total value of 1 results from the sum of the complementary maxima. The LPE links probability with physical principles: cosmic fluctuations and black holes reflect structured randomness.
Finally, the LPE proposes that probability is inversely related to delay (the time until an event occurs), and that its accumulation is equivalent to energy accumulation. This unifies probabilistic behavior with physical principles. At equilibrium, the probability of photon detection is 3.33 × 10−9 s/m, and the estimated number of stars in the observable universe is 8.01 × 1026, suggesting a connection between quantum-scale events and cosmic structures.
Conflicts of Interest
The author declares no conflicts of interest.