Classical Correlations vs Quantum Correlations—Similarities, Differences, Opportunities
1. Classical Correlation
What are classical correlations? While this may seem like an easy question, it is not, since different correlation models and interpretations exist, depending on the type of science and the objective of the study. In classical physics, for example, correlation is often calculated as the product of two functions at different values and then averaged over the domain of the function. In finance, copula correlations, which reduce n marginal distributions to a single n-dimensional distribution, are applied in risk management. In economics, stochastic correlations are often employed to model uncertain economic variables such as inflation, unemployment or the GDP (Gross Domestic Product) growth rate.
Figure 1 gives an overview of classical correlation models.
Figure 1. Classical correlation models.
From Figure 1, different classical correlation interpretations can be derived. First, in statistics, the term correlation is often narrowly defined, only referring to the linear Pearson correlation model, derived by Karl Pearson in 1900, as in Cherubini et al. (2004) [1], Nelsen (2006) [2], or Gregory (2010) [3]. Second, the term classical correlation can include all correlation concepts in Figure 1, in particular stochastic correlations, which we will contrast in section 5 with the fundamentally stochastic nature of quantum mechanics. In addition, in non-academic environments, the term correlation is often used non-quantitatively for the co-movement of two or more phenomena in time.
1.1. The Karl Pearson Correlation Model
The Karl Pearson correlation model, derived in 1900, is by far the most applied correlation model in classical statistical analysis and research. Therefore, we will contrast it with quantum correlation. Let’s first recall some of the key properties of the Pearson correlation model.
1.1.1. Statistical Independence and Pearson Correlation
Statistical Independence and Pearson Uncorrelatedness are sometimes used synonymously. However, applying standard definitions, this is not correct. Let’s contrast the two.
In statistics, two events are considered dependent if the occurrence of one event affects the probability of occurrence of another. Conversely, two events are considered independent if the occurrence of one event does not affect the probability of another event. Formally, two events A and B are independent if the joint probability equals the product of the individual probabilities:
P(A ∩ B) = P(A) P(B) (1)
where "∩" stands for "and", i.e., the joint occurrence of A and B.
Naturally, we can find the property of Equation (1) in reality: If we toss a fair coin once, the probability of “Heads” is 0.5. Tossing the coin once more, the probability of “Heads” is again 0.5. Since the outcomes of the two tosses are independent, the probability of getting “Heads” twice in two tosses is simply the multiplication of the probability of the outcomes, i.e., 0.5 × 0.5 = 0.25. Or let’s assume the default probability of company A is 0.1 and the default probability of company B is 0.2. If the default probabilities are independent, the joint default probability of company A and B is 0.02.
Expressing statistical independence with conditional probabilities, we solve Equation (1) for P(A) and get
P(A) = P(A ∩ B)/P(B).
Following the Kolmogorov definition of conditional probability, P(A|B) = P(A ∩ B)/P(B), we derive
P(A) = P(A|B) (2)
where P(A|B) is the conditional probability of A with respect to B. In Equation (2) the probability of A, P(A), is not affected by event B, since P(A) = P(A|B), hence the event A is statistically independent from B. From Equation (1) we can perform the same exercise for the probability of event B, P(B), which is independent from event A.
1.1.2. Does Statistical Independence Imply Pearson Uncorrelatedness?
Let's discuss whether statistical independence, as defined in Equation (1) or (2), implies uncorrelatedness in the Pearson correlation model.
In the Pearson model, the covariance measures how two random variables "co-vary" together. More mathematically, correlation in the Pearson model measures the linear strength and direction of the association between two random variables. For example, the height and weight of human beings are positively correlated, since shorter humans are on average lighter and taller humans on average heavier. Formally, the Pearson covariance is
Cov(X, Y) = E(XY) − E(X)E(Y) (3)
where E(X) and E(Y) are the expected values of X and Y respectively, also known as the arithmetic means. E(XY) is the expected value of the product of the random variables X and Y. The covariance in Equation (3) is not easy to interpret. Therefore, a normalized covariance, the correlation coefficient, is often used. The Pearson correlation coefficient ρ(X, Y) is defined as
ρ(X, Y) = Cov(X, Y) / (σ(X) σ(Y)) (4)
where σ(X) and σ(Y) are the standard deviations of X and Y respectively. While the covariance takes values between −∞ and +∞, the correlation coefficient conveniently takes values between −1 and +1.
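The normalization in Equation (4) can be checked numerically; a minimal sketch (the linear relation y = 2x + noise is illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
x = rng.normal(size=100_000)
y = 2.0 * x + rng.normal(size=100_000)   # linearly related, plus noise

# Equation (3): Cov(X, Y) = E(XY) - E(X)E(Y)
cov_xy = np.mean(x * y) - np.mean(x) * np.mean(y)

# Equation (4): normalize the covariance by the standard deviations
rho = cov_xy / (np.std(x) * np.std(y))
print(round(rho, 2))   # a value in [-1, +1], here close to 2/sqrt(5)

# np.corrcoef computes the same Pearson coefficient
assert abs(rho - np.corrcoef(x, y)[0, 1]) < 1e-8
```

While the raw covariance depends on the scale of X and Y, the coefficient ρ does not, which is why it is the preferred measure in practice.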
From Equation (1) above we find that the analogous condition for statistical independence of two random variables is E(XY) = E(X)E(Y). From Equation (3) we see that E(XY) = E(X)E(Y) is equivalent to a covariance of zero. Therefore, if two variables are statistically independent, their Pearson covariance is zero.
1.1.3. Does Uncorrelatedness Imply Statistical Independence?
In the previous section, we showed that statistical independence implies Pearson uncorrelatedness. Is the reverse also true, i.e., does uncorrelatedness in the Pearson model imply statistical independence? The answer is no. This is most easily shown by a counterexample:
For the parabola Y = X², Y is perfectly dependent on X. However, the Pearson correlation of the function Y = X², derived by Equation (3) or (4), is zero. Starting with Equation (3) and inputting Y = X², we derive
Cov(X, X²) = E(X³) − E(X)E(X²).
Let X be a uniform random variable bounded in [−1, +1]. Then the means E(X) and E(X³) are zero and we have Cov(X, X²) = 0 − 0 × E(X²) = 0.
In conclusion, the Pearson covariance or correlation coefficient can give values of zero, i.e., tell us the variables are uncorrelated, even if the variables are statistically dependent! This is because the Pearson correlation concept only includes the first two moments of a distribution and therefore only measures linear dependence. For more limitations of the Pearson correlation model, see Meissner (2019) [4].
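The parabola counterexample above can be verified numerically; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
x = rng.uniform(-1.0, 1.0, 1_000_000)   # X uniform on [-1, +1]
y = x ** 2                              # Y is fully determined by X

# Equation (3): Cov(X, X^2) = E(X^3) - E(X) E(X^2)
cov = np.mean(x * y) - np.mean(x) * np.mean(y)
rho = cov / (np.std(x) * np.std(y))
print(round(abs(rho), 2))               # ~ 0: uncorrelated, yet perfectly dependent
```

The sample correlation is numerically indistinguishable from zero even though Y is a deterministic function of X, illustrating that Pearson correlation only captures linear dependence.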
2. Quantum Entanglement (Also Termed Quantum Correlation)
In 2022 the physics Nobel Prize was awarded to Alain Aspect (1982) [5], John Clauser (1969, 1972) [6] [7] and Anton Zeilinger (1997, 1998, 2000, 2003) [8]-[11] for their work on Quantum Entanglement. Other significant work on Quantum Entanglement includes Bei (2003) [12], Courtemanche and Crépeau (2021) [13], and Metwally and Ebrahim (2024) [14].
Quantum entanglement is the property that two paired particles remain connected even if separated by large distances: measuring one particle instantaneously determines the state of the other. Once measured, the quantum entanglement disappears.
More specifically, two paired opposite spin electrons are produced, for example by nuclear fission. The electrons drift apart. If one electron’s spin is measured, the other particle’s spin will be instantaneously opposite. This property was famously called “Spooky action at a distance” by Einstein. Quantum entanglement is displayed graphically in Figure 2.
Figure 2. Example of Quantum Entanglement: In a physical process in t0, two paired particles e1 and e2 are produced, which drift apart. If particle e1 is measured in t1 with spin-up, instantaneously, the spin of particle e2 will be spin-down, and vice versa.
While there is no dispute about the property of quantum entanglement, there is disagreement about how the process of quantum entanglement exactly works. In 1935, Albert Einstein, together with Boris Podolsky and Nathan Rosen, proposed that there must be a hidden variable, which determines the spin states, and that quantum mechanics is therefore incomplete. However, this theory was later falsified in several experiments, see Freedman and Clauser (1972) [7], Aspect, Dalibard and Roger (1982) [5], or Pan, Bouwmeester, Daniell, Weinfurter and Zeilinger (2000) [10].
3. Contrasting Pearson Product Correlation and Quantum Product Correlation
In the following we will contrast the classical Pearson product correlation and the quantum product correlation on the basis of the CHSH inequality [6], a testable formulation of Bell's theorem, which shows that quantum entanglement cannot be explained by the "hidden variables" which Einstein, Podolsky and Rosen had suggested. The experimental setup is shown in Figure 3.
Figure 3. CHSH model. The source S produces two photons, which drift to sides A and B. The photons encounter the polarizers at certain angles a and b, which are set by the experimenter. The photons pass the polarizers with state D+ and are reflected with state D−. The coincidence monitor CM collects the outcomes.
The Bell inequality typically has the form
|S| ≤ 2 (5)
where
S = E(a, b) − E(a, b′) + E(a′, b) + E(a′, b′) (6)
and E is the expectation value of the particle-pair quantum correlation for combinations of the polarization angles a, a′, b and b′, defined as the product of the outcomes A(a) and B(b). A second Bell test experiment by Alain Aspect [15] in 1982 and several subsequent tests have all violated the Bell inequality, falsifying the hidden variable theory.
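The violation can be reproduced numerically. Assuming the standard quantum expectation value for a spin-singlet pair, E(a, b) = −cos(a − b), and one conventional choice of measurement angles (both are textbook assumptions, not taken from the experiment described here), the CHSH combination S exceeds the classical bound of 2:

```python
import numpy as np

def E(a, b):
    # Quantum correlation of a spin-singlet pair measured at angles a, b
    return -np.cos(a - b)

# A standard choice of measurement angles (in radians)
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(round(abs(S), 3))  # 2.828 = 2*sqrt(2) > 2, violating the Bell bound
```

The value 2√2 ≈ 2.828 is the maximal quantum violation (the Tsirelson bound), while any local hidden variable model satisfies |S| ≤ 2.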
3.1. Applying the Classical Pearson Correlation to the CHSH Experiment
In the following, we will apply the law of conservation of angular momentum, which implies that entangled particles have opposite spin if measured in the same direction. Since the angles a and b are set independently of each other, we can apply Equation (1) and measure the joint probability as the product of the individual probabilities.
If the angles a and b are aligned (i.e., identical), the Pearson correlation would be +1; however, due to angular momentum it is −1. If the angles are orthogonal, the Pearson correlation is 0. If the angles are anti-aligned (i.e., differ by π if measured in radians), the Pearson correlation would be −1; however, due to angular momentum it is +1. If the angles differ by 2π, we are back at the beginning with aligned angles. The Pearson correlation function PC is algebraically
PC(θ) = 2θ/π − 1 for 0 ≤ θ ≤ π, PC(θ) = 3 − 2θ/π for π < θ ≤ 2π (7)
where θ is the difference in the angles a and b. Equation (7) is displayed in Figure 4 as the red function.
Figure 4. Pearson correlation vs quantum correlation of the CHSH model. The abscissa displays the difference in the angles, in radians.
3.2. Quantum Correlation of the CHSH Experiment
The two photons which are emitted from the source S are each in a superposition of the spin-up and spin-down states; together the pair forms the entangled singlet state
|ψ⟩ = (1/√2)(|↑↓⟩ − |↓↑⟩) (8)
We are again interested in the product correlation of the outcomes A and B, E(a, b). The outcomes of A and B are discrete; each can take a value of +1 or −1. Hence, we have
E(a, b) = Pa,b(+1, +1) + Pa,b(−1, −1) − Pa,b(+1, −1) − Pa,b(−1, +1) (9)
where Pa,b(A, B) is the probability of the outcomes A and B, given the angles a and b.
The researchers measure the observables a∙σ and b∙σ, i.e., linear combinations of the Pauli matrices σx, σy, σz weighted by the unit direction vectors a and b.
Then the product correlation E(a, b) is the expectation value of the observables in the state |ψ⟩,
E(a, b) = ⟨ψ| (a∙σ) ⊗ (b∙σ) |ψ⟩.
Applying angular momentum conservation and the identity (a∙σ)(b∙σ) = (a∙b) I + i (a × b)∙σ, we derive
E(a, b) = −a∙b = −cos θ (10)
where θ is again the difference between the angles a and b. Equation (10) is displayed in Figure 4 as the blue function.
Conclusion:
For trivial angle combinations, i.e., when the angles are aligned (0, 2π), perpendicular (0.5π, 1.5π) or anti-aligned (π), Pearson correlation and quantum correlation are identical. However, for all other angle combinations, quantum correlation (or quantum entanglement) is stronger than classical Pearson correlation. This is true for positive correlation and negative correlation. Hence the classical Pearson correlation model, which is applied in most classical statistical research, cannot explain the strong quantum correlation.
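The gap between the two models can be checked numerically; a sketch assuming the linear classical function PC(θ) = 2θ/π − 1 and the quantum function QC(θ) = −cos θ on [0, π]:

```python
import numpy as np

theta = np.linspace(0.0, np.pi, 1001)   # angle difference in radians

pc = 2 * theta / np.pi - 1              # classical Pearson correlation
qc = -np.cos(theta)                     # quantum correlation

# The two agree at the trivial angles 0, pi/2 and pi ...
for t in (0.0, np.pi / 2, np.pi):
    assert abs((2 * t / np.pi - 1) - (-np.cos(t))) < 1e-12

# ... and everywhere else the quantum correlation is stronger in magnitude
assert np.all(np.abs(qc) >= np.abs(pc) - 1e-12)
print("quantum correlation dominates on [0, pi]")
```

The inequality |−cos θ| ≥ |2θ/π − 1| follows from the concavity of the cosine on each quarter period: the cosine curve lies above the straight chord connecting its endpoints.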
[Classical] Correlations always increase in distressed times.
John Hull
4. Contrasting Classical Correlation Matrices with Quantum Density Matrices
4.1. Classical Correlation Matrices
In this section, we will analyze classical Pearson correlation matrices applied in many sciences, in particular in economics and finance, and relate them to quantum density matrices.
Table 1 shows a monthly stock price return correlation matrix of the 30 Dow Jones Industrial stocks. The matrix is symmetrical, i.e., ρij = ρji, so we can concentrate on the upper right or lower left triangle. The matrix is derived with the steps: a) Store the end-of-day prices S at day t of each Dow stock for a certain month, b) derive the stock return Sr for each stock by applying
Table 1. Classical Dow return correlation matrix of March 2020.
| | 3M | AAPL | AXP | IBM | BA | CAT | CVX | CSCO | KO | DD | XOM | GE | GS | HD | INTC | JNJ | JPM | MCD | MRK | MSFT | NKE | PFE | PG | TRV | UNH | UTX | VZ | V | WMT | DIS |
| 3M | 1.00 | 0.91 | 0.78 | 0.89 | 0.55 | 0.85 | 0.62 | 0.90 | 0.77 | 0.74 | 0.77 | 0.76 | 0.80 | 0.73 | 0.66 | 0.84 | 0.75 | 0.79 | 0.74 | 0.83 | 0.83 | 0.80 | 0.81 | 0.53 | 0.78 | 0.64 | 0.73 | 0.86 | 0.67 | 0.85 |
| AAPL | | 1.00 | 0.86 | 0.93 | 0.71 | 0.86 | 0.75 | 0.93 | 0.82 | 0.85 | 0.84 | 0.88 | 0.92 | 0.89 | 0.89 | 0.82 | 0.92 | 0.82 | 0.81 | 0.97 | 0.82 | 0.87 | 0.82 | 0.73 | 0.87 | 0.71 | 0.80 | 0.93 | 0.73 | 0.88 |
| AXP | | | 1.00 | 0.80 | 0.79 | 0.76 | 0.88 | 0.77 | 0.71 | 0.88 | 0.87 | 0.87 | 0.95 | 0.83 | 0.73 | 0.70 | 0.93 | 0.88 | 0.71 | 0.85 | 0.85 | 0.68 | 0.68 | 0.74 | 0.88 | 0.87 | 0.61 | 0.95 | 0.47 | 0.88 |
| IBM | | | | 1.00 | 0.72 | 0.87 | 0.71 | 0.89 | 0.88 | 0.82 | 0.87 | 0.88 | 0.84 | 0.84 | 0.84 | 0.85 | 0.84 | 0.75 | 0.83 | 0.90 | 0.86 | 0.88 | 0.84 | 0.71 | 0.83 | 0.70 | 0.81 | 0.86 | 0.73 | 0.84 |
| BA | | | | | 1.00 | 0.63 | 0.74 | 0.61 | 0.69 | 0.69 | 0.75 | 0.80 | 0.77 | 0.79 | 0.69 | 0.49 | 0.75 | 0.70 | 0.53 | 0.68 | 0.75 | 0.60 | 0.47 | 0.66 | 0.75 | 0.80 | 0.48 | 0.75 | 0.29 | 0.74 |
| CAT | | | | | | 1.00 | 0.70 | 0.82 | 0.78 | 0.82 | 0.88 | 0.85 | 0.82 | 0.70 | 0.71 | 0.78 | 0.83 | 0.71 | 0.73 | 0.82 | 0.78 | 0.78 | 0.73 | 0.58 | 0.74 | 0.70 | 0.71 | 0.83 | 0.59 | 0.84 |
| CVX | | | | | | | 1.00 | 0.59 | 0.64 | 0.90 | 0.87 | 0.85 | 0.84 | 0.78 | 0.66 | 0.60 | 0.88 | 0.84 | 0.75 | 0.76 | 0.75 | 0.54 | 0.48 | 0.79 | 0.84 | 0.89 | 0.49 | 0.86 | 0.26 | 0.74 |
| CSCO | | | | | | | | 1.00 | 0.71 | 0.73 | 0.71 | 0.80 | 0.84 | 0.78 | 0.82 | 0.82 | 0.81 | 0.76 | 0.76 | 0.91 | 0.79 | 0.88 | 0.85 | 0.61 | 0.75 | 0.63 | 0.82 | 0.85 | 0.78 | 0.85 |
| KO | | | | | | | | | 1.00 | 0.73 | 0.78 | 0.80 | 0.73 | 0.73 | 0.72 | 0.83 | 0.74 | 0.59 | 0.75 | 0.77 | 0.79 | 0.85 | 0.79 | 0.59 | 0.82 | 0.68 | 0.78 | 0.76 | 0.59 | 0.73 |
| DD | | | | | | | | | | 1.00 | 0.90 | 0.92 | 0.88 | 0.72 | 0.71 | 0.74 | 0.93 | 0.78 | 0.78 | 0.82 | 0.79 | 0.69 | 0.68 | 0.70 | 0.80 | 0.82 | 0.62 | 0.89 | 0.43 | 0.79 |
| XOM | | | | | | | | | | | 1.00 | 0.91 | 0.88 | 0.81 | 0.70 | 0.67 | 0.88 | 0.83 | 0.74 | 0.83 | 0.88 | 0.65 | 0.61 | 0.76 | 0.86 | 0.84 | 0.62 | 0.89 | 0.45 | 0.83 |
| GE | | | | | | | | | | | | 1.00 | 0.86 | 0.81 | 0.79 | 0.75 | 0.93 | 0.83 | 0.81 | 0.88 | 0.89 | 0.77 | 0.68 | 0.76 | 0.84 | 0.85 | 0.69 | 0.93 | 0.50 | 0.79 |
| GS | | | | | | | | | | | | | 1.00 | 0.89 | 0.81 | 0.69 | 0.95 | 0.85 | 0.71 | 0.91 | 0.81 | 0.72 | 0.73 | 0.77 | 0.85 | 0.81 | 0.69 | 0.92 | 0.56 | 0.93 |
| HD | | | | | | | | | | | | | | 1.00 | 0.87 | 0.63 | 0.85 | 0.83 | 0.72 | 0.90 | 0.78 | 0.72 | 0.66 | 0.87 | 0.86 | 0.74 | 0.67 | 0.85 | 0.59 | 0.80 |
| INTC | | | | | | | | | | | | | | | 1.00 | 0.69 | 0.85 | 0.62 | 0.75 | 0.94 | 0.63 | 0.84 | 0.75 | 0.80 | 0.78 | 0.56 | 0.80 | 0.76 | 0.78 | 0.71 |
| JNJ | | | | | | | | | | | | | | | | 1.00 | 0.74 | 0.61 | 0.87 | 0.81 | 0.72 | 0.93 | 0.91 | 0.56 | 0.76 | 0.64 | 0.85 | 0.80 | 0.74 | 0.66 |
| JPM | | | | | | | | | | | | | | | | | 1.00 | 0.81 | 0.79 | 0.93 | 0.80 | 0.76 | 0.72 | 0.80 | 0.86 | 0.83 | 0.71 | 0.94 | 0.55 | 0.84 |
| MCD | | | | | | | | | | | | | | | | | | 1.00 | 0.74 | 0.80 | 0.88 | 0.57 | 0.55 | 0.77 | 0.82 | 0.86 | 0.53 | 0.92 | 0.40 | 0.81 |
| MRK | | | | | | | | | | | | | | | | | | | 1.00 | 0.85 | 0.76 | 0.83 | 0.75 | 0.80 | 0.84 | 0.73 | 0.86 | 0.82 | 0.65 | 0.62 |
| MSFT | | | | | | | | | | | | | | | | | | | | 1.00 | 0.79 | 0.86 | 0.80 | 0.82 | 0.87 | 0.71 | 0.84 | 0.91 | 0.77 | 0.83 |
| NKE | | | | | | | | | | | | | | | | | | | | | 1.00 | 0.70 | 0.67 | 0.70 | 0.86 | 0.85 | 0.68 | 0.92 | 0.47 | 0.80 |
| PFE | | | | | | | | | | | | | | | | | | | | | | 1.00 | 0.92 | 0.62 | 0.76 | 0.58 | 0.91 | 0.77 | 0.82 | 0.67 |
| PG | | | | | | | | | | | | | | | | | | | | | | | 1.00 | 0.57 | 0.69 | 0.51 | 0.87 | 0.73 | 0.84 | 0.66 |
| TRV | | | | | | | | | | | | | | | | | | | | | | | | 1.00 | 0.83 | 0.75 | 0.67 | 0.78 | 0.55 | 0.56 |
| UNH | | | | | | | | | | | | | | | | | | | | | | | | | 1.00 | 0.84 | 0.77 | 0.92 | 0.57 | 0.76 |
| UTX | | | | | | | | | | | | | | | | | | | | | | | | | | 1.00 | 0.53 | 0.89 | 0.24 | 0.74 |
| VZ | | | | | | | | | | | | | | | | | | | | | | | | | | | 1.00 | 0.71 | 0.83 | 0.61 |
| V | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1.00 | 0.55 | 0.86 |
| WMT | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1.00 | 0.51 |
| DIS | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1.00 |
Sr,t = ln(St / St−1) (11)
c) after eliminating the 1's on the diagonal, derive the pair-wise Pearson correlation between all returns by applying Equation (4). The result is shown in the classical correlation matrix in Table 1.
The average correlation between the Dow stock returns of March 2020, derived by averaging the n(n − 1)/2 = 435 pairwise correlations ρij, i ≠ j, of Table 1,
was 76.53%. This monthly correlation is relatively strong since March 2020 was the height of the coronavirus crisis and many stocks declined simultaneously, in particular high-tech stocks. As seen from Table 1, the Pearson correlation between AAPL and MSFT was 97%, and between AAPL and CSCO 93%. In comparison, the average monthly Dow correlation from January 1972 to December 2022 was 36.66%. From Table 1 we also observe that all pairwise correlations were positive, i.e., all 30 Dow stocks moved together in March 2020. (A positive pair-wise correlation can also mean that both stocks increased; however, from the price movements in March 2020, we know that stock prices declined.)
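The average pairwise correlation is the mean of the off-diagonal elements; a minimal sketch with a hypothetical 3 × 3 correlation matrix (the values are illustrative, not the Dow data):

```python
import numpy as np

# Hypothetical correlation matrix for 3 assets (symmetric, 1's on the diagonal)
C = np.array([[1.00, 0.97, 0.93],
              [0.97, 1.00, 0.91],
              [0.93, 0.91, 1.00]])

n = C.shape[0]
# Average of the n(n-1)/2 upper-triangular pairwise correlations,
# i.e., excluding the trivial 1's on the diagonal
upper = C[np.triu_indices(n, k=1)]
avg_corr = upper.mean()
print(round(avg_corr, 4))  # (0.97 + 0.93 + 0.91) / 3 = 0.9367
```

Because the matrix is symmetric, averaging the upper triangle is equivalent to averaging all off-diagonal elements.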
Table 2 confirms that financial correlations typically increase in distressed times.
Table 2. Dow correlation level and correlation volatility from January 1972 to December 2022.
| | Correlation level | Correlation volatility |
| Expansionary period | 27.46% | 71.17% |
| Normal economic period | 33.06% | 82.51% |
| Recession | 36.96% | 81.88% |
Correlation volatility in Table 2 is derived by applying the standard deviation equation σρ = √( ∑t (ρ̄t − μρ̄)² / (T − 1) ), where ρ̄t is the average pairwise correlation of month t, T is the number of months, and μρ̄ is the average of the monthly correlation averages. Correlation volatility is lowest in economic expansions and higher in normal economic times and in a recession. We expected correlation volatility to be highest in a recession; however, it seems to stabilize in a recession at relatively high levels.
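The correlation-volatility calculation can be sketched with hypothetical monthly correlation averages (the numbers are illustrative, not the 1972-2022 data):

```python
import numpy as np

# Hypothetical monthly averages of the pairwise Dow correlations
monthly_avg_corr = np.array([0.25, 0.31, 0.28, 0.45, 0.38, 0.33])

# Mean of the monthly correlation averages
mean_of_means = monthly_avg_corr.mean()

# Correlation volatility: sample standard deviation of the monthly averages
corr_vol = monthly_avg_corr.std(ddof=1)
print(round(corr_vol, 4))
```

Note the `ddof=1` argument, which applies the sample (T − 1) normalization of the standard deviation formula above.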
Importantly, the higher the non-diagonal elements in a classical correlation matrix as in Table 1, the higher the correlation between the variables. We will contrast this property with quantum density matrices in the following section.
4.2. Quantum Density Matrices
Density matrices are the most general description of a quantum state. Both pure and mixed states can be described by density matrices.
4.2.1. Pure States
A pure quantum state is a state of which we have full information. It can be written as the outer product of the state vector |ψ⟩ with itself,
ρ(P) = |ψ⟩⟨ψ| (12)
where ρ(P) is the density operator of the pure state |ψ⟩. A state is pure if
1) It can be written by Equation (12);
2) It is idempotent, i.e., ρ = ρ2;
3) It is a projection, in particular of rank 1;
4) The trace of the density operator squared is 1, i.e., tr(ρ2) = 1.
Let's assume we have a qubit in a superposition of the basis states |0⟩ and |1⟩, which can be written as the ket-vector
|ψ⟩ = α|0⟩ + β|1⟩ (13)
The normalization condition for a pure state is that the squared moduli of the probability amplitudes α and β add up to 1,
|α|² + |β|² = 1 (14)
Hence, it follows that the pure state density matrix is
ρ(P) = |ψ⟩⟨ψ| = [[|α|², αβ*], [α*β, |β|²]] (15)
where α* and β* are the complex conjugates of α and β.
The terms on the diagonal of the matrix (15) are the population terms, giving us the probabilities of being in state |0⟩ or |1⟩. The off-diagonal terms are coherence terms, providing information about the interference between the amplitudes of the states |0⟩ and |1⟩. We will interpret the density matrix further at the end of the next section.
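The pure-state conditions 1) to 4) above can be verified numerically; a sketch with hypothetical amplitudes α and β chosen to satisfy Equation (14):

```python
import numpy as np

# Hypothetical amplitudes with |alpha|^2 + |beta|^2 = 1/3 + 2/3 = 1, Eq. (14)
alpha, beta = 1 / np.sqrt(3), np.sqrt(2 / 3) * 1j
psi = np.array([alpha, beta])                 # ket vector, Eq. (13)

rho = np.outer(psi, psi.conj())               # rho = |psi><psi|, Eq. (12)

assert np.isclose(np.trace(rho).real, 1.0)    # valid quantum state
assert np.allclose(rho @ rho, rho)            # idempotent: rho^2 = rho
assert np.isclose(np.trace(rho @ rho).real, 1.0)  # purity tr(rho^2) = 1
assert np.linalg.matrix_rank(rho) == 1        # rank-1 projection
print("all pure-state conditions satisfied")
```

The outer product `np.outer(psi, psi.conj())` builds exactly the matrix of Equation (15), with the populations on the diagonal and the coherences off the diagonal.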
4.2.2. Mixed States
Mixed states are a probability-weighted ensemble of pure states. Mixed states arise when the preparation is imperfect, i.e., we don't have full information about the system, or when we want to describe a subsystem of an entangled physical system. Hence, Equation (12) generalizes to
ρ(M) = ∑i pi |ψi⟩⟨ψi| (16)
where ρ(M) is the density operator of the mixed state and the pi are classical probabilities with ∑i pi = 1.
Let's assume we have a system that produces the state |0⟩ with 50% probability and |1⟩ with 50% probability. From Equation (16), the density matrix for this mixed state is
ρ(M) = ½|0⟩⟨0| + ½|1⟩⟨1| = [[0.5, 0], [0, 0.5]] (17)
The matrix (17) is termed a "maximally mixed state", represented on the Bloch sphere by a point at the origin, whereas pure states lie on the surface of the unit Bloch sphere, see Figure 5.
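The difference between a pure state and the maximally mixed state (17) shows up in the purity tr(ρ²); a minimal sketch (the pure state |+⟩ = (|0⟩ + |1⟩)/√2 is chosen for illustration):

```python
import numpy as np

# Pure state |+> = (|0> + |1>)/sqrt(2)
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())

# Maximally mixed state, Eq. (17)
rho_mixed = np.array([[0.5, 0.0], [0.0, 0.5]])

# Both are valid quantum states: trace 1
assert np.isclose(np.trace(rho_pure), 1.0)
assert np.isclose(np.trace(rho_mixed), 1.0)

# Purity separates them: tr(rho^2) = 1 (pure) vs 0.5 (maximally mixed)
purity_pure = np.trace(rho_pure @ rho_pure).real
purity_mixed = np.trace(rho_mixed @ rho_mixed).real
print(round(purity_pure, 6), round(purity_mixed, 6))  # 1.0 0.5
```

Note that both density matrices have trace 1; it is the purity tr(ρ²), not the trace itself, that distinguishes pure from mixed states.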
Conclusion:
We observe that the trace of the classical correlation matrix C in Table 1 is Tr(C) = n, where n is the number of variables of the matrix, since every diagonal element equals 1. The terms on the diagonal of the quantum matrices (15) and (17) are the population terms, giving us the probabilities of being in state |0⟩ or |1⟩. The traces of the quantum matrices (15) and (17) are Tr(ρ(P)) = Tr(ρ(M)) = 1, verifying that ρ(P) and ρ(M) are valid quantum states. Pure and mixed states are instead distinguished by the purity: Tr(ρ(P)²) = 1, while Tr(ρ(M)²) < 1. Altogether, there are no similarities between the traces of classical correlation matrices and quantum density matrices.

Figure 5. Bloch sphere. Pure states lie on the unit surface, while mixed states lie within the sphere.
With respect to the off-diagonal elements, for the classical correlation matrix in Table 1, the higher the off-diagonal elements, the higher the correlation, bounded between −1 and +1.
In classical physics, the off-diagonal elements are coherence terms, describing the properties of the correlation between physical quantities. The coherence concept is closely related to the Pearson correlation concept. For example, in signal processing, the coherence function between two signals x(t) and y(t) is
γxy(f) = Sxy(f) / √(Sxx(f) Syy(f)) (18)
where Sxy is the cross-spectral density function and Sxx and Syy are the power spectral density functions of the signals x(t) and y(t). The real-valued coherency of Equation (18) is bounded between −1 for perfect negative correlation and +1 for perfect positive correlation. Hence it is analogous to the Pearson correlation coefficient of Equation (4).
As in classical physics, in quantum mechanics the off-diagonal elements of density matrices (see quantum matrices (15) and (17)) are coherence terms. The concept of coherence is closely related to entanglement. Coherent waves display the properties of correlation between waves or wave packets, i.e., show well-defined, constant phase relationships. In 2015 Streltsov et al. [13] proved that every entanglement measure can be constructed as a coherence measure. Tan et al. showed in 2018 [14] that the converse is also true and concluded that quantum coherence is equivalent to quantum entanglement. Hence, while higher off-diagonal terms of the classical correlation matrix in Table 1 represent higher correlation between the variables, higher off-diagonal entries of quantum density matrices represent a stronger association between the relative phases of quantum systems.
5. Contrasting Classical Stochastic Correlation with Dynamic Quantum Matrices
In this section we will contrast the widely applied classical dynamic (i.e., time-evolving) stochastic correlation with dynamic quantum density matrices.
5.1. Classical Stochastic Correlation
A stochastic process is the result of a random experiment. It can simply be defined as the collection of random variables X at time t: {Xt, t ∈ T} on the probability space (Ω, F, P), where Xt is the state of the random variable on the index set T.
Stochastic processes are applied in virtually all logical, physical, biological and social sciences. One of the simplest stochastic processes is the Brownian motion, which describes the random movement of particles in a liquid, for example, charcoal dust in water. The Brownian motion is credited to the botanist Robert Brown in 1827. However, it was actually the Dutch scientist Jan Ingenhousz who published papers on the dispersion of coal dust particles in alcohol in 1784 and 1785, unfortunately for him not in English. The mathematical formulation of the Brownian motion was derived by Louis Bachelier in his doctoral thesis "Théorie de la spéculation" in 1900. Albert Einstein, in his "annus mirabilis" 1905, applied the Brownian motion in his paper "Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen" to support the existence of atoms. In 1997, the Nobel Prize in economics was awarded to Robert Merton and Myron Scholes for their option pricing model, in which the underlying instrument (a stock price) is modeled with the equation
dS/S = μ dt + σ εt √dt (19)
where
S: variable of interest (a stock price in the Black-Scholes-Merton model);
μ: the expected growth rate of S;
σ: the expected volatility of S;
εt: random drawing from a standard normal distribution at time t, i.e., ε ~ n(0, 1).
The term εt √dt in Equation (19) is referred to as a Brownian motion; including the drift term μ dt makes it a generalized Brownian motion. In Equation (19) the relative change of S, dS/S, follows a generalized Brownian motion; hence S itself follows a generalized geometric Brownian motion and is lognormally distributed. However, from empirical studies of classical correlation we know that correlation is approximately normally distributed. In addition, we also know that classical correlation has strong mean reversion (see Meissner (2019) [4]). Furthermore, the Pearson correlation coefficient is bounded between −1 and +1. So for classical correlation, Equation (19) changes to the bounded Jacobi process
dρt = a(mρ − ρt) dt + σρ √((h − ρt)(ρt − f)) εt √dt (20)
where
ρ: Pearson correlation coefficient as defined in Equation (4);
a: mean reversion speed (gravity), i.e., the degree to which the correlation at time t, ρt, is pulled back to its long-term mean mρ; a can take values 0 ≤ a ≤ 1;
mρ: long-term mean of the correlation ρ;
σρ: correlation volatility;
h: upper boundary level;
f: lower boundary level, i.e. h ≥ ρ ≥ f;
other variables defined as in Equation (19).
Applying the upper and lower limits for a correlation coefficient, f = −1 and h = 1, Equation (20) reduces to
dρt = a(mρ − ρt) dt + σρ √(1 − ρt²) εt √dt (21)
Figure 6 shows 10 bounded Jacobi processes of Equation (21).
Figure 6. 10 bounded Jacobi simulations of Equation (21) with the parameter values a = 0.02, mρ = 0.1, and σρ = 50%.
Due to the stochasticity, no closed-form solution of Equation (21) exists. To derive a correlation forecast, multiple (e.g., one million) simulations are run. Then the arithmetic average of the final value of each simulation is taken and interpreted as the forecast.
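A Monte Carlo sketch of this forecasting procedure, assuming a simple Euler discretization of Equation (21) with clipping at the ±1 bounds (a common but not unique discretization choice), the parameter values of Figure 6 (a = 0.02, mρ = 0.1, σρ = 50%), and a hypothetical starting correlation ρ0 = 0.1:

```python
import numpy as np

def simulate_jacobi(n_paths=10_000, n_steps=252, dt=1/252,
                    a=0.02, m_rho=0.1, sigma_rho=0.5, rho0=0.1, seed=7):
    """Euler discretization of the bounded Jacobi process, Eq. (21)."""
    rng = np.random.default_rng(seed)
    rho = np.full(n_paths, rho0)
    for _ in range(n_steps):
        eps = rng.standard_normal(n_paths)
        # Diffusion term vanishes as rho approaches the bounds -1 and +1
        diffusion = sigma_rho * np.sqrt(np.maximum(1 - rho**2, 0.0))
        rho = rho + a * (m_rho - rho) * dt + diffusion * eps * np.sqrt(dt)
        rho = np.clip(rho, -1.0, 1.0)   # enforce the correlation bounds
    return rho

final = simulate_jacobi()
forecast = final.mean()   # arithmetic average of the final values = the forecast
print(round(forecast, 2))
```

The square-root diffusion term keeps the simulated correlations inside [−1, +1]; the explicit clip only guards against discretization overshoot.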
Dynamic Classical Correlation Matrix
The stochastic processes (19) to (21) model a single variable in time. We will now expand these processes to model classical correlation matrices and then contrast them with dynamic quantum density matrices.
The following derivation is based on the seminal Heston 1993 model, which elegantly derives the negative correlation between a stock price and its volatility by correlating the Brownian motions; a variation of this is applied in Equation (25). We also apply the Wishart Affine Stochastic Correlation model (WASC) introduced by Bru (1991) [16] and extended by Gourieroux and Sufana (2004) [17], as well as correlation modeling by Buraschi, Porchia and Trojani (2010) [18] (the first version of the paper appeared in 2006), and Da Fonseca, Grasselli and Ielpo (2008) [19].
The model is presented as an n-dimensional stochastic process of covariance matrices. For ease of exposition, we will concentrate on n = 2 assets. In this case, S in Equation (19) expands to a price vector of 2 assets, S1 and S2, formally S = (S1, S2)T. The stochastic process for S is
dSt = IS [μ dt + Σt^(1/2) dZt] (22)
where
IS = Diag[S1, S2], i.e. a diagonal 2 × 2 matrix;
µ: expected growth rate of the 2-dimensional vector S;
Σt: Covariance matrix of the returns of asset S1 and S2;
dZt: 2-dimensional Brownian motion.
In our 2-asset case, the covariance matrix Σt in time takes the form
Σt = [[σ11,t, σ12,t], [σ21,t, σ22,t]] (23)
where σ11,t and σ22,t are the variances of the returns of asset 1 and asset 2, respectively, and σ12,t and σ21,t are the covariances of the returns of assets 1 and 2. Note that the covariance of 2 assets is commutative, i.e., Covariance(i, j) = Covariance(j, i); hence in the matrix (23) σ12,t = σ21,t, as already displayed in Table 1.
Let the covariance matrix (23) follow a stochastic process of the form
dΣt = (Ω Ωᵀ + M Σt + Σt Mᵀ) dt + Σt^(1/2) dWt Q + Qᵀ (dWt)ᵀ Σt^(1/2) (24)
where
Q: volatility of co-volatility matrix;
M: negative semi-definite matrix, which controls the degree of mean reversion of Σt, corresponding to a in Equation (21);
Ω: related to the long-term mean of the covariance matrix Σt, corresponding to mρ in Equation (21);
W: two-dimensional (matrix-valued) Brownian motion.
In the original Heston model, the stochastic process for the underlying asset S and the stochastic process of the volatility σ are correlated by correlating the Brownian motions. Accordingly, the Brownian motions of Equations (22) and (24) are correlated:
dZt = dWt ρ + √(1 − ρᵀρ) dBt, with ρ = (ρ1, ρ2)ᵀ (25)
where dW(t) and dB(t) are independent, and dZ(t) and dZ(t') are independent, t ≠ t'.
The model admits a closed form solution for the correlation between the underlying return assets S and its variance Σ. For asset 1 we have
Corr(dS1/S1, dσ11,t) = (Q11 ρ1 + Q21 ρ2) / √(Q11² + Q21²) (26)
For asset 2,
Corr(dS2/S2, dσ22,t) = (Q12 ρ1 + Q22 ρ2) / √(Q12² + Q22²) (27)
If we assume no correlation between the volatility of volatility of asset 1 and the volatility of volatility of asset 2, then Q12 = Q21 = 0. In this case the matrix Q just has diagonal entries and Equation (26) reduces to
Corr(dS1/S1, dσ11,t) = ρ1 (28)
Equation (27) reduces to
Corr(dS2/S2, dσ22,t) = ρ2 (29)
In reality we observe a negative relationship between returns S and their variance Σ, sometimes called "leverage". This can be modeled for asset 1 with ρ1 < 0 and for asset 2 with ρ2 < 0.
The model presented above has numerous parameters and can therefore replicate financial reality well. Especially the negative relationship between returns and volatility (the "leverage" effect) and the higher correlation in a recession (sometimes referred to as "asymmetric correlation") can be modeled. In addition, volatility skews (i.e., higher volatility when returns are negative) and the right balance between correlation persistence and correlation mean reversion can be modeled.
The drawback of the model lies in its relative mathematical and computational complexity. This may limit its application in reality.
5.2. Dynamic (I.e., Time Evolving) Quantum Density Matrices
In classical physics, the time evolution of a system is governed by Newton's second law of motion,
F = m a (30)
where
F: Force;
m: mass, assumed constant;
a: time derivative of velocity, i.e., acceleration.
The force F and acceleration a have magnitude and direction, i.e., are vector quantities. The process of Equation (30) is fully deterministic and not impacted by measurements. Limitations of Equation (30) are objects at high speeds (special relativity), highly massive objects (general relativity) and atomic and subatomic particles (quantum mechanics).
In quantum mechanics, the conceptual analogue of Newton's second law of motion is the Schrödinger equation, for which Schrödinger was awarded the 1933 Nobel Prize,
iħ (d/dt) Ψ(t) = Ĥ Ψ(t) (31)
where
i: imaginary unit;
ħ: reduced Planck constant, i.e., h/(2π); h = 6.62607015 × 10⁻³⁴ m² kg/s;
Ψ(t): time-dependent wave function of the quantum system;
Ĥ: Hamiltonian, the operator corresponding to the total energy of the system, assumed constant.
As seen from Equation (31), the Schrödinger equation is a linear partial differential equation, which governs the time evolution of the wave function Ψ(t) in a quantum system. The Schrödinger equation is a postulate, i.e., it cannot be derived from more fundamental principles of physics; however, its predictions have been confirmed empirically.
The time evolution of a system in the Schrödinger picture is deterministic until measured, as shown in Figure 7.
Figure 7. Time evolution of the wave function Ψ(t) in the Schrödinger picture.

In Figure 7, the new state of the system $|v_m\rangle$ is derived instantaneously at the time of the measurement $t_1$. The probability of obtaining a certain eigenvalue $\lambda_m$ associated with the state $|v_m\rangle$ is $|\langle v_m|\Psi(t_1)\rangle|^2$, where $|v_m\rangle$ is the state of the system according to the eigenvalue equation $\hat{A}|v_m\rangle = \lambda_m|v_m\rangle$, and $\hat{A}$ represents the general property (observable) of the system.
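The deterministic evolution under the Schrödinger equation, followed by the probabilistic (Born-rule) measurement step just described, can be sketched numerically for a two-level system. The Hamiltonian and initial state below are arbitrary illustrative choices in natural units (ℏ = 1), not taken from the paper:

```python
import numpy as np

hbar = 1.0
# Arbitrary two-level Hamiltonian (Hermitian) and initial state |0>.
H = np.array([[1.0, 0.4], [0.4, 2.0]], dtype=complex)
psi0 = np.array([1.0, 0.0], dtype=complex)

def evolve(psi, t):
    """|psi(t)> = exp(-i H t / hbar) |psi(0)>, built via eigendecomposition of H."""
    E, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * E * t / hbar)) @ V.conj().T
    return U @ psi

psi_t = evolve(psi0, 3.0)
# Deterministic, unitary evolution: the norm of the state is preserved.
print(abs(np.linalg.norm(psi_t) - 1.0) < 1e-12)  # True
# Measurement is probabilistic: Born-rule probabilities of each outcome.
probs = np.abs(psi_t) ** 2
print(abs(probs.sum() - 1.0) < 1e-12)  # True
```

The evolution itself is fully deterministic; randomness enters only when the probabilities `probs` are sampled at measurement time.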
We are interested in the time evolution of a quantum density matrix. This is accomplished by the von Neumann equation, which can be derived from the Schrödinger equation. Simplifying notation, i.e., $\Psi \equiv \Psi(t)$, and differentiating the density matrix of Equation (16), $\rho = \sum_i p_i |\Psi_i\rangle\langle\Psi_i|$, using the product rule, we get

$$\frac{\partial \rho}{\partial t} = \sum_i p_i \left[ \left(\frac{\partial}{\partial t}|\Psi_i\rangle\right)\langle\Psi_i| + |\Psi_i\rangle\left(\frac{\partial}{\partial t}\langle\Psi_i|\right) \right]$$ (32)

Rearranging the Schrödinger Equation (31), we have

$$\frac{\partial}{\partial t}|\Psi\rangle = -\frac{i}{\hbar}\hat{H}|\Psi\rangle$$ (33)

Taking the complex conjugate, the ket Equation (33) turns into the bra,

$$\frac{\partial}{\partial t}\langle\Psi| = \frac{i}{\hbar}\langle\Psi|\hat{H}$$ (34)

Inputting Equations (33) and (34) into (32), and using Equation (16), we get

$$\frac{\partial \rho}{\partial t} = -\frac{i}{\hbar}\left(\hat{H}\rho - \rho\hat{H}\right) = -\frac{i}{\hbar}\left[\hat{H}, \rho\right]$$ (35)

where $[\hat{H}, \rho] = \hat{H}\rho - \rho\hat{H}$ is the commutator expression.
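The von Neumann equation can be checked numerically for a small system. A minimal sketch in natural units (ℏ = 1), with an arbitrary illustrative Hamiltonian and pure state, verifies that the commutator structure preserves the trace and the Hermiticity of the density matrix:

```python
import numpy as np

hbar = 1.0  # natural units

# Arbitrary two-level Hamiltonian and a pure-state density matrix rho = |psi><psi|.
H = np.array([[1.0, 0.3], [0.3, 2.0]], dtype=complex)
psi = np.array([[1.0], [1.0]], dtype=complex) / np.sqrt(2)
rho = psi @ psi.conj().T

def von_neumann_rhs(rho):
    """d(rho)/dt = -(i/hbar) [H, rho], the von Neumann equation (Equation (35))."""
    return (-1j / hbar) * (H @ rho - rho @ H)

# Evolve with small explicit Euler steps.
dt, steps = 1e-4, 10000
for _ in range(steps):
    rho = rho + dt * von_neumann_rhs(rho)

# The trace of a commutator is zero, so tr(rho) = 1 is conserved exactly,
# and the right-hand side is Hermitian whenever rho is, so rho stays Hermitian.
print(abs(np.trace(rho).real - 1.0) < 1e-6)          # True
print(np.allclose(rho, rho.conj().T, atol=1e-6))     # True
```

A simple Euler scheme is used here only for illustration; it conserves the trace exactly (linearity plus the traceless commutator) although it does not exactly conserve purity.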
In the Schrödinger picture presented here, the state vector is a function of time, Ψ(t), and the Hamiltonian H, which represents the total energy of the system, is constant. Other pictures which describe the time evolution of a quantum system are the Heisenberg and the Interaction picture. In the Heisenberg picture the state vector Ψ is constant, but the Hamiltonian is time dependent, HH(t). In the Interaction picture, both the wave function and the Hamiltonian are time dependent, ΨI(t), HI(t). All three pictures result in the same time evolution, see the Appendix for a proof.
“The first gulp from the glass of natural sciences will make you an atheist, but at the bottom of the glass God is waiting for you,” credited to Werner Heisenberg.
We are interested in the correlation between the energy levels of quanta. Therefore, the Heisenberg picture is relevant. For a two-quanta system, we apply the classical correlation coefficient matrix, displayed in Table 1, and the akin dynamic covariance matrix (23) (see Equation (4) for the correlation coefficient-covariance relationship),

$$\Sigma = \begin{pmatrix} \sigma_1^2 & \rho\,\sigma_1\sigma_2 \\ \rho\,\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix}$$ (36)

If the eigen-energies in the Heisenberg picture are $E_1$ and $E_2$,

$$\hat{H}|\psi_1\rangle = E_1|\psi_1\rangle, \qquad \hat{H}|\psi_2\rangle = E_2|\psi_2\rangle$$ (37)

it follows that the Heisenberg equation of motion for the density matrix is

$$\frac{\partial \rho}{\partial t} = -\frac{i}{\hbar}\begin{pmatrix} 0 & (E_1 - E_2)\,\rho_{12} \\ (E_2 - E_1)\,\rho_{21} & 0 \end{pmatrix}$$ (38)
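The structure of this equation of motion can be illustrated numerically. In the energy eigenbasis, the diagonal (population) elements are constant, while each off-diagonal (coherence) element rotates in phase at a rate set by the energy gap, $\rho_{12}(t) = \rho_{12}(0)\,e^{-i(E_1 - E_2)t/\hbar}$. The energies and the initial coherence below are arbitrary illustrative values in natural units (ℏ = 1):

```python
import numpy as np

hbar = 1.0
E1, E2 = 1.0, 1.5          # illustrative eigen-energies of the two states
t = np.linspace(0.0, 20.0, 2001)

# Solving i*hbar * d(rho_12)/dt = (E1 - E2) * rho_12 gives a pure phase rotation:
rho12_0 = 0.5              # illustrative initial coherence term
rho12_t = rho12_0 * np.exp(-1j * (E1 - E2) * t / hbar)

# The magnitude of the coherence is constant; only its phase evolves.
print(np.allclose(np.abs(rho12_t), 0.5))   # True
# The diagonal (population) terms do not move at all: the energy gap on the
# diagonal is E_m - E_m = 0, matching the zero diagonal of matrix (38).
```

The larger the energy gap |E₁ − E₂|, the faster the off-diagonal phase rotates, which is the sense in which large off-diagonal dynamics indicate rapid interstate transitions.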
Conclusion:
In many classical sciences, variables are modeled and forecast with a stochastic process. A simple stochastic process is the Brownian motion (Equations (20) and (21)). Due to its stochastic nature, no closed form solution is available. Rather, multiple simulations of the stochastic process are run (Figure 5), and the final results are stored and averaged. In quantum mechanics, the time evolution of a particle is governed by the Schrödinger equation, which is deterministic until a measurement is taken, which is probabilistic. Closed form solutions of the Schrödinger equation have been found for simple systems, for example particle-in-a-box environments or the hydrogen atom. However, no closed form solution exists for complex systems, such as the helium atom and other multielectron systems.
Dynamic (i.e., time-evolving) classical correlation matrices can also be modeled (Equation (24)). The variables of interest are typically correlated by applying the seminal Heston 1993 approach, i.e., correlating their Brownian motions (Equation (25)). The diagonal elements represent the variances of the variables. The diagonal elements of the dynamic density matrix in the Heisenberg picture (matrix (38)) are naturally zero, since there is no change in correlation between identical eigen-energies. The higher the off-diagonal elements of a classical correlation matrix, the higher the correlation between the variables (matrix (23)). High off-diagonal elements of the dynamic density matrix in the Heisenberg picture (matrix (38)) imply rapid interstate transitions between the energy states, thus a high association between the energy states.
6. Opportunities
The application of classical statistical analyses in quantum mechanics is not sensible, since quantum mechanics follows its own physical laws. Vice versa, however, numerous properties of quantum mechanics are currently being applied in the macroscopic world. This is referred to as “The Second Quantum Revolution”. We will discuss several applications in this section.
In classical computing, bits take values of either 0 or 1. In quantum computing, qubits can be in a superposition, i.e., a linear combination of $|0\rangle$ and $|1\rangle$ at the same time, represented by the coordinates of the state vector Ψ within and on the Bloch sphere, see Figure 5. Even if we only harness two states of a qubit, for n qubits we have 2ⁿ states, compared to only 2n states for classical bits. Quantum entanglement can then leverage the efficiency of the superpositions. This can be even further enhanced by entanglement swapping, displayed in Figure 8.
Figure 8. Two entangled pairs, pair 1 and pair 2, are produced. One particle of each pair, particles 2 and 3, are brought together and entangled. Therefore, particles 1 and 4 are now also entangled.
Entanglement swapping was first achieved by Pan, Bouwmeester, Weinfurter and Zeilinger in 1998 [19].
An elegant way to apply properties of quantum mechanics is quantum cryptography. A shared quantum key is distributed between parties A and B, which allows them to receive and decode information. When a third party C tries to eavesdrop, the disturbance caused by party C will decohere the system, making eavesdropping on the collapsed system impossible. In addition, parties A and B will notice the decoherence and hence the eavesdropping attempt. For more on quantum cryptography, see Jennewein, Simon, Weihs, Weinfurter, and Zeilinger (2000) [20].
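The eavesdropper-detection idea can be illustrated with a toy classical simulation in the spirit of the BB84 protocol (a standard key-distribution scheme, not named in this paper; all names and parameters below are illustrative). An intercept-resend eavesdropper who guesses the wrong measurement basis randomizes the bit, producing roughly a 25% error rate in the sifted key:

```python
import random

def sifted_key_error_rate(n, eavesdrop, rng):
    """Simulate BB84-style key distribution; return the sifted-key error rate."""
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.randint(0, 1) for _ in range(n)]  # 0 = rectilinear, 1 = diagonal
    channel = list(zip(alice_bits, alice_bases))
    if eavesdrop:
        # Eve measures each photon in a random basis and resends it; a wrong-basis
        # measurement randomizes the bit she forwards.
        resent = []
        for bit, basis in channel:
            eve_basis = rng.randint(0, 1)
            eve_bit = bit if eve_basis == basis else rng.randint(0, 1)
            resent.append((eve_bit, eve_basis))
        channel = resent
    bob_bases = [rng.randint(0, 1) for _ in range(n)]
    bob_bits = [bit if basis == bob_bases[i] else rng.randint(0, 1)
                for i, (bit, basis) in enumerate(channel)]
    # Sifting: keep only positions where Alice's and Bob's bases agree.
    sifted = [(alice_bits[i], bob_bits[i])
              for i in range(n) if alice_bases[i] == bob_bases[i]]
    return sum(a != b for a, b in sifted) / len(sifted)

rng = random.Random(1)
qber_clean = sifted_key_error_rate(20000, eavesdrop=False, rng=rng)
qber_eve   = sifted_key_error_rate(20000, eavesdrop=True,  rng=rng)
print(qber_clean)  # 0.0 without an eavesdropper
print(qber_eve)    # about 0.25 with an eavesdropper
```

By publicly comparing a sample of the sifted key, A and B detect the elevated error rate and abort, which is the classical shadow of the decoherence argument above.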
A further practical application of quantum mechanics is quantum metrology, i.e., performing highly sensitive measurements, which are necessary in a GPS system. A user of a GPS communicates with four satellites. Three determine the location of the user; one adjusts for a time dilation difference: time slows down on the satellite due to the velocity of the satellite, as predicted by special relativity. However, time also slows down on earth due to earth’s gravitational pull, following general relativity. The critical question is which effect is stronger. It turns out that, net, time runs about 38 microseconds per day faster on the satellite (roughly +45 μs/day from general relativity minus 7 μs/day from special relativity).
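The two competing dilation effects can be estimated with a back-of-the-envelope calculation. The constants below are standard textbook values (not taken from this paper), using the first-order formulas v²/2c² for the velocity effect and ΔΦ/c² for the gravitational potential difference:

```python
import math

c   = 299_792_458.0      # speed of light, m/s
GM  = 3.986004418e14     # Earth's gravitational parameter, m^3/s^2
R_e = 6.371e6            # mean Earth radius, m
r_s = 2.6571e7           # GPS orbit radius (~20,200 km altitude), m
day = 86_400.0           # seconds per day

v = math.sqrt(GM / r_s)                          # circular orbital speed
sr = -(v ** 2 / (2 * c ** 2)) * day              # special relativity: satellite clock slower
gr = (GM / c ** 2) * (1 / R_e - 1 / r_s) * day   # general relativity: satellite clock faster

print(f"SR:  {sr * 1e6:+.1f} us/day")            # about -7 us/day
print(f"GR:  {gr * 1e6:+.1f} us/day")            # about +46 us/day
print(f"net: {(sr + gr) * 1e6:+.1f} us/day")     # about +38 us/day
```

The gravitational effect dominates, so uncorrected GPS clocks would gain tens of microseconds per day, an error of kilometers in position.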
These sensitive measurements can be performed by applying quantum mechanical properties. Since 1967, time is no longer defined as a fraction of a year; rather, the official second is defined as 9,192,631,770 oscillations of a cesium atom. Optical clocks, which employ atoms or ions, have a drift of about 1 second in 15 billion years, approximately the age of the universe. Applying quantum entanglement and spin-squeezing of ytterbium atoms, the drift may be improved to 1 second in 15 trillion years. For details see Giovannetti et al. (2011) [21], Nicholson et al. (2015) [22] or Colombo et al. (2022) [23].
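The quoted drifts translate into fractional frequency stabilities with simple arithmetic (the conversion below is illustrative, not from the paper):

```python
# One second lost over N years corresponds to a fractional instability of
# 1 / (N years in seconds).
year = 365.25 * 24 * 3600            # seconds per Julian year

optical  = 1 / (15e9  * year)        # 1 s in 15 billion years
squeezed = 1 / (15e12 * year)        # 1 s in 15 trillion years

print(f"{optical:.1e}")              # ~2.1e-18
print(f"{squeezed:.1e}")             # ~2.1e-21
```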
“Beam me up, Scotty”
Famous phrase from “Star Trek”
Quantum teleportation is an intriguing field, promising teleportation of macroscopic objects, in particular human beings. Unfortunately, this does not seem likely. Quantum information teleportation by employing entangled states has been achieved, though. It is done with the following steps:
1) Three qubits are produced. Qubit 1, possibly a photon in state $|\varphi\rangle$, is the qubit whose information will be teleported. Qubits 2 and 3 are produced and entangled in a Bell state with four possible outcomes.
2) Qubits 1 and 3 are measured in location A. Qubit 2 is sent to location B.
3) The measurement result of qubits 1 and 3 is encoded into two classical bits (which also have four outcome combinations).
4) Using a classical channel (such as a fiber-optic channel), the two classical bits are sent from location A to location B. This limits the speed of the teleportation to the speed of light.
5) At location B, the Bell-entangled qubit 2 is in one of four possible states. The two classical bits reveal which state, so qubit 2 is modified to match qubit 1’s state $|\varphi\rangle$, or stays unchanged if it already matches.
Hence, the unknown state of Qubit 1 is teleported from location A to location B without sending the state directly. Quantum information teleportation is displayed in Figure 9.
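The steps above can be sketched as a small state-vector simulation. This is a hypothetical, minimal numpy implementation of the standard textbook circuit, in which the unknown state sits on qubit 0 and the Bell pair on qubits 1 and 2 (a different numbering than in the text):

```python
import numpy as np

rng = np.random.default_rng(0)

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
P0 = np.array([[1, 0], [0, 0]], dtype=complex)
P1 = np.array([[0, 0], [0, 1]], dtype=complex)

def op_pair(a, b, qa, qb):
    """Embed one-qubit operators a on qubit qa and b on qubit qb (3-qubit space)."""
    mats = [I, I, I]
    mats[qa], mats[qb] = a, b
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

def op(gate, qubit):
    return op_pair(gate, I, qubit, (qubit + 1) % 3)

def cnot(control, target):
    return op_pair(P0, I, control, target) + op_pair(P1, X, control, target)

# Random unknown state |phi> = alpha|0> + beta|1> to be teleported.
alpha, beta = rng.normal(size=2) + 1j * rng.normal(size=2)
norm = np.sqrt(abs(alpha) ** 2 + abs(beta) ** 2)
alpha, beta = alpha / norm, beta / norm

state = np.kron(np.array([alpha, beta]), np.kron([1, 0], [1, 0])).astype(complex)
state = cnot(1, 2) @ op(H, 1) @ state   # entangle qubits 1 and 2 (the shared Bell pair)
state = op(H, 0) @ cnot(0, 1) @ state   # Bell-basis rotation on qubits 0 and 1 at A

# Measure qubits 0 and 1: two classical bits m0, m1.
probs = np.abs(state) ** 2
outcome = rng.choice(8, p=probs / probs.sum())
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1
keep = [i for i in range(8) if ((i >> 2) & 1) == m0 and ((i >> 1) & 1) == m1]
psi2 = state[keep]
psi2 = psi2 / np.linalg.norm(psi2)      # post-measurement state of qubit 2 at B

# Classical correction at B, chosen by the two transmitted bits.
if m1: psi2 = X @ psi2
if m0: psi2 = Z @ psi2

fidelity = abs(np.vdot(np.array([alpha, beta]), psi2)) ** 2
print(round(fidelity, 10))              # 1.0: qubit 2 now carries |phi>
```

Note that the simulation also shows why no information travels faster than light: without the two classical bits, B's qubit is in a uniformly random one of four states.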
Figure 9. Graphical display of quantum information teleportation.
Quantum teleportation was first achieved by a group led by Sandu Popescu and a group led by Anton Zeilinger in 1997 [8]. In 2019, a group around Anton Zeilinger and Manuel Erhard was able to teleport qutrits, i.e., three-level quantum states; see Luo et al. (2019) [24]. The method proposed can be extended to teleport arbitrarily high-dimensional quantum states.
Why is the teleportation of macroscopic objects like humans unlikely? We have to first recall that matter is not teleported, but information. So first the human body would need to be scanned, then the information transferred to the new location, where the human would be reassembled. This is already a daunting task considering the human body contains about 7 × 10²⁷ atoms, which are constantly evolving. The strongest limitations to human teleportation are two physical laws. One is the observer effect: scanning would disturb the information of the human body; therefore, the original information would not be preserved. The second is the Heisenberg uncertainty principle: it is not possible to precisely extract all information about the composition of a human being. So for now, we have to leave human teleportation to the Star Trek crew.
There are numerous other promising fields in which properties of quantum mechanics are being applied. In medicine, QM technology is used in imaging, in particular MRI scanning. Furthermore, QM technology is applied in medical diagnosis, particularly in dermatology. In a 2017 study by Esteva et al. [25], a dataset of 129,450 clinical images was tested on two critical binary classifications: keratinocyte carcinomas versus benign seborrheic keratoses, and malignant melanomas versus benign nevi. The deep neural networks achieved performance on par with 21 board-certified dermatologists. Further applications of quantum mechanics are solar cells, electron microscopes, and soon possibly the quantum Internet, as well as fusing quantum computers.
“Make everything as simple as possible, but not simpler”.
Albert Einstein
7. Concluding Summary
In this paper, we contrast classical correlations with quantum correlations. Numerous classical correlation models exist. By far, the most applied classical correlation model is the almost 125-year-old Karl Pearson correlation model due to its simplicity and intuition. The simplicity comes at the cost of numerous limitations, foremost the linearity restriction. However, other classical correlation models are applied in practice such as copula correlations, which allow the reduction of complexity and can correlate an arbitrary number of variables (such as numerous loans of a commercial bank). Stochastic correlation models have gained popularity in the recent past since they are mathematically rigorous and can replicate uncertain correlations in practice well.
Quantum correlation (or quantum entanglement) is the mysterious property that two paired particles can communicate instantaneously even if separated by large distances. Once measured, quantum entanglement disappears. This property was famously called “Spooky action at a distance” by Albert Einstein.
We contrast classical with quantum correlations in three environments.
1) In the CHSH environment, classical correlations are identical with quantum correlations for trivial angle combinations. However, for all other combinations, quantum correlations are stronger than classical correlations. This is true for positive correlations as well as negative correlations.
2) Contrasting static correlation matrices, we find that the traces of the matrices are dissimilar. For classical correlations, the trace consists of unity entries, so the trace is simply n, where n is the number of involved variables. The terms on the diagonal of the quantum matrices are the population terms, giving us the probabilities of being in state $|0\rangle$ or $|1\rangle$. Hence there are no similarities between the traces of classical and quantum density matrices.
With respect to off-diagonal elements, for a classical matrix, the higher the off-diagonal elements, the higher the correlation between the variables. For quantum correlations, the off-diagonal elements of density matrices are coherence terms. The concept of coherence is closely related to entanglement. Coherent waves display the properties of correlation between waves or wave packets, i.e., show well-defined, constant phase relationships. Hence, while higher off-diagonal terms of the classical correlation matrix represent higher correlation between the variables, higher off-diagonal entries of quantum density matrices represent a higher association between the relative phases of quantum systems.
3) Classical dynamic (i.e., time evolving) correlation matrices typically involve numerous parameters and can therefore typically replicate reality well, for example the higher correlation in a recession, or the negative price-volatility relationship.
In classical physics, the time evolution of a system is governed by Newton’s second law of motion. In quantum mechanics, the conceptual analogue of Newton’s second law of motion is the famous Schrödinger equation, which is deterministic until measured. From the Schrödinger equation, the von-Neumann equation can be derived, which describes the time evolution of a quantum density matrix.
The diagonal elements of a classical correlation matrix represent the variances of the variables. The diagonal elements of the dynamic density matrix in the Heisenberg picture (matrix (38)) are naturally zero, since there is no change in correlation between identical eigen-energies. The higher the off-diagonal elements of a classical matrix, the higher the correlation between the variables (matrix (23)). Higher off-diagonal elements of the dynamic density matrix in the Heisenberg picture (matrix (38)) imply more rapid interstate transitions between the energy states.
Regarding opportunities, the application of classical statistical analyses in quantum mechanics is not sensible, since quantum mechanics follows its own physical laws. Vice versa, however, numerous properties of quantum mechanics are currently applied in the macroscopic world, in particular quantum computing, cryptography, metrology, teleportation, the quantum Internet, laser technology, medical imaging and more.
Appendix
Proof that the Schrödinger picture, the Heisenberg picture, and the Interaction picture result in the same time evolution process.
In the Schrödinger picture, the state is time dependent, $\Psi_S(t)$, but the operator $\hat{A}_S$ is not. In the Heisenberg picture, the operator $\hat{A}_H(t)$ is time dependent, but the state $\Psi_H$ is not. In the Interaction picture, both the state $\Psi_I(t)$ and the operator $\hat{A}_I(t)$ are time dependent.

We will show that for all pictures, the operator expectation value at time t is the same,

$$\langle \hat{A} \rangle(t) = \langle \Psi_S(t)|\hat{A}_S|\Psi_S(t)\rangle = \langle \Psi_H|\hat{A}_H(t)|\Psi_H\rangle = \langle \Psi_I(t)|\hat{A}_I(t)|\Psi_I(t)\rangle$$ (39)

Let’s start with the Schrödinger picture. States become time dependent by multiplying with the time evolution operator U,

$$|\Psi_S(t)\rangle = U(t, t_0)\,|\Psi(t_0)\rangle, \qquad U(t, t_0) = e^{-i\hat{H}(t - t_0)/\hbar}$$

where the Hamiltonian H may be split into a time independent $\hat{H}_0$ and a time dependent potential $V(t)$, $\hat{H} = \hat{H}_0 + V(t)$. Since the operator is constant, $\hat{A}_S$, the time dependent expectation value is

$$\langle \hat{A} \rangle(t) = \langle \Psi(t_0)|\,U^\dagger \hat{A}_S\,U\,|\Psi(t_0)\rangle$$

At $t = t_0$ we have $U = 1$, and we recover $\langle \hat{A} \rangle(t_0) = \langle \Psi(t_0)|\hat{A}_S|\Psi(t_0)\rangle$.

In the Heisenberg picture, the state is constant, $|\Psi_H\rangle = |\Psi(t_0)\rangle$. However, the operator is time dependent, i.e., conjugated with the time evolution operator,

$$\hat{A}_H(t) = U^\dagger(t, t_0)\,\hat{A}_S\,U(t, t_0)$$

Therefore, the expectation value is identical,

$$\langle \Psi_H|\hat{A}_H(t)|\Psi_H\rangle = \langle \Psi(t_0)|\,U^\dagger \hat{A}_S\,U\,|\Psi(t_0)\rangle = \langle \hat{A} \rangle(t)$$

The Interaction picture, also termed the Dirac picture, is the most general representation of a time evolution process. From the Hamiltonian $\hat{H} = \hat{H}_0 + V(t)$, we remove the time independent $\hat{H}_0$, so the time evolution of the state is governed only by $V(t)$,

$$|\Psi_I(t)\rangle = U_0^\dagger(t, t_0)\,|\Psi_S(t)\rangle, \qquad U_0(t, t_0) = e^{-i\hat{H}_0(t - t_0)/\hbar}$$

For the operator, we only model an $\hat{H}_0$ time evolution,

$$\hat{A}_I(t) = U_0^\dagger(t, t_0)\,\hat{A}_S\,U_0(t, t_0)$$

So for the expectation value, we have

$$\langle \Psi_I(t)|\hat{A}_I(t)|\Psi_I(t)\rangle = \langle \Psi_S(t)|\,U_0 U_0^\dagger \hat{A}_S\,U_0 U_0^\dagger\,|\Psi_S(t)\rangle = \langle \Psi_S(t)|\hat{A}_S|\Psi_S(t)\rangle = \langle \hat{A} \rangle(t)$$

since $U_0 U_0^\dagger = 1$. Q.E.D.
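The equivalence of the pictures can also be checked numerically: evolving the state (Schrödinger) or conjugating the operator (Heisenberg) with the same time evolution operator U yields identical expectation values. A sketch with a random Hermitian Hamiltonian and observable (illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
hbar, t = 1.0, 1.7

def herm(n):
    """Random Hermitian matrix."""
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

H, A = herm(4), herm(4)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi = psi / np.linalg.norm(psi)

# Time evolution operator U = exp(-i H t / hbar) via eigendecomposition.
E, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * E * t / hbar)) @ V.conj().T

# Schrödinger picture: the state evolves, the operator is fixed.
schrodinger = np.vdot(U @ psi, A @ (U @ psi)).real

# Heisenberg picture: the operator evolves, the state is fixed.
A_H = U.conj().T @ A @ U
heisenberg = np.vdot(psi, A_H @ psi).real

print(np.isclose(schrodinger, heisenberg))  # True: identical expectation values
```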
NOTES
1Sadly, Fischer Black passed away in 1995. He would have undoubtedly co-received the Nobel Prize.