Interval-Based Analysis of Bell's Theorem

This paper introduces the concept and motivates the use of finite-interval based measures for physically realizable and measurable quantities, which we call Δ-measures. We demonstrate the utility and power of Δ-measures by illustrating their use in an interval-based analysis of a prototypical Bell's inequality in the measurement of the polarization states of an entangled pair of photons. We show that the use of finite intervals in place of real-numbered values in the Bell inequality leads to reduced violations. We demonstrate that, under some conditions, an interval-based but otherwise classically calculated probability measure can be made to approximate its quantal counterpart arbitrarily closely. More generally, we claim by heuristic arguments and by formal analogy with finite-state machines that Δ-measures can provide a more accurate model of both classical and quantal physical property values than point-like real numbers, as originally proposed by Teruo Sunaga in 1958.


Introduction
We present first two heuristic arguments, followed by theoretical and numerical demonstrations, to motivate a concept that replaces point-like, real-numbered physical property values with intervals of value we call Δ-measures, which may be weighted by some function. The exact nature of the weighting, however, is not crucial to the interval-based representation. In Sect. 1.1, we show how these arguments suggest that the conventionally assumed assignment of real numbers to represent physical property values may not be tenable (see, e.g., [1]), and that Δ-measures can instead provide more accurate models of manifest physical reality. In Sect. 1.2, we introduce the concept of finite intervals as defined and practiced in computing theory. In Sect. 2, we apply the Δ-measure concept to a new analysis of Bell's theorem, using well-established interval-analysis theorems to show that violations of the classically derived Bell's inequality may thereby be reduced, with expectations arbitrarily approaching their quantum-prediction counterparts, consistent with the results of Bell tests. In Sect. 3, we offer concluding, general remarks highlighting how the use of finite intervals to represent physically measurable quantities may have significant impact on the analysis of physical systems, both classical and quantal, and how, in particular, the new results derived from interval-based analysis may also impact technologies based on them.

Δ-Measure Description of a Physically Measurable Quantity
Measurement of any physical property value and generic manifestation of any physical property value are equivalent processes at some fundamental level. This equivalence is foundational to environmental decoherence theory, since certain manifestations of value are physically realized¹ via "implicit measurement" of objects by the environment in which they exist [2]. This is precisely the essence of the equivalence claim: that object properties become physically manifest through unavoidable implicit measurement resulting from any and every interaction.
This means that certain attributes of any process that measures physical values are also attributes of any generic process by which physical values become manifest.
For example, all physically realizable measurements are performed using resolution-limited devices and processes, so generic manifestations of physical values are equally resolution-limited. Therefore, no physically realizable measurement and no physically realizable manifestation of any physical property value can be represented by a single, point-like, real number. Such an assignment would require the realization of infinite (physical) resolution. Infinite resolution is clearly untenable, and hence a non-physical abstraction. This suggests that any assignment that relies on a finite resolution can only be manifest as a finite interval of values, i.e., a "Δ-measure".
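As a minimal sketch of this point (the class name, reading, and resolution values below are our illustrative choices, not part of the original argument), a resolution-limited reading maps naturally to an interval rather than a point:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DeltaMeasure:
    """A finite-resolution outcome: an interval of values, not a point."""
    lo: float
    hi: float

    @classmethod
    def from_reading(cls, reading: float, resolution: float) -> "DeltaMeasure":
        # A device of finite resolution localizes a value only to within
        # +/- resolution/2 of its displayed reading.
        return cls(reading - resolution / 2.0, reading + resolution / 2.0)

    @property
    def width(self) -> float:
        return self.hi - self.lo

# Example: a polarimeter reading of 30.0 deg at 0.1 deg resolution
m = DeltaMeasure.from_reading(30.0, 0.1)
print(m)        # DeltaMeasure(lo=29.95, hi=30.05)
print(m.width)  # ~0.1 -- never zero for any physical device
```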
From a communication-theoretic point of view, suppose a communication signal could have a producible and detectable parameter represented by a real number. Since real numbers are infinitely precise and can be represented mathematically [1] only by an infinite number of digits, such a signal would contain an infinite amount of information. Conveyance of this signal from one point to another would constitute an infinite change of entropy, or an exchange of infinite information [3], in a finite time through a finite spectral-width channel, which is not physically possible even if the channel were noise-free. Therefore the signal parameter cannot be validly represented by a single real number. A Δ-measured parameter, on the other hand, has finite precision and finite information content, requiring a finite spectral width and finite time to convey. Further, the physics principle known as the Bekenstein bound [4] dictates that infinite entropy, or information, cannot exist in a finite region of space with finite energy, which can be interpreted as precluding both production and detection of any signal with a real-numbered parameter. It is interesting to note that Δ-measures are "naturally" endowed with interval entropy and related information content [5].

¹ We use the term "physically realized", while admittedly not rigidly definable, because it offers a working definition of the notion of a physically realized entity as one that can exist in and have influence on physical reality, while having physical properties that are, in principle, measurable by a physical device. It is to be contrasted with an abstracted physical property, which may be formally useful but may not be measurable by a physical device.
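To make the information-content argument above concrete, here is a small sketch (the function name and numbers are ours): the number of bits needed to single out one Δ-measure of width w inside a dynamic range R is log2(R/w), which is finite for any finite interval and diverges as the interval collapses to a real-numbered point:

```python
import math

def bits_to_specify(dynamic_range: float, interval_width: float) -> float:
    """Information (bits) needed to single out one interval of the given
    width inside the dynamic range -- finite for any Delta-measure."""
    return math.log2(dynamic_range / interval_width)

# A 1 V signal resolved into 1 mV intervals carries ~10 bits per sample:
print(bits_to_specify(1.0, 1e-3))          # ~9.97 bits
# A real-numbered (zero-width) value would require unbounded information:
for w in (1e-3, 1e-6, 1e-9, 1e-12):
    print(w, bits_to_specify(1.0, w))      # grows without bound as w -> 0
```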
Using these and other similar arguments, we assert that realizable and measurable property values are more accurately modeled by Δ-measures than by point-like, real-numbered values of zero measure. We further assert that Δ-measures apply to both classical and quantal physically realizable values. At some level, the interval (Δ-measure) model conflicts with the convention of classical physics that measurable property values are mathematically represented by real numbers; this conventional representation may be too restrictive.
The conflict is perhaps less pronounced for quantal measurement outcomes, due to the intrinsic uncertainties and ambiguities in a quantal description, but there is a critical difference between Δ-measured quantal superposition and conventional quantal superposition: Δ-measured outcome values, even when weighted by some appropriate function, are not envisaged to be associated with a probability metric across eigenvalues. Because Δ-measure intervals apply to each single measurement, or manifestation of value, the eigenvalues within an interval are assumed to be associated with a non-statistical ontic metric. While the exact definition and meaning of this ontic metric is not yet clear (and is the subject of a follow-on paper), the assertion that it is non-statistical means that single measurement outcomes have distributed value, i.e., they are Δ-measures. This interval-based representation suggests that all realizable quantum states that result from measurement are comprised of simultaneously physically existing eigenstates. Every physically realizable quantum state is a superposition of multiple states in every realizable basis, i.e., a basis with physically measurable eigenvalues. A Δ-measured state cannot be represented by a single, real-numbered direction in an abstract space of realizable eigenvalues.

Δ-measured quantum state definitions open the opportunity to form an entropy metric calculated just as Shannon information entropy [2] is calculated from a symbol-alphabet probability density function, i.e., where f(x) is defined as the modulus squared of a state vector as a function of the eigenvalues x. A critical difference in a Δ-measured entropy, however, is that the function value is an ontic, or physical, metric as opposed to an epistemic, or informational/probability, metric. This is because the eigenstates of a Δ-measured state are treated as simultaneously physically existing eigenstates in superposition. Yet the entropy of the state can never be zero [4] in any realizable basis, since this would require a single real-numbered eigenvalue, a non-realizable entity in the Δ-measure concept (see, e.g., [6]).
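A sketch of how such a Shannon-like entropy could be computed for a discretized state is given below; the Gaussian amplitude and grid are our illustrative choices, and the formula is the same whether the metric f(x) = |ψ(x)|² is read ontically, as the text proposes, or epistemically:

```python
import numpy as np

def state_entropy(psi: np.ndarray, dx: float) -> float:
    """Shannon-like entropy of f(x) = |psi(x)|^2 over a discretized
    eigenvalue grid, computed exactly as for a symbol-alphabet density."""
    f = np.abs(psi) ** 2
    f = f / (f.sum() * dx)          # normalize the density
    p = f * dx                      # weight carried by each grid cell
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 4)             # illustrative Gaussian amplitude
print(state_entropy(psi, dx))       # ~8.7 bits: finite and strictly positive
```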
Application of Δ-measures to physical values is analogous to the application of intervals to the values typical of finite-state machines, which are incapable of specifying or processing real-numbered values. The application of interval analysis herein to all physically realizable property values is suggested for fundamentally similar reasons. Physical objects, systems of objects, and processes, such as classical and quantum measurement, are limited in their ability to manifest real-numbered property values by parameters such as spectral limits, process and time limits, and various other constraints. Both classical and quantal physically realizable objects and systems can thus be viewed, in some sense, as finite-state machines.

Δ-Measures Represented by Finite Intervals
The mathematical formalism of interval analysis was developed, and has seen its primary application, in computing theory for numerical analysis and mathematical modeling. It is a relatively recent cross-disciplinary field pioneered by M. Warmus [7], T. Sunaga [8], R. Moore [9], and U. Kulisch [10]. (For these and other early contributions, see [11].) According to [11], it was Sunaga [8] who first foresaw the fundamental connection between the mathematical concept of an interval and its applications to real systems and applied analysis. Applications to the physical sciences, however, have thus far been extremely limited, confined to studies of formal systems through the "intervalization" of their representative differential or algebraic equations [12] [13].
The concept of an interval was spawned by the need, in the numerical applications above, to enclose a real number that can be specified only with limited accuracy, i.e., that cannot be exactly represented on any finite-precision machine. In physical systems, inaccuracy in measurement, coupled with known or unknown uncertainty and variability in physical parameters, initial and boundary conditions, etc., formally inhibits the manifestation of measurable quantities as real numbers to be treated via the machinery of real-number arithmetic and algebra. Special axioms and a special interval arithmetic and algebra were therefore needed to endow the new field with rigorous mathematical foundations.
In numerical analysis, finite intervals of one or more dimensions are seen as extensions of real (or complex) numbers. As mathematical objects, intervals do not in themselves form proper vector spaces [14] [15]. Interval arithmetic and interval algebra have nonetheless been developed by abstracting their real-numbered counterparts, based primarily on set theory and algebraic geometry [16]. Compared to real-number objects, however, intervals have "extended" properties. As we demonstrate below, these properties provide a powerful analytical tool for the description and analysis of real physical systems when property values are represented by Δ-measures.
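As a minimal sketch of the endpoint arithmetic such a foundation axiomatizes (Moore-style rules [9]; the class and example are ours), note in particular the "extended" behavior that distinguishes intervals from real numbers:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other: "Interval") -> "Interval":
        # Endpoint rule: [a,b] + [c,d] = [a+c, b+d]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other: "Interval") -> "Interval":
        # [a,b] - [c,d] = [a-d, b-c]
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other: "Interval") -> "Interval":
        # All four endpoint products; the result encloses every x*y
        ps = (self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi)
        return Interval(min(ps), max(ps))

    def diameter(self) -> float:
        return self.hi - self.lo

# The "extended" behavior: X - X is not {0} but an interval around 0,
# one way interval objects fail to form a proper vector space [14] [15].
X = Interval(1.0, 2.0)
print(X - X)  # Interval(lo=-1.0, hi=1.0), not Interval(0.0, 0.0)
```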

Application to Bell's Theorem
In Sect. 2.1, we demonstrate how, under some conditions, an intervalized but otherwise classically calculated correlation function can be made to come arbitrarily close to its quantal counterpart. The demonstration is essentially a re-casting, using intervals and interval analysis, of a limiting case derived by Bell [17]. Using proxies for the interval-valued correlation functions, together with two basic theorems of interval analysis, we suggest that, under some conditions, the two can be made to come arbitrarily close to each other. In Sect. 2.2, we test this assertion by applying it to a prototypical measurement of the polarization states of an entangled pair of photons.

Theoretical Illustration
We demonstrate in this section the validity of the following assertion: expressed as interval-valued functionals, as opposed to real-number-valued functions, the distance between a classically calculated correlation function of two measured interval quantities and its quantal counterpart can be shown (under some conditions) to arbitrarily approach zero.²

Let the two measured interval variables be $X$ and $Y$, where we assume that both are one-dimensional intervals (generalization to higher dimensions may not be trivial; see, e.g., [18]). We denote their real-numbered values, i.e., their degenerate values, as $x$ and $y$, and the unit vectors along their directions by $\hat{x}$ and $\hat{y}$. By definition, a classically calculated correlation function of $X$ and $Y$, $\langle xy \rangle_\lambda$, will always involve a weighted sum, over the parameter $\lambda$, of their inner product. For the sake of this demonstration we do not distinguish between Riemann and Lebesgue integration; we only assume the existence of an integrable real-valued function or functional. The quantal counterpart, assuming a single entangled state, is a dot product (in the same metric space), which can be expressed as $-\hat{x} \cdot \hat{y}$. For real-valued inner and dot products, it has been shown ([17], Equation (18)) that

$$\left|\, \langle xy \rangle_\lambda + \hat{x} \cdot \hat{y} \,\right| \;\geq\; \epsilon, \qquad (1)$$

where $\epsilon$ is a small number which, however, cannot be made arbitrarily small, i.e., it will always be bounded from below due to the finite precision of any physical measurement. Our demonstration of the assertion made above is essentially a recasting of Equation (1) in its interval analog for intervals $X$ and $Y$, in which the analog of $\epsilon$ is shown to be capable of being made arbitrarily small. The conditions pertain to our assumed low dimensionality of the intervals and of the unit vectors, in addition to the assumed forms of the inner and dot products, our proxy correlation functions.
In lieu of the inner product, we will have an interval-valued integral function, or functional, and in lieu of the dot product of the unit vectors, an assumed interval-valued functional related to the range of the first. The interval analog of the inner product is the enclosure

$$F(Z) = \left[\, \underline{\int_Z} f,\ \overline{\int_Z} f \,\right],$$

where the underline and overline refer to the lower and upper Darboux integrals [13], and where the relevant properties hold over an interval $Z'$ that includes $Z$ and over which the derivative of $F$ exists and does not contain zero. These general properties [13] follow since $F$ is assumed to be an "extension" of the real-valued function $f$, i.e., the integral function whose width is controlled by that of its argument,

$$d(F(Z)) \;\leq\; c\, d(Z),$$

where $d(\cdot)$ denotes the diameter (or width) of its interval argument and $c$ is a constant.
A fundamental property of any extended function $F$ of an interval is its "enclosure" property, i.e.,

$$f(Z) \subseteq F(Z), \qquad (7)$$

where $f(Z)$ is the range of $f$ over the interval $Z$. Again, this relation is not unique for $\hat{x} \cdot \hat{y}$.
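A small sketch of the enclosure property (the toy function f(x) = x(1 − x) and the tuple-based arithmetic are our illustrative choices): the natural interval extension F always contains, and generally overestimates, the true range of f:

```python
def i_mul(a, b):
    # Interval product: hull of the four endpoint products
    ps = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(ps), max(ps))

def i_sub(a, b):
    # Interval difference: [a0,a1] - [b0,b1] = [a0-b1, a1-b0]
    return (a[0] - b[1], a[1] - b[0])

def F(X):
    # Natural interval extension of f(x) = x*(1 - x)
    return i_mul(X, i_sub((1.0, 1.0), X))

X = (0.0, 1.0)
print(F(X))                   # (0.0, 1.0): encloses the true range
# True range of f over [0, 1] is [0, 0.25]; sample to confirm enclosure:
xs = [k / 1000 for k in range(1001)]
vals = [x * (1 - x) for x in xs]
print(min(vals), max(vals))   # 0.0 0.25 -- contained in F(X)
```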
Next, we take advantage of two basic theorems of interval analysis [9] [12].
The first concerns the distance between two intervals, also referred to as the Hausdorff distance, Equation (11); the second concerns the refinement of an interval enclosure obtained by subdividing its interval argument. Together, under the stated conditions, they allow the interval analog of $\epsilon$ in Equation (1) to be made arbitrarily small.

² Clearly, this is only true under the conditions (i.e., low dimensionality of the intervals and the unit vectors) and assumptions made (i.e., the assumed specific forms of the inner and dot products). Applications to different forms and/or any generalization are clearly beyond the scope of the assertion.
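A sketch of how these two theorems combine, using the same toy function (our choice): subdividing the argument into k pieces and taking the hull of the piecewise extensions drives the Hausdorff distance to the true range toward zero. The generic convergence here is O(1/k); the nearly exponential behavior reported for the correlation proxies in Figure 3 is specific to their form:

```python
def f_ext(lo, hi):
    # Natural interval extension of f(x) = x*(1 - x) on [lo, hi]
    s_lo, s_hi = 1.0 - hi, 1.0 - lo
    ps = (lo*s_lo, lo*s_hi, hi*s_lo, hi*s_hi)
    return (min(ps), max(ps))

def hull_over_subdivision(lo, hi, k):
    # Hull of the extension evaluated over k equal subintervals
    step = (hi - lo) / k
    pieces = [f_ext(lo + i*step, lo + (i+1)*step) for i in range(k)]
    return (min(p[0] for p in pieces), max(p[1] for p in pieces))

def hausdorff(a, b):
    # Hausdorff distance between two intervals: max endpoint deviation
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

true_range = (0.0, 0.25)   # range of x*(1 - x) on [0, 1]
for k in (1, 2, 4, 8, 16):
    hull = hull_over_subdivision(0.0, 1.0, k)
    print(k, hull, hausdorff(hull, true_range))  # distance shrinks with k
```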

Numerical Illustration
Our first application of the Δ-measure concept is to an interval-based analysis of a prototypical [19] [20] form of Bell's inequality [17] [21]. The quantum-mechanical probability for a measurement of the polarization states of an entangled pair of photons can be shown to be proportional to the cosine (or sine) squared of the measured polarization angles [21]. (See the Appendix for an illustrated structure of a prototypical Bell test using the entangled spin-states case, which is in essence the same as the polarization-states case but easier to illustrate.) To intervalize Equation (13), we re-express the measured angles, $\theta_1$ and $\theta_2$, as intervals, so that the inequality of Equation (13) is retained when expressed in its interval form, Equation (14). Intervalized, Equation (14) suggests that each side of the inequality is itself an interval. Note that the sine of an interval is also an interval, since the sine function will map every point in the interval argument to a point in the interval image of the function.
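A sketch of the interval image of the sine function (our implementation): the endpoints alone do not suffice when a critical point of sin falls inside the argument, so those are included explicitly:

```python
import math

def interval_sin(lo, hi):
    """Exact image of the interval [lo, hi] (radians) under sin,
    honoring any interior maxima/minima of the function."""
    if hi - lo >= 2.0 * math.pi:
        return (-1.0, 1.0)
    vals = [math.sin(lo), math.sin(hi)]
    # Include extrema at pi/2 + n*pi that lie inside [lo, hi]
    n = math.ceil((lo - math.pi / 2.0) / math.pi)
    crit = math.pi / 2.0 + n * math.pi
    while crit <= hi:
        vals.append(math.sin(crit))
        crit += math.pi
    return (min(vals), max(vals))

# The sine of an interval is an interval, e.g. for a 1 deg wide angle:
lo, hi = math.radians(29.5), math.radians(30.5)
print(interval_sin(lo, hi))  # roughly (sin 29.5 deg, sin 30.5 deg)
```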
The enclosure property, Equation (7), can be used, as an example, to show this containment explicitly. Another intriguing property of interval functionals is their dependence on the algebraic form, or structure, of the enclosing function $f$, or of its extended pair $F$.
"Inequality violation" of Equation (14) is when the left hand side of the equation minus the right hand side becomes negative. This is indeed seen for the quantum-mechanically calculated probabilities at various angles and over extended domains of none-zero measures (see Figure 1). For our demonstration, however, all we need is to choose carefully a small set of angles (or even a single set of angles) at which Equation (13)    This is due both to the structure of Equation (14) and the size of the error in the measured angles, i.e., not just the presence of the error itself. For this particular photon polarization-angle example, Figure 2 suggests that uncertainties in the measured angles need to be less than 0.05 deg to differentiate clear violation from no violation.
As mentioned above, the calculated no-violation probabilities using interval-based quantities appear to depend on the algebraic structure of the inequality itself. A critical parameter in the interval estimation of the probabilities is the Hausdorff distance, Equation (11). In Figure 3, we show the dependence of this distance on the number of subdivisions needed for the proxy classical correlation interval to enclose the proxy quantal correlation interval. The distance is normalized to the diameter of the interval at each k, such that a distance of unity is the smallest possible distance. Here, k = 16 seems to give a rapid (but not excessively rapid) convergence, almost exponential rather than geometric in character. This feature may be important in designing Bell tests optimized for error constraints and for the algebraic form or structure of the inequality.
Rapid convergence (the "right" form of the inequality) can compensate for the size of the measurement error. In this particular illustration, however, given the exponential convergence, the form of the inequality, Equation (14), seems less of a limiting factor than the size of the error in the measured angles.

Discussion
We have introduced and motivated the use of finite intervals, which we call Δ-measures, to represent physically measurable quantities in place of the real-numbered representation, which we consider untenable. We demonstrated the utility of Δ-measures using theoretical and numerical illustrations. Our theoretical demonstration, an interval-based recasting of Bell's inequality using proxy correlation functionals, shows that, under some conditions, the distance between the classically calculated correlation of two measured interval quantities and its quantal counterpart can be made to arbitrarily approach zero.

Appendix: The Structure of a Bell Test
In 1964, the Irish physicist John S. Bell proposed a revolutionary theorem that made it possible to test for the existence of quantal correlations between entangled objects. His theorem showed that violations of a classically derived probability inequality could be tested so as to prove that classical correlations of detected particles cannot be made arbitrarily close to quantal correlations (see, e.g., [29] [30], and references therein). We illustrate the Bell theorem and tests with an example Bell-test structure comprising these key elements (see Figure 4): 1) a source of twin photons, P1 and P2, entangled with the same quantum spin state; 2) a set of two detectors, D1 and D2, one for each of the entangled pair; 3) an adjustable relative angle, θ, between the two detectors, along with relationships of the classical and quantal correlation functions to the relative detector angle (Figure 5).
If quantal correlations are as predicted by the theory, Bell-test data show a cosine-squared relationship of correlation with respect to the relative detector angle (the red curve in Figure 5). If classical correlations are correct, on the other hand, the relationship will be linear (the blue curve in Figure 5). Figure 6 illustrates the classical case: the green arc shows where the two detectors will agree, i.e., be correlated, while the red arc shows where they will disagree. Clearly, as θ increases linearly, the green arc will diminish linearly and the red arc will increase linearly. This shows the relationship of correlation to relative detector angle to be linear for the classical case. Linearity can also be appreciated to stem from the assumed uniform distribution of a random θ.
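The linearity can be made explicit. Assuming a hidden direction distributed uniformly on the circle (a sketch of the arc argument above, in our notation), the agreement probability falls linearly from unity at zero offset to zero at opposite detector settings:

$$P_{\text{agree}}(\theta) \;=\; 1 - \frac{\theta}{180^{\circ}}, \qquad 0^{\circ} \leq \theta \leq 180^{\circ},$$

which is the blue (classical) curve of Figure 5.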
The quantal case is very different, as illustrated by Figure 7. Quantum theory dictates that when D1 detects the P1 spin, for example spin up, the quantum spin state of P2 must assume the same spin angle, i.e., spin up. So P2 must strike D2 with the D1-detected angle of P1. But since D2 is at a relative angle θ to D1, the P2 quantum spin state must be projected onto D2, i.e., multiplied by cos θ. Since quantum probability is the square of the state amplitude, the multiplier becomes cos²θ. This means the probability of a D1 detection being the same as a D2 detection, i.e., the probability of agreement, or correlation, is a function of cos²θ. So, if quantum predictions are correct, Bell-test data will reproduce the red curve in Figure 5 for many measurements of random spin and random detector angles. If classical predictions are correct, the blue curve will be reproduced.
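A sketch contrasting the two predictions (the hidden-variable agreement rule and trial count are our illustrative modeling choices; the projection rule is the one described above):

```python
import math, random

def classical_agreement(theta_deg, trials=100_000):
    """Hidden-variable model: a shared random direction lambda; each
    detector reports 'up' when lambda lies within 90 deg of its axis."""
    theta = math.radians(theta_deg)
    agree = 0
    for _ in range(trials):
        lam = random.uniform(0.0, 2.0 * math.pi)
        d1 = math.cos(lam) >= 0.0              # detector axis at 0
        d2 = math.cos(lam - theta) >= 0.0      # detector axis at theta
        agree += (d1 == d2)
    return agree / trials

def quantum_agreement(theta_deg):
    """Projection rule from the text: amplitude cos(theta), hence
    probability cos^2(theta) that D2 repeats the D1 outcome."""
    return math.cos(math.radians(theta_deg)) ** 2

for theta in (0.0, 22.5, 45.0, 67.5, 90.0):
    print(theta, round(classical_agreement(theta), 3),
          round(quantum_agreement(theta), 3))
# Classical agreement falls linearly (1 - theta/180); quantal follows cos^2.
```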
Many actual Bell tests have consistently reproduced the quantum prediction.
However, there is a critical built-in assumption for the classical case and the Bell inequality: that property values, such as spin or polarization, are real-numbered values.
But, as we have argued in this paper, if one replaces real-numbered values with "quasi-classical" interval values, or Δ-measures, the differences between the two results may not be as pronounced or as differentiated, at least under some conditions (see Figure 8). One obvious consequence of this finding is that the validity of using conventional Bell tests to demonstrate quantum correlations may be less compelling under a Δ-measure representation than under a real-number representation.

Figure 7. Quantum correlation as a function of cos²θ. P2 assumes the direction of the P1 state detected by D1, e.g., spin up. This state is then projected onto the D2 up direction. The projected amplitude is squared so as to obtain the absolute probability.