Realistic Simulations of EPR Experiments: Nonlinear Concept for New Principle of Non-Locality and for Realistic Interpretation of Quantum Mechanics

Abstract

This paper demonstrates the capability to simulate EPR experiments in a realistic way (i.e. without transmission of information between the source and the detector) and further confirms Bell's theorem by demonstrating that effectively non-local variables are required to violate Bell's inequality. Unlike all EPR experiments, which simply note the inadequacy of local modeling, this modeling proposes a new way of non-local idealization. These simulations run on a spreadsheet and idealize objects with a non-linear notion, the extended object. This makes it possible to set up a new principle of locality (or non-locality) and gives a new meaning to the loopholes of EPR experiments. It sheds new light on several concepts of Quantum Mechanics (QM): the quantum state (which would represent an equivalence class of many different indistinguishable objects seen as equivalent in terms of measurement), the wave function (which would represent a probability on this equivalence class seen as a single set, despite different probabilities of occurrence for the individual objects composing this class), the measurement (in our simulation, the measured object is not modified), and the superposition of states (in our simulation, Schrödinger's cat is never both dead and alive). These simulations lead to several kinds of EPR experiments with violations of Bell's inequalities by values different from those of QM, and they allow entanglements with more than 2 objects (whatever the number of Alices and Bobs) or even with more than 2 results (beyond the only "+1" or "−1" outputs). Many questions remain to be explored in the physics of quantum phenomena (teleportation, encryption, quantum computing, etc.), but this theoretical approach also reveals the necessity of developing new non-linear mathematical tools and seems to show that QM could finally be a limit case of a more fundamental nonlinear physics theory founded on these extended objects. This observation is strangely reminiscent of developments such as string theory.


1. Introduction

Quantum mechanics (QM) is a theory that works exceptionally well mathematically but which poses problems of physical interpretation [1]. Thus, it is for example difficult to provide a physical interpretation of the wave function, as well as of the non-locality of quantum mechanics demonstrated in the EPR (Einstein-Podolsky-Rosen) experiments [2] (otherwise an instantaneous transmission of information, or at least one faster than the speed of light, would be necessary). In this article, we demonstrate, through numerical simulations, the possibility of violating Bell's inequalities for the EPR experiments in a realistic way, without local hidden variables (as required by Bell's theorem), but with a new concept of idealization (the extended object, leading to a new principle of locality or of non-locality). This solution will make it possible to understand the violation of these inequalities, to reinterpret the experimental results of the EPR experiments (in particular, the loopholes will take on a new meaning), to provide a new perspective on the non-locality of quantum mechanics (without having recourse to superluminal information transmission) and, more generally, to initiate a new point of view on the interpretation of QM (starting with the eigenvectors and eigenvalues of observables).

2. EPR Experiment

2.1. Description of the Experiment

The simulations (of these digital EPR experiments) presented in this article obtain a systematic violation of Bell's inequalities, meaning that these simulations verify an idealization of a non-local reality. Unlike theories with hidden variables, conventionally designed not to violate these inequalities (i.e. not to exceed a value of 2), in our model we will only be able to obtain values greater than 2 (the violation predicted by QM being about 2.83). Figure 1 is the schematic diagram that corresponds to the ideal EPR experiment that we will model and simulate:

2.2. Description of the Source Objects

In the context of our simulation, the source will correspond to the random generation not of one direction but of a couple of arbitrary directions. Each direction will be characterized by an angle between 0˚ and 180˚. We will define all these angles with respect to an absolute frame of reference. The length of the segment carrying a direction is not important in our experiment; we can assume it is equal to 1. We can thus represent our source objects in the form of crosses with branches that are more or less spaced (cf. Figure 2), the 2 branches corresponding to diameters being the couple of directions.

1st important remark: The fact that the source provides 2 characteristics for the same source object is the 1st important point of this idealization. As we will see later, it is certainly the keystone of this modeling class.

2.3. Description of the Detector

A detector will symbolically consist of 1 circle divided into 4 sectors of 90˚ each (see Figure 3) defined on a frame of reference fixed on this circle. The 4 sectors in this frame of reference are always between 0˚ and 90˚ for the 1st sector, 90˚

Figure 1. Schematic diagram of the ideal EPR experiment modeled in this study.

Figure 2. Source objects. Two examples of source objects (a) and (c) with their corresponding explicit idealization (b) and (d). The indicated angles correspond to the angles of the 2 directions with respect to the absolute reference frame.

Figure 3. Detectors. Division into 4 sectors which can give the detection values “+1” or “−1”. The examples of angles indicated (22.5˚, 45˚ and 67.5˚) correspond to the angles of the detector with respect to the absolute reference frame.

and 180˚ for the 2nd, 180˚ and 270˚ for the 3rd and 270˚ and 360˚ for the 4th. We assign a value +1 to the 1st and 3rd sectors and a value −1 to the 2nd and 4th sectors. This value will be used to define the output value of the detector. The orientation of the detector will be defined in the absolute frame of reference mentioned previously (i.e. the same as for the source objects) by the angle of its axis noted "0˚" in Figure 3. The detector will analyze the 2 directions of the source object by applying the following rule:

If the 2 directions of the source object are in one or two sectors of the same value, the detector outputs the value of the sector(s).

2nd important remark: This rule allows any orientation of the detector to potentially provide a value for any source object. On the other hand, this also means that for a fixed orientation of the detector, certain source objects may not provide a value (those whose directions are in 2 sectors of different values) and are therefore seen as not detected. This is the 2nd crucial point of our modeling (a direct consequence of the previous "1st important remark").

Because non-detections are certainly the most delicate point to accept at first glance, we will devote a dedicated section to them in which we will show that they:

・ Are impossible to avoid.

・ Cannot correspond to a 3rd value (which would complete the possible outputs “+1” and “−1”).

・ Are of a different kind than the loopholes of real EPR experiments.

The 2 previous important remarks will be the fundamental characteristics of the modeling class represented by the model that we are going to study. But let’s continue to describe our modeling.

2.4. Numerical Modeling of the Source Objects

Our source objects are defined by a couple of directions. We will determine the 1st direction of the couple by randomly choosing a 1st angle between 0˚ and 180˚ with respect to the absolute frame of reference. We will then randomly choose a second angle between 0˚ and 180˚ to determine the deviation of the 2nd direction from the 1st direction (called spacing in the following). This second angle will therefore be added to the angle of the 1st direction to define the 2nd direction. We could very well have used this second angle to define the 2nd direction directly (and that would not have changed the results presented in this article). We define the 2nd element of the couple rather as the spacing from the 1st direction because our experiments will show that this spacing is a structuring parameter of our modeling. Indeed, the different values of violation of the Bell inequalities will depend on this parameter. This representation of the couple will therefore be more relevant.

There are therefore two ways to introduce this angular spacing as a parameter of our idealization (named SpacMax). In both cases, the value of SpacMax is defined and fixed for all the trials of the experiment and is chosen between 0˚ and 180˚. It is this SpacMax parameter that structures the variety of Bell inequality violation values. But it can be used either as a fixed spacing (at each trial, the spacing would be the same and equal to SpacMax) or as a maximum angle of spacing (at each trial, the spacing would be randomly chosen in the interval [0˚; SpacMax] rather than [0˚; 180˚], and would then change at each trial). In our study, we mainly focus on the choice of the maximum angle of spacing. Once again, we could very well define a fixed spacing for the whole experiment, meaning that each source object would have 2 directions with always the same spacing, and only the orientation of this couple would change at the source. We would then still systematically obtain the violation of Bell's inequalities. We do not make this choice for this modeling because, as we will see hereafter, even if one obtains the expected QM results, this idealization (with fixed spacing) gives curves which appear slightly worse than those obtained with maximum spacing.

Let’s start by checking the expected statistics of basic QM experiments.

2.5. Experiment with 1 Detector Whose Orientation is Random for Each Trial (See File “01-ONE randomly oriented detector.xlsx”):

Concretely, Table 1 presents the implementation of the orientation of the detector (column D) and the definition of the source objects, 1st direction (column E) and spacing (column F) in a spreadsheet. Line 1 gives the formulas used:

The role of the modulo (in the definition of the 2nd direction, column F) is to provide a definition of the couple in the upper semicircle. When the angle defining the 2nd direction (sum of the angle defining the position of the 1st direction and the spacing) exceeds 180˚, the 2nd direction (red line before modulo in Figure 4) is then

Table 1. Description of the implementation’s elements of the source objects.

Figure 4. The source object defined by (145˚; 145˚ + 50˚ = 195˚) is redefined by (145˚; 15˚).

defined by its semi-line (blue line after modulo) contained in the upper semi-circle (Figure 4).

Thus any couple can be defined in 2 ways; for example, in the case of Figure 4, (1st dir = 145˚; spacing = 50˚), which translates into (1st dir = 145˚; 2nd dir = 15˚), or (1st dir = 15˚; spacing = 130˚), which translates into (1st dir = 15˚; 2nd dir = 145˚). This "duplication" has no impact on the statistics because it is shared by all source objects.
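For readers without the spreadsheet files at hand, here is a minimal Python sketch of the source generation described above. The function name, the chosen value of SpacMax and the use of Python's random module are illustrative assumptions, not the exact spreadsheet formulas of line 1:

```python
import random

SPAC_MAX = 30.0  # hypothetical example value of the SpacMax parameter, in degrees

def make_source_object(spac_max=SPAC_MAX):
    """One source object: a couple of directions in the upper semicircle [0; 180)."""
    first_dir = random.uniform(0.0, 180.0)       # 1st direction, absolute frame
    spacing = random.uniform(0.0, spac_max)      # spacing drawn in [0; SpacMax]
    second_dir = (first_dir + spacing) % 180.0   # folded back into [0; 180) by the modulo
    return first_dir, second_dir
```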

To determine the outputs of the detector (Table 2), we convert the 2 directions of the source object into the detector frame of reference (columns G and H) by subtracting the orientation of the detector (the 0˚ axis of Figure 3). Then, we look in which sectors these 2 directions lie to obtain the result "+1", "−1" or no result (column I), which corresponds to the rule mentioned previously. The implementation used is shown in line 1:
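A possible Python transcription of this detection rule (a sketch consistent with Figure 3, not the literal spreadsheet formula; all names are hypothetical):

```python
def sector_value(angle_deg):
    """Value (+1 or -1) of the sector containing a direction, in the detector frame."""
    a = angle_deg % 360.0
    return +1 if (0.0 <= a < 90.0) or (180.0 <= a < 270.0) else -1   # 1st/3rd vs 2nd/4th sectors

def detect(source, detector_orientation):
    """Detection rule: a result only if both directions fall in sectors of the same value."""
    d1, d2 = source
    v1 = sector_value(d1 - detector_orientation)   # direction expressed in the detector frame
    v2 = sector_value(d2 - detector_orientation)
    return v1 if v1 == v2 else None                # None stands for an undetected event
```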

To analyze the results of the detector (Table 3), we separately count the “+1” (column J and O), the “−1” (column K and P) and the number of detected events, i.e. giving a result (column N). For information, we also count the number of trials (column L) and the number of undetected events (column M). Here is the implementation in line 1:

For a random choice of the orientation of the detector at each trial, we always obtain a probability of 0.5 of getting "+1" and 0.5 of getting "−1", whatever the value of the spacing (SpacMax) of the directions (Table 4), in agreement with the expected results of QM:
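Under the same assumptions as the sketches above, the counting of Tables 3 and 4 can be reproduced by a short loop (the trial count and all names are illustrative):

```python
def run_one_detector(trials=5500, spac_max=30.0, fixed_orientation=None):
    """Counts of '+1', '-1' and undetected results for one detector."""
    counts = {+1: 0, -1: 0, None: 0}
    for _ in range(trials):
        orientation = (fixed_orientation if fixed_orientation is not None
                       else random.uniform(0.0, 360.0))   # random orientation at each trial
        counts[detect(make_source_object(spac_max), orientation)] += 1
    detected = counts[+1] + counts[-1]
    return counts[+1] / detected, counts[-1] / detected, counts[None]

p_plus, p_minus, undetected = run_one_detector()   # p_plus and p_minus both close to 0.5
```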

2.6. Experiment with 1 Detector Whose Orientation is Fixed for Each Trial (See File “02-ONE detector oriented in ONE direction.xlsx”):

We obtain the same results if the orientation of the detector is fixed for all the

Table 2. Description of the implementation’s elements of the detector.

Table 3. Description of the implementation’s elements of the results’ analysis.

Table 4. Results of an experiment with 1 detector whose orientation is random for each trial.

trials (Table 5) in agreement with the expected results of the QM:

The advantage of a numerical simulation of an EPR experiment is that one can study undetected events. In Figure 5, the number of undetected and detected events has been indicated for comparison. Note that as long as the spacing is not zero, some non-detections are always obtained. As we will discuss later, these non-detections are not of the same nature as the loopholes in the experimental results:

2.7. Experiment with 2 Detectors, the First with a Fixed Orientation and the Second Whose Orientation Varies (See File "03-TWO detectors oriented in TWO directions.xlsx"):

Another advantage of a digital simulation is that the detected objects are neither destroyed nor modified. We can therefore analyze them again after a first detection. In Figure 6, we first passed the source objects through a 1st detector and then passed through a 2nd detector only those which had given the result "+1". This same set of source objects giving "+1" was redirected to several 2nd

Table 5. Results of an experiment with 1 detector whose orientation is fixed for each trial.

Figure 5. Comparison of the cardinality of the sets of detected and non-detected events depending on the maximum spacing.

Figure 6. Probabilities obtained with our simulation on a detector, from a source set previously filtered on the value “+1” (after passing through a 1st detector), compared to theory (gray curve).

detectors to analyze this set according to the orientation of a detector. It is as if we were multiplying the detected “photon” (in our case, 19 times) in its initial state so that this same “photon” is studied on several detectors (in our case, 19 second detectors) of different orientation in parallel (19 detectors, each one with an angle of orientation of 0˚, 10˚, 20˚…, 180˚):

The gray curve represents the cos²α curve of the orientation angle α of the detector expected by QM. It superimposes very well on the probability distribution of measuring "+1" in our experiment.
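A sketch of this filter-then-remeasure procedure, built on the hypothetical helpers above (the number of kept objects is an arbitrary illustrative choice):

```python
def filter_on_plus(n_kept=20000, spac_max=30.0, first_orientation=0.0):
    """Source objects that gave '+1' on a 1st detector of fixed orientation."""
    kept = []
    while len(kept) < n_kept:
        s = make_source_object(spac_max)
        if detect(s, first_orientation) == +1:
            kept.append(s)
    return kept

def probability_plus(sources, orientation):
    """P('+1') on a 2nd detector, computed on the detected events only."""
    results = [detect(s, orientation) for s in sources]
    detected = [r for r in results if r is not None]
    return detected.count(+1) / len(detected)

plus_set = filter_on_plus()
curve = {alpha: probability_plus(plus_set, alpha)   # expected to follow cos^2(alpha)
         for alpha in range(0, 190, 10)}            # the 19 second detectors of the text
```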

2.8. 1st EPR Experiment (cf. File “04-Entanglement E(a,b).xlsx”)

Let us check that our model satisfies the expected statistics of an EPR experiment for E(a, b) = P++(a, b) + P−−(a, b) − P+−(a, b) − P−+(a, b). For this, we take our experiment on one detector and we send the same source object to a 2nd detector (in parallel) oriented independently from the 1st one. From a theoretical point of view, the entanglement is thus modeled by a linear combination of an identically duplicated source, i.e. by the quantum states traditionally noted |↑↑⟩ and |↓↓⟩. The correlation analysis is carried out on the results of the 2 detectors by counting the results {+1; +1}, {−1; −1}, {+1; −1} and {−1; +1} (columns Z, AA, AB and AC in Table 6). Their probabilities (columns AJ, AK, AL and AM) are obtained on the basis of detected events (column AI). For information, we also count the undetected events (columns AF and AG):
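A possible sketch of this correlation computation with the hypothetical helpers above (the trial count is illustrative; as in the text, only events detected by both detectors enter the estimate):

```python
def correlation(a_deg, b_deg, trials=20000, spac_max=30.0):
    """E(a, b) estimated on the events detected by both detectors."""
    counts = {(+1, +1): 0, (-1, -1): 0, (+1, -1): 0, (-1, +1): 0}
    for _ in range(trials):
        s = make_source_object(spac_max)          # the same source object goes to both detectors
        ra, rb = detect(s, a_deg), detect(s, b_deg)
        if ra is not None and rb is not None:     # only doubly detected events are counted
            counts[(ra, rb)] += 1
    n_detected = sum(counts.values())
    return (counts[(+1, +1)] + counts[(-1, -1)]
            - counts[(+1, -1)] - counts[(-1, +1)]) / n_detected
```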

Here are the results obtained for a maximum spacing value of 30˚ for each source object (each couple of directions). The green curve is the theoretical curve expected by quantum mechanics, E(a, b) = cos(2θ), with θ the difference in orientation of the 2 detectors (Figure 7).

We note the very good agreement between our model and the theory: the green (theoretical) and blue (measured) curves are indistinguishable. This is the reason for our choice of a maximum spacing of 30˚ for the couple of directions in Figure 7.

Our model also allows obtaining violations of Bell's inequalities by values greater than those predicted by QM, for other values of spacing of the source objects (Figure 8 and Figure 9):

Or a violation of Bell's inequalities by values lower than those predicted by QM, for other values of spacing of the source objects (Figure 10).

Finally, when we reduce our extended object (couple of non-collinear directions) to a simple vector (couple of collinear directions) by imposing a maximum

Table 6. Description of the implementation’s elements of the results’ analysis for the EPR experiments.

Figure 7. EPR probabilities of our simulation for the case of a maximum spacing of 30˚ for each source object (situation allowing to obtain values close to those of QM).

spacing of 0˚ of the source objects, we get back the classical case which does not violate Bell's inequalities (Figure 11).

Note: The simulation makes it easy to verify that our model is isotropic. Indeed, the same results are obtained for the same difference of angle of the 2 detectors, whatever their mutual orientation (for example, the statistics are the same for {D1 = 0˚, D2 = 20˚} or {D1 = 20˚, D2 = 40˚}).

Figure 8. EPR probabilities of our simulation for the case of a maximum spacing of 150˚ of our source objects (situation allowing to obtain values greater than those of QM).

Figure 9. EPR probabilities of our simulation for the case of a maximum spacing of 180˚ of our source objects (situation allowing to obtain values greater than those of QM).

Figure 10. EPR probabilities of our simulation for the case of a maximum spacing of 15˚ of our source objects (situation allowing to obtain values lower than those of QM).

Figure 11. EPR probabilities of our simulation for the limit case of 2 collinear directions (maximum spacing of 0˚ of our source objects) corresponding to a situation not violating Bell’s inequalities.

2.9. 2nd EPR Experiment (cf. File “05-Entanglement and violation of Bell’s inequalities.xlsx”)

Now that all the elements are in place, we can verify the violations of Bell inequalities in our model, which are simply the post-processing of the previous data. The value expected by QM is S_QM(θ) = E(a, b) − E(a, b′) + E(a′, b) + E(a′, b′) = 2√2 ≈ 2.83 for direction values of detector 1, a = 0˚ or a′ = 45˚, and direction values of detector 2, b = 22.5˚ or b′ = 67.5˚. In our model, a spacing of 27.5˚ gives excellent results (Table 7):

Table 7. Violation of the Bell’s Inequality as expected in Quantum Mechanics.

But we can violate inequalities with greater values (Table 8):

Table 8. Violation of the Bell’s Inequality with a greater value than expected in Quantum Mechanics.

Or with lower values (Table 9):

Table 9. Violation of the Bell’s Inequality with a lower value than expected in Quantum Mechanics.
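The post-processing behind Tables 7-9 can be sketched as follows, reusing the hypothetical correlation helper above. Note that this sketch draws a fresh set of source objects for each E term, which is a simplifying assumption rather than the exact spreadsheet procedure:

```python
import math

def bell_S(a, a_prime, b, b_prime, spac_max):
    """CHSH quantity S = E(a,b) - E(a,b') + E(a',b) + E(a',b')."""
    return (correlation(a, b, spac_max=spac_max)
            - correlation(a, b_prime, spac_max=spac_max)
            + correlation(a_prime, b, spac_max=spac_max)
            + correlation(a_prime, b_prime, spac_max=spac_max))

# With a maximum spacing of 27.5 deg, S should come out close to 2*sqrt(2) ~ 2.83 (Table 7);
# larger spacings give S above that value and smaller spacings give S below it (Tables 8 and 9).
S = bell_S(0.0, 45.0, 22.5, 67.5, spac_max=27.5)
print(S, 2 * math.sqrt(2))
```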

Our solution allows obtaining a range of violations of the Bell inequalities (roughly in the interval S_QM(θ) ∈ [2; 3.5[ for the previous directions a, a′, b, b′). In Figure 12, some measured values are presented as a function of the spacing

Figure 12. Some examples of values S_QM(θ) of violation of the Bell inequalities (different from those of QM) obtained by our simulation for different spacings.

of the couple of directions of the source objects. These measurements were carried out with the same values of direction of detector 1, a = 0˚ or a′ = 45˚, and direction of detector 2, b = 22.5˚ or b′ = 67.5˚. It is therefore possible that for these directions (which correspond to the maximization within the framework of the distributions of traditional QM), the violation is not maximal.

Note that we also obtain the same results as those predicted by QM in the other case of maximization of the violation of Bell's inequalities (S_QM(θ) = 2√2 ≈ 2.83), namely for the values of direction of detector 1, a = 0˚ or a′ = 135˚, and direction of detector 2, b = 67.5˚ or b′ = 202.5˚.

3. Analysis

3.1. Distribution Curves of Detections and Non-Detections

An important point to begin this analysis (in particular concerning the set of non-detections) is that all source objects are detectable, i.e. can potentially give a result. It means first that all the source objects pass through the detector, and secondly that if a source object is undetected (giving no result during its passage through the detector), this source object can become detected by rotating the detector. So, by rotating the detector in all possible directions, all the source objects can be detected. This means that no source object configuration can be declared undetectable before detection. One can also add that the source emits objects in an isotropic way, because their definition occurs randomly in all directions.

If we choose source objects with the same spacing and we look at the different possibilities of detection in a sector (Figure 13), we understand that the number of possible detected source objects for the same spacing will be lower when considering spacing close to the size of a sector (in this case 90˚) than for small spacing.

By repeating our experiment on a detector always oriented in the same way but with source objects emitted with a fixed spacing (and no longer emitted with a spacing chosen at random in an interval defined with a maximum spacing), we obtain the curves giving the number of detections of “+1”, “−1” and non-detections depending on the fixed spacing of the source objects (Figure 14).

These curves in Figure 14 quantitatively reflect the consequences of Figure 13. All these experiments for different fixed spacings were carried out with the same number of trials (5500). One verifies that the non-detection curve can also be obtained as the complement to 5500 of the sum of the "+1" and "−1" detections.
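A sketch of the fixed-spacing counting behind Figure 14, under the same assumptions as the previous helpers (the fixed detector orientation of 0˚ is an illustrative choice):

```python
def counts_for_fixed_spacing(spacing, trials=5500, orientation=0.0):
    """Numbers of '+1', '-1' and undetected results for source objects of fixed spacing."""
    counts = {+1: 0, -1: 0, None: 0}
    for _ in range(trials):
        first_dir = random.uniform(0.0, 180.0)
        source = (first_dir, (first_dir + spacing) % 180.0)   # spacing is fixed, not drawn at random
        counts[detect(source, orientation)] += 1
    return counts

figure_14_points = {s: counts_for_fixed_spacing(s) for s in range(0, 181, 10)}
```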

Figure 13. For spacing close to the size of a sector, there are fewer possible instances of detection (top) than for small spacing (bottom).

Figure 14. Number of measured “+1”, “−1” and undetected results on a detector oriented in only one direction as a function of the fixed value of the spacing of the source objects. 5500 trials were carried out per experiment, i.e. for each point of the curve (cf. file “06-ONE detector oriented in ONE direction -Fixed spacing.xlsx”).

We can still make a last remarkable observation on the distribution of non-detections. Only the case of source objects with zero maximum spacing makes it possible to have an empty set of non-detections. And zero maximum spacing means a couple of 2 collinear directions, i.e. a couple simply representing one vector. That is to say, it is the classic case of a local hidden variable (which does not violate Bell's inequalities). And effectively, this is the limit case of our simulation that gives S_QM(θ) = 2.

As announced previously, let us show that the non-detections of the previous experiments:

・ Are impossible to avoid.

・ Cannot correspond to a 3rd value (which would complete the possible outputs “+1” and “−1”).

・ Are of a different kind than the loopholes of real EPR experiments.

3.2. Impossibility of Avoiding the Non-Detections

In the context of our experiment, we notice that if the spacing of the 2 directions needed for the measurement is around 90˚, these source objects will very probably be found in the set of non-detections, because the 2 directions will very probably fall in 2 sectors of different values. Nevertheless, there are always some configurations for which they can be detected (Figure 13). By dividing our detector into more than 4 sectors or into fewer than 4 sectors, one would allow some of these undetected objects to fall into 2 symmetrical sectors (and become detected). But unless there is no division at all on the disk of the detector (the detector then always measures the same value, which is an uninteresting case), the frontier between two sectors systematically implies the existence of non-detected objects, because one can always have a couple of directions lying on both sides of the frontier (a specificity of this extended source object). Furthermore, whatever the division of the detector, certain couples of directions which previously were in 2 symmetrical sectors (detected objects) will end up in the group of non-detections. Finally, there is a systematic transfer of elements between the groups of the undetected and of the detected. This comes from the fundamental fact that our source object is idealized by a couple of non-collinear directions. It is therefore an object irreducible to a value (irreducible to 1 point or 1 vector). Recall that, as seen previously, the limiting case of null spacing (therefore of couples of collinear directions) gives the limiting case of non-violation of Bell's inequalities, S_QM(θ) = 2. And only this case allows a detection rate of 100% (together with the other uninteresting case of a detector without division into sectors). In conclusion, it is impossible to avoid the non-detections for this kind of idealization.

Remark about Heisenberg's uncertainty principle: If the sectors of the detector are multiplied, and therefore their size reduced, a detection will make it possible to obtain a better precision on the orientation of the measured object, but consequently many source objects will no longer be detected. This is reminiscent of Heisenberg's uncertainty principle: what is gained on one side is lost on the other. It is not strictly equivalent, but it is certainly founded on the same fundamental basis (an extended object that goes beyond the linear/vectorial idealization).

3.3. Non-Detections Do Not Correspond to a 3rd Value in Our Experiment

Let’s now see in the case of a detector with 2 values what results give the detected and undetected source objects. In Figure 15, we have plotted the results of a detector as a function of the orientation and of the spacing of the source object.

If the result of the detection is "+1" (blue zone in Figure 15), nothing can be said about the spacing value (the entire interval [0˚; 180˚] is possible), but for the orientation the interval is divided by 2 (only the interval [0˚; 90˚] is possible). With the result "−1" (pink zones in Figure 15), the other half of the orientation interval (only the interval [90˚; 180˚]) is possible. This is consistent with the fact that this experiment is a measure of the eigenvalues of the orientation observable (and not of the spacing). On the other hand, and this is the important point, the set of non-detections is spread over the entire spacing and orientation intervals; a non-detection therefore gives us no information on the orientation and thus cannot be considered as a 3rd value.
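The map of Figure 15 can be sketched by scanning the (1st direction, spacing) plane of the source objects for a detector of fixed orientation; the 0˚ orientation and the 1˚ step are illustrative assumptions:

```python
def result_map(step=1, detector_orientation=0.0):
    """Result (+1, -1 or None) for every (1st direction, spacing) of the source objects."""
    grid = {}
    for spacing in range(0, 181, step):
        for first_dir in range(0, 181, step):
            source = (first_dir, (first_dir + spacing) % 180.0)
            grid[(first_dir, spacing)] = detect(source, detector_orientation)
    return grid
```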

About wave function collapse: Strictly speaking, in our experiment there is no wave function collapse and the measured object is not modified. But in terms of probabilities, the measurement reduces the possibilities (to half of the orientation interval) and thereby modifies the probabilities of the possible measured source objects. This reduction could partially correspond to the QM wave function collapse, but this action of reduction does not concern the physical measured object, only the mathematical probabilities associated with it.

3.4. Non-Detections of the Simulation are not the Loopholes of Real EPR Experiments

In our simulation, we completely control the undetected source objects. All

Figure 15. Representation of all the results obtained from all the possible source objects (i.e. spacing and orientation in [0˚; 180˚]).

emitted objects pass through the detector. And we know why they do not provide measurement data: because the directions of the couple are in two neighboring sectors. Whereas in EPR experiments we have much less control over what is emitted. The lack of detection in real experiments is generally explained by imperfection in the detection or loss of emitted objects. These losses could then be source objects that could have been measured (not belonging to the set of non-detections). It is these lost detections that could lead to changes in the statistics, what are called loopholes [3]. The non-detections in our simulation are of a different nature because numerically there is no loss (100% of the source objects pass through the detector). Moreover, these non-detections are an unavoidable part of our digital experiment. But conversely, this study teaches us that, if our solution is representative of quantum mechanical experiments, the non-detections of real EPR experiments are not exclusively losses but also unavoidable non-detections (objects passing through the detectors but giving no result).

Remark: The existence of this set of non-detections means that the possible responses to a measurement are no longer simply binary but "ternary", with indeterminacy/non-measurability as a third possible response. The quotes mean that this 3rd element is of a different nature from the 2 other values. This is reminiscent of what happens in mathematical logic for the truth values of a formula, which can be true or false. There is a 3rd way, a Gödelian "loophole", by which a formula can be undecidable. And this 3rd way is not truly a loophole or a bias in reasoning, but a foundation of logic.

3.5. Efficiency Rate and Detection Rate

Our simulations then reveal an important characteristic of the efficiency rate in EPR experiments. In the framework of a classical hidden variables model, the efficiency is expected to be able to reach 100% and should only measure the losses of source objects. In our framework of nonlinear idealization, this efficiency rate must also include analyzed source objects giving no result. Therefore, the efficiency rate of real experiments is in fact the combination of a loss rate and a non-detection rate. The experimental loopholes would correspond only to the loss rate, which is absent from our simulation.

For our simulation, in the case of a maximum spacing of 27.5˚ (giving S_QM(θ) = 2√2 ≈ 2.83), the efficiency rate of the detectors is around 85%. For a weaker violation of the inequalities (maximum spacing of 5˚), it is around 95%, and for a stronger violation of the inequalities (maximum spacing of 170˚), it is around 50%.
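A sketch of how such efficiency rates can be extracted from the simulation, with the same hypothetical helpers (the values quoted in the comment are those reported in the text):

```python
def efficiency_rate(spac_max, trials=20000, orientation=0.0):
    """Fraction of the source objects passing through the detector that actually give a result."""
    detected = sum(detect(make_source_object(spac_max), orientation) is not None
                   for _ in range(trials))
    return detected / trials

# The text reports roughly 0.85 for SpacMax = 27.5 deg, 0.95 for 5 deg and 0.50 for 170 deg.
rates = {s: efficiency_rate(s) for s in (5.0, 27.5, 170.0)}
```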

4. Discussion

4.1. State Vector

The extended source objects that pass through the detector correspond to the kets of QM (for example |↑⟩). But as the simulations show us, the eigenvector |↑⟩ does not correspond to one object. It represents a set of possible source objects providing the same result, the same eigenvalue. It is a bit like the notion of class in mathematics, for which there are multiple indistinguishable representatives. |↑⟩ is at the same time one representative element of a set, but also the set itself, because all elements are equivalent. The advantage is that physically the ket can represent either a set of equivalent objects (a physical beam) or one object (one physical particle). The counterpart is that you never know which specific representative it is, which is perhaps at the origin of the difficulties in interpreting QM. A possible mathematical representation of this modeling is the notion of interval. The basic extended objects would be intervals ]a; b[ which would be elements of a space that is itself a larger interval ]A; B[. Each interval ]a; b[ is like the entire interval ]A; B[, once again a nonlinear characteristic. And as with our source objects, only the knowledge of the 2 ends of the interval is necessary to specify it; for the rest, any part is like the whole. It finally looks like a beam of particles that we can reduce or divide to get sub-beams until obtaining "one" particle, which is, in this point of view, more a minimal beam, a fundamental elementary quantum, than actually a punctual particle.

Let us note furthermore that, mathematically, this algebra of intervals most certainly needs to be an algebra on complex intervals in order to be complete and consistent [4], which would explain the need for the use of complex numbers.

4.2. Wave Function

The wave function could then concretely describe the effective distribution of these extended objects. But it would not be the probability of one specific object of the mathematical class but the probability of the class as a whole. And this remark is important because, inside a class, the extended objects do not all have the same probability of occurring, as we have seen previously. So the ket is an equivalence class with respect to the result of a measurement, but not with respect to the probability of occurring. This is certainly the source of QM entanglement. We know the probability of obtaining any representative of the class, but we do not know the probability of obtaining one specific representative of the class (in the QM idealization). It is certainly also the first new component of the EPR criterion of reality that allows the violation of Bell's inequality. A 2nd component will be mentioned below.

4.3. Entanglement

We have seen that the ket |↑⟩ can represent an interval (defined by our couple of directions). In our simulation, the ket |↑↑⟩ also represents an interval, but this time it means 2 duplicated intervals, i.e. twin intervals (the same source object is sent to the 2 detectors). The entanglement is then the sharing of 2 correlated objects in the following particular way. On the one hand, Alice and Bob share 2 known intervals, in our idealization 2 identical intervals (|↑↑⟩ or |↓↓⟩, and only one of them at each trial). From this point of view, we are still in a classical correlation. But on the other hand, even if Alice or Bob knew which kind of object they receive (for example |↑⟩ and not |↓⟩, or vice-versa), they would be unable to know which specific interval they effectively receive, because there is a large distribution of possible couples of directions (of possible intervals) that allows them to obtain their specific result. That is what transforms the correlation into an entanglement. Even if we control the relative position of each individual in the couple (in our study we chose the same position, |↑↑⟩ or |↓↓⟩), we cannot know the position of each individual in relation to its own detector, and as soon as the orientations of the 2 detectors are different, we have different results that cannot be precisely predicted (unlike in the classical case) and that can be expressed only in terms of probability. If the source had a precise direction (a vector and not an interval), we would know for which orientation the detector would give another value. In the case of an interval, changing the orientation of the detector does not guarantee a change in the result. The entanglement comes from the fact that we know precisely the information of the global system composed of 2 individuals (the relative position of the 2 duplicated source objects for Alice and Bob, that is |↑↑⟩ or |↓↓⟩), but we do not know the information carried by each individual independently of the other (the position of each duplicated source object with respect to its detector's orientation, which is different in Alice's experiment and in Bob's experiment).

In terms of information, one can say that the extended object makes it possible to transport richer information than that of a "punctual" hidden variable. Indeed, in a punctual idealization a detection would define only one specific value, the value of the parameter, but with an extended object we define a set of possible values of the parameter. And during entanglement, this set of information is shared by 2 experiments. Alice and Bob, even if they know that they share this same set of common information, are unable to know precisely which information it is. It is this "invisible" correlated common information that the violations of Bell's inequalities reveal.

4.4. QM Measurement

A remarkable point is that in our numerical experiment the measured object is not modified by the detector. Yet extended objects could explain the quantum impression that the measured object is modified after a measurement. As the object has an orientation defined on an interval, when it passes through a detector one can only conclude that this interval of orientation (defined by the couple of directions) is included in a larger interval defined by the sector of the detector (half of the disk). Consequently, we know that if the couple of directions (which has not been modified) passes again through another detector of the same orientation, it will this time be detected with certainty by passing through the same sectors (giving the impression of having modified the measured object); but as soon as we change the orientation of the detector (however small this change may be), we can no longer guarantee that the couple of directions (the interval) will be able to pass in the same sector (making the result again probabilistic). Indeed, the couple of directions may have a large spacing that was close to being undetected, or a very thin one that was close to changing sectors and giving another value. This behavior is obtained exclusively because our source object is defined as an extended object. Once again, in our simulation, at no time is the measured object changed. Even if this does not mean that the measurement never changes the object being measured, it still means that idealization according to QM would not necessarily imply a modification of the measured object.

The extended object loses the classic bijection between the measurement of a characteristic and the characterization of the measured object. In this context, to any object we can assign a precise measurement result; conversely, the measurement of a characteristic can only reduce the set of possible objects (with respect to this characteristic). The measurement no longer unambiguously characterizes or defines the object. Here is certainly the second new component of the EPR criterion of reality that allows the violation of Bell's inequality. As seen before, this reduction could perhaps be associated with a soft version of the wave function collapse (a partial reduction of the possibilities but not an action on the physical object).

The extended object (couple of directions) is defined by 2 parameters. Measurements on this extended object only make it possible to limit the possible interval of one of the 2 characteristics defining it (in this case the orientation, because detection gives no information on the spacing, as seen previously). We can reduce the interval as much as we want (by successive measurements with different orientations), but it will never be strictly reduced to a point. The interval can only tend to a point (as an asymptote) but cannot be a point. This recalls that the term "quantum" expresses the fact that modifications are made by steps rather than infinitesimally. From a certain point of view, it means that the exchanged objects cannot be reduced to a point. In this way, the extended object would explain the "quantum" term of this mechanics.

4.5. Superposition of States

In our experiment there is no superposition of states on the physical source object. The couple of directions (our source object), which represents either the ket |↑⟩ or the ket |↓⟩ for Alice, is identically duplicated for Bob. We therefore have either |↑↑⟩ or |↓↓⟩ on each trial of the EPR experiment, but never both at the same time. Superposition is a random statistical alternation of states. An important consequence is that, in terms of a famous thought experiment, there is therefore no dead-and-alive Schrödinger cat, but physically only a dead cat or an alive cat, never both.

4.6. Non-Locality

One of the major interests of EPR experiments is to challenge the notion of locality of physical reality. In our simulation, there is no information transmitted between the detectors. It then means that our idealization with extended source objects carries the necessary and sufficient information (from the emission) to violate Bell's inequalities. In particular, the only case that does not violate Bell's inequalities in our modeling is the limiting case of a null spacing (source object reduced to one direction), that is to say the only case of a non-extended source object. But as soon as the spacing of the couple of directions is not zero, the violation of the inequalities is systematic. The numerical simulation that we have just carried out makes it possible to give a new interpretation to non-locality. This is a kind of "local non-locality" or, equivalently, a new principle of locality (of proximity) due to the modeling of extended objects (irreducible to a point). This notion is reminiscent of the mathematical notion of neighborhood in topological spaces.

4.7. Wave-Particle Duality

The notion of extended object could give a new approach to wave-particle duality, because an extended object a priori allows wave propagation (because of its spatial extension). The particle aspect could first be seen as an averaged value of an extended object, but this classical point of view is certainly not exact. The particle aspect could be the expression of a particular detection, a particular interaction of extended objects among themselves. In our simulation we explicitly idealize the source objects as extended objects; even if the detector is seen as a large interval, we do not need to detail the idealization of the detector as an extended object. But it will be necessary to know how the source object interacts with the extended objects that compose the detector. When a photon is detected on a screen (as in Young's slits experiment), we should also consider the detector as an extended object. In this case, one can imagine that if the extended source object is entirely included inside the extended detecting object, the source object can, from this point of view, be detected and be seen as a whole object, a particle. If the extended object were not entirely included in the detecting object, it would be undetected. The particle aspect would then be the particular interaction occurring when a measured extended object is entirely included in the detecting extended object. This approach means that we need to know how extended objects operate among themselves and which relations exist between these objects. In other words, we need a mathematical theory of the non-linear [4], just like vector spaces (groups, rings, fields...) in the linear domain.

4.8. Non-Linearity

A theory which is based on objects "non-reducible to a point" is most certainly a nonlinear theory (not nonlinear in its evolution equations but especially in the mathematical modeling of its basic elements, that is to say by going beyond the notion of vectors). Consequently, as our simulation is based on extended objects, it is certainly a non-linear approach. The adequacy of our solution with the results of the EPR experiments suggests that QM would be somehow a linear approximation of a deeper nonlinear theory. This point of view would be coherent with certain characteristics of QM shared with the non-linear domain. For example, entanglement, with its correlated effects regardless of the size of the system, is a form of scale invariance, a characteristic found in the nonlinear domain. The difficulty in achieving high efficiency rates in QM experiments is perhaps indicative of a non-binarity of quantum reality ("+1", "−1", no result), again a central feature of the non-linear domain (which goes beyond the binarity of the linear idealization).

This invariance under changes of scale, which characterizes the nonlinear, could give a way of interpreting some QM experiments, like Young's slits experiment. One can indeed imagine that diffraction could appear as a way of magnifying these extended objects. These enlarged source objects could then be large enough to actually pass through the 2 slits of Young's experiment (one realistic way to explain the diffraction patterns obtained by photon-by-photon passage). It should be noted that this possibility of magnification would be a fundamental characteristic specific to extended objects (i.e. specific to the non-linear domain), because linear modeling cannot provide it (a point always remains a point after magnification).

4.9. Interpretation of Extended Object

One can seek to go further in the comprehension of these extended objects and look for the physical source of this definition on an interval. For example, it could be due to a temporal variation, an agitation, of a vector in an interval, but this image is still classical, because at each passage through the detector we would only have one vector, as in a classical experiment, since at each instant there is only one direction.

A better picture could be to imagine our objects as an agglomeration of identical sub-objects on several scales (a scale invariance on several levels). Once again, like a beam that can be "cut" and which again forms a beam (but less intense), until reaching a minimal agglomerate (below which the composition is no longer maintained, like quarks which can only exist in a group). Each sub-object would have a slightly different direction from the others, and this multiplicity would define an interval. It certainly means that the particles of this assembly are not independent of each other. Perhaps another telling image (more physical than the mathematical interval) could be a part of a liquid surface (on which neighboring atoms are not free as in a gas and on which waves can propagate).

We could also imagine a 1D element, a bit like in string theory. The maximum spacing of 30˚ could suggest that we are dealing with macroscopic elements, but it is the angular spacing of the vectors carried by, and defined on, this 1D physical element, which can be as small as desired (macroscopic or microscopic). The only condition is that it has to be an interval not reduced to a point. For example (Figure 16), if we assume that the vectors are perpendicular to this kind of "string" (locally

Figure 16. Possible interpretation of an extended object (red arc) as a “string” representing an arc of circle on which vectors (green lines) with a spacing of 30˚ are defined.

perpendicular at each position on the string) and that this extended object is an arc of a circle and not a segment (which from a theoretical point of view is finally the minimum step of the transition from linear to non-linear modeling), this maximum angular spacing could define the amplitude of the deformation (the hump) of the "string", more precisely the ratio between the amplitude and the size of the "string". One can note that in the context of our EPR experiments, what we measure is not directly the extended object but the properties defined on this extended object (green entities in Figure 16). So, in our paper (but also in QM) this distinction is not made (it is certainly another fact suggesting that QM could be a limit of a deeper physical theory). One can also note that this description is scale invariant, whatever the size of the object (macroscopic or microscopic).

Finally, we can imagine at least 2 interpretations for our couples of directions: the most immediate (but certainly too classical) would be to consider this source object as simply 2 directions. But we can also consider this couple as the ends of an interval (the definition of an interval). In our article we have privileged this last interpretation of the source object as an interval. In addition to what we have already seen previously, another interest of the interval interpretation would be to allow a certain form of superposition at the level of the physical object (as in an experiment of the Young's slits type), which is a different kind of superposition than the superposition of states for which we would have 2 states coexisting at the same time (dead and alive cat). For Young's slits experiment, the addition (the superposition) gives only one state at each point of space; it is a superposition carried out mathematically by addition on the components of a same ket and not on incompatible (linearly independent) kets.

5. Beyond Our Simulation

5.1. Complementary and Dual Experiment

The numerical simulation makes it possible to study the non-detections. We can do this in 2 different ways. The first consists in turning the detectors by an angle other than that of the experiment in progress. In this way, some non-detected objects become detected objects (giving a result), but, as we have already seen, this only passes a part of the undetected objects into the set of detected ones and vice versa (a partial transfer between the detected and undetected sets). This partial transfer certainly allows an invariance of the results under rotation (by maintaining the shape of the probability curves in terms of orientation). The second consists in redoing the complementary, dual experiment by replacing the detector by a "dual" detector, still defined by 4 sectors, but whose values "+1" and "−1" are carried by the 2 axes of separation (and not by the sectors). And the source object is replaced by a "dual" source object consisting of an hourglass (the 2 directions defining the edges of an hourglass). We talk about an hourglass because we need to define an interior and an exterior between the two directions of the source object. In the experiment studied in this article, the detector has a "solid" aspect and in the dual experiment the detector has a "wire" aspect, and vice versa for the source object. The detection rule is replaced by the complementary rule:

If an hourglass cuts an axis, the detector's result is the value of that axis; otherwise there is no result.

This dual/complementary experiment (dual of the experiment studied in this paper) will only detect the set previously qualified as undetected, but this time in its totality, and the other set (previously the detected set) will become the set of undetected objects, also in its totality (a total transfer between the detected and undetected sets). And we will get the same statistics. This characteristic is reminiscent of the transformation of a matrix into its adjoint matrix. Thus, these sets of non-detections could justify the use of a space of complex numbers, the privileged space of QM with its notions of complementarity and duality.

Remark: In short, we have 2 forms of complementarity at 2 different scales. Within the set of detected objects, the probabilities of the 2 states noted |↑⟩ and |↓⟩ complement each other (cos² + sin² = 1). But also, the 2 sets of detected and undetected objects complement each other, allowing the formation of dual/complementary experiments.

5.2. Experiment with Intervals of Fixed Spacing Instead of Maximum Spacing

The digital EPR experiment (performed in this article with a spacing defining a maximal size for the interval of directions of the source object) can be performed with fixed-spacing source objects, and Bell's inequalities will still be violated. In this case, the fixed spacing which most closely matches the results of QM (for the same directions of the detectors, a = 0˚ or a′ = 45˚ and b = 22.5˚ or b′ = 67.5˚) is a fixed spacing of around 13˚ (Table 10).

For this same fixed spacing value of 13˚, the curve of E(a, b) is close to that of QM (Figure 17), but it approaches it slightly differently than the one seen in

Table 10. Bell’s violation (cf. “08-Entanglement and violation of Bell’s inequalities-Fixed spacing.xlsx”).

Figure 17. Measured EPR Probabilities and E ( a , b ) for a fixed spacing of 13˚ (cf. “07-Entanglement E(a,b)-Fixed spacing.xlsx”).

Figure 7, because it looks more like a succession of segments, unlike the curve obtained in Figure 7. This less accurate approximation justifies our choice to favor the maximum spacing parameter for our simulation.

Consequently, what our simulation seems to reveal is that the best approximation of the results of QM is obtained with spacings of the two directions of our source objects which are not fixed but bounded by a maximum value. In real experiments, this would mean that the characteristics of the objects are not seen by the detector as intervals of fixed size but as intervals of different sizes.

Remark: This does not imply that the source object cannot be defined by intervals with fixed spacing. Indeed, an explanation for this maximum spacing parameter seen at the detector (rather than a fixed spacing) could simply be that the interval, which is a 1D object, could reach the detector at different angles in 3D space. Depending on this angle, the interval would appear with more or less spacing at the detector. Another explanation could be that the spacing depends on time, a kind of oscillation during its travel between the source and the detector, so that when the object arrives at the detector its spacing could vary. This explanation would be consistent with the interpretation of our source object as a vibrating "string": the oscillation of the interval would be due to the oscillation of the "string".

5.3. Experiments with Different Number of Sectors and with Source Objects Adapted to This Number of Sectors (Case of the Spin Observable)

Previously, we discussed the possibility of increasing the number of sectors in the detector without modifying the source object. By modifying the detector and the source object in parallel, to maintain a kind of invariance between the source and the detector, we can define a whole class of similar models that it would certainly be interesting to study. For example, rather than dividing the detector into 4 sectors over the 360˚ of the disc, we could divide it into only 2 sectors of 180˚ each. One of them would give the result "+1" and the other "−1". To maintain the same "topology", the source objects should no longer be hourglasses but simply half-hourglasses or "pieces of cake", keeping only the upper or lower triangular part (see Figure 18). And we can continue by increasing the number of sectors of the detector, with 6 sectors per disk and source objects with the shape of a trisector. We can thus define a progression of possible models:

A detector with N axes of separation giving 2N sectors of 180˚/N each, and source objects with the shape of N triangles (N pieces of cake) arranged regularly on the disk every 360˚/N (with N an integer).
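The text does not spell out the detection rule for these generalized models; as a purely illustrative extrapolation of the N = 2 rule of Section 2.3, one could require all the directions defining the source object to fall in sectors of the same value:

```python
def sector_value_n(angle_deg, n):
    """Value of the sector containing a direction, for a detector with 2n sectors of 180/n degrees."""
    sector_index = int((angle_deg % 360.0) // (180.0 / n))
    return +1 if sector_index % 2 == 0 else -1

def detect_n(directions, detector_orientation, n):
    """Assumed generalized rule: a result only if every direction of the source object
    falls in sectors of the same value; None otherwise (undetected)."""
    values = {sector_value_n(d - detector_orientation, n) for d in directions}
    return values.pop() if len(values) == 1 else None
```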

The case N = 2 corresponds to the case treated in this article. A first quick analysis seems to indicate that the case N = 1 could be used to model experiments with source objects of the "spin" kind (i.e. vector-like, having the character of a directed segment, with an orientation) and the case N = 2 (our case previously studied) to model experiments with source objects of the "polarization" kind (having the character of a simple segment, without orientation).

Let us look at the case N = 1, which would correspond to the measurement of the spin of a system, and focus on a 2D idealization only, for simplicity. In Figure 19 we have represented the detector with the measurement of the observables σz and σx and some intervals (source objects with the shape of a "piece of cake"). These source objects are elements of the "equivalence class" represented by the

Figure 18. Some examples of source objects with N triangles associated with detectors with 2N sectors.

Figure 19. The yellow or orange portion would represent elements of the "equivalence class" |u⟩ whose exact position within the "+1" sector cannot be known by Alice or Bob (case of detection "particle per particle").

eigenvectors of the observable σz, which are |u⟩ and |d⟩, and the eigenvectors of the observable σx, which are |l⟩ and |r⟩:

One retrieves the orthogonality of |u⟩ and |d⟩ (⟨u|d⟩ = 0) from the fact that these 2 sectors do not overlap. The same property holds for the orthogonality of |l⟩ and |r⟩ (⟨l|r⟩ = 0).

One can also note that between the $\sigma_z$ and $\sigma_x$ positions of the detector, the “+1” and “−1” sectors overlap by half. Thus, in the case of a measurement of $\sigma_x$ when the source has been prepared in the state $|u\rangle$ (yellow or orange portion), there is one chance in two that the result will be “+1” (orange portion) or “−1” (yellow portion). The fact that half of the sectors overlap partially explains the collinearity of $|u\rangle$ and $|r\rangle$ ($\langle u|r\rangle = 1/\sqrt{2}$). We say partially because some non-detected objects for $\sigma_z$ become detected objects for $\sigma_x$ and, inversely, some detected objects for $\sigma_z$ become non-detectable objects for $\sigma_x$; that is to say, not only detected objects (i.e. objects giving “+1” or “−1”) intervene in this change of orientation. The overlap concerns both the set of detected and the set of undetected objects. The same thing happens for the eigenvector $|d\rangle$; it thus helps to interpret the relations $|r\rangle = \frac{1}{\sqrt{2}}|u\rangle + \frac{1}{\sqrt{2}}|d\rangle$ and $|l\rangle = \frac{1}{\sqrt{2}}|u\rangle - \frac{1}{\sqrt{2}}|d\rangle$. One can certainly also explain the non-commutativity of the observables $\sigma_x$ and $\sigma_z$ with these overlaps. One can also wonder whether the detected and non-detected objects couldn’t justify the use of complex numbers, as the real part and the non-real (imaginary) part.
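For reference, this is consistent with the standard spin-1/2 relations (textbook algebra, not an output of the simulation): the half-overlap corresponds to

$$\langle u|r\rangle = \frac{1}{\sqrt{2}}\langle u|u\rangle + \frac{1}{\sqrt{2}}\langle u|d\rangle = \frac{1}{\sqrt{2}}, \qquad P\big({+}1 \mid |u\rangle,\ \sigma_x\big) = \left|\langle r|u\rangle\right|^{2} = \frac{1}{2}.$$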

Remark: These variants of the model with a larger number of sectors could allow modeling experiments with more than 2 output values (not only “±1”) for the detector (with suitably adapted source objects), leading to another class of modeling of EPR experiments.

5.4. Experiment with Intervals

We have compared our couples of directions (the definition of our source objects) with intervals. One can make an equivalent idealization by using intervals directly. Let’s take for example the interval $I = [0; 2]$ defining a detector for which half of this interval, $[X; X+1]$, gives the result “+1” and the remaining half gives the result “−1”. Let’s take sub-intervals $[a; b]$ randomly defined in this interval $I$ as the source objects. The observable is now not the orientation but the position. And instead of rotating the orientation of the detector (of the cross of the detector), one translates the position $X$ of the separation between the 2 intervals of our detector. One should obtain the same behavior as in our simulation, with violation of Bell’s inequalities and with non-detectable objects (when the extremities of the source interval lie in the 2 different sectors of the detector). To be similar to our simulation, one can add that if a sector (“+1” or “−1”) is in two separated parts (not connected), the two extremities can each lie in one of the separated parts (it can be understood in the following way: if we consider our detector in a “solid” way, the source object is considered in a “wire” way, i.e. only its extremities are analyzed, and conversely for the dual experiment). The main elements of such an experiment are summarized in Figure 20.

Figure 20. Main elements for an EPR experiment in which source objects would be intervals.
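As a rough illustration, a minimal sketch of this interval variant can be written as follows (a Python transcription rather than the spreadsheet itself; the uniform random choices, the fixed value of X and the function names are assumptions made for illustration):

```python
import random

def measure(a, b, X):
    """Result for a source interval [a, b] inside I = [0, 2], with a detector whose
    "+1" sector is [X, X+1] (0 <= X <= 1) and whose "-1" sector is the rest of I
    (possibly split into the two parts [0, X] and [X+1, 2]). Only the two extremities
    are analyzed ("wire"-like source object): both in the "+1" part -> +1,
    both in the "-1" part -> -1, otherwise non-detected (None)."""
    def sign(x):
        return +1 if X <= x <= X + 1 else -1
    sa, sb = sign(a), sign(b)
    return sa if sa == sb else None

def random_source():
    """Source object: a random sub-interval [a, b] of I = [0, 2]."""
    a, b = sorted(random.uniform(0.0, 2.0) for _ in range(2))
    return a, b

# Small run for one detector position X: count "+1", "-1" and non-detections.
X = 0.3
counts = {+1: 0, -1: 0, None: 0}
for _ in range(10_000):
    a, b = random_source()
    counts[measure(a, b, X)] += 1
print(counts)
```

Translating X over the interval then plays the role, in this variant, of rotating the cross of the detector in the angular version.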

6. Conclusions

In this study we have simulated EPR experiments and their violation of Bell’s inequalities in a realistic way. Here are the main facts:

・ The physical objects are idealized in a non-linear way by the notion of extended object (not reducible to a point) defined by 2 parameters (extremities of an interval of definition).

・ There is no data transmission between the source and the detector during the Bell inequality violation.

・ Our simulation tests 100% of the emitted source objects, but it shows that the absence of results for some objects after passing through a detector cannot be avoided if we want to obtain the violation of Bell’s inequalities (which would explain the experimental difficulties in obtaining high efficiency rates).

・ The quantum state of the emitted objects is well defined and known even before the measurement. QM would then deal not with the physical object but with the mathematical probabilities associated with the physical object.

・ The measured object is not necessarily modified by the measurement. Again, the modifications implied by QM would therefore not concern the measured physical object but the mathematical probabilities associated with the physical object.

・ There is a kind of wave function collapse, but limited to a half-interval reduction, and this half-interval reduction does not change the physical object: it concerns the probability distribution of the possible source objects that can represent a specific state.

・ Each trial in the simulation is one single state and not a superposition of states. But the set of trials follows a random statistical succession of single states that gives a superposition of states only statistically. So there would be no Schrödinger’s cat both dead and alive.

・ The non-linear idealization by extended objects (not reducible to a point) allows evolving the principle of locality towards a principle of non-locality in a realistic way (in accordance with QM and with the well-verified principles that are the basis of physics). This principle of locality or non-locality becomes more a principle of proximity, a little like the mathematical notion of neighborhood in topological spaces.

Our idealization goes beyond the classical framework, because it is based on the nonlinear notion of extended object. The existence of the sets of non-detections and of detections, the capability to define complementary/dual experiments and the non-linear notion of extended objects certainly justify the necessity of using complex numbers. The extended object could also be the notion that would explain the wave-particle duality, combining the internal capacity of wave propagation and the external capacity of being seen as a particle with thickness (in the sense of an interaction that would require a strict inclusion of the source interval inside the detector interval). And this simulation lets us think that:

・ The ket would represent an equivalence class of all extended objects that would give the same result (the same eigenvalue) for a measurement (for an observable).

・ The components of kets (the wave function) would define the probability distribution of having any extended object of the whole class seen as one block, one black box (indiscriminately among the possible extended objects of that state), but would not define the probability of having one specific extended object of the class. The important fact to add is that the probability of occurrence of each individual source object (within that state) is not the same, but it is unknown or unused in the QM idealization. So only the probability distribution over the eigenvectors (each whole equivalence class seen as one set) would determine the wave function.

This idealization is not a theory with local hidden variables: the variables describing the system are impossible to determine as accurately as desired (because they are not points); we can only access an interval of possible values. The variables are then not local. Theories with local hidden variables try to obtain a well-determined value on an object which would be intrinsically defined by an interval of values. It is as if we wanted to obtain the area and the position of a country. The area is a characteristic that can be reduced to one value, but the position of a country can only be defined over an interval. Each parcel included in this country can be a valid position, but a parcel partially included or entirely outside the country is either an undefined position or an exterior position. So, this study is entirely in agreement with Bell’s theorem and, from a certain point of view, confirms it even more, because until now only the violation of Bell’s inequality has been obtained physically to confirm the necessity of describing reality with non-local hidden variables (but without proposing an idealization). Here, we demonstrate inversely that we can define a specific idealization with effectively non-local hidden variables which violates Bell’s inequality. Furthermore, in our idealization, these variables could almost not be qualified as hidden, because they are the expected variables of QM. But it is true that they still contain more information than the classical idealization (an interval is richer than a vector) and can therefore be seen as hidden data.

One can also remark that QM is expected to concern microscopic objects, but our simulations with extended objects also concern macroscopic objects.

This modeling is certainly only one representation of a very large class of new models that could explain other experiments of QM. Our study suggests that we can obtain many other values of Bell’s inequalities. As shown in the curves of Figures 8-10 and Figure 12, this should be achieved by a curve of $E(a,b)$ different from that of QM. One can certainly imagine a theory (or more simply a modeling) with other forms of probability distribution (other than cos/sin). Other models can also be considered “topologically similar” to the one studied in this article (by modifying the division into sectors of the detectors and simultaneously adapting the source objects). But we can also imagine digital experiments which seem difficult to achieve experimentally. Indeed, from this simulation, we can easily obtain the results of EPR experiments with detectors with more than 2 output values, or even EPR experiments still with 2 values but with entanglements of 3, 4, … photons, i.e. with 3, 4, … detectors, to obtain no longer couples (±1, ±1) but triplets (±1, ±1, ±1), quadruplets, … (entangled states of the type $|{+}{+}{+}\rangle$ or $|{-}{-}{-}\rangle$, …).
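Whatever the variant, the quantities compared with the QM curve can be estimated from the recorded couples with the usual CHSH estimator; the sketch below is the standard post-selected formula (couples with a non-detection on either side are discarded), not the spreadsheet formulas, and the argument names are illustrative:

```python
def correlation(pairs):
    """E(a, b) estimated from a list of (alice, bob) results, each +1, -1 or None;
    pairs with a non-detection on either side are discarded (coincidence post-selection)."""
    detected = [(x, y) for x, y in pairs if x is not None and y is not None]
    return sum(x * y for x, y in detected) / len(detected)

def chsh(pairs_ab, pairs_abp, pairs_apb, pairs_apbp):
    """Standard CHSH combination S: local models satisfy |S| <= 2,
    QM reaches 2*sqrt(2) for the optimal orientations."""
    return (correlation(pairs_ab) - correlation(pairs_abp)
            + correlation(pairs_apb) + correlation(pairs_apbp))
```

Each argument would be filled with the (±1, ±1) couples obtained for one couple of orientations (a, b); for more than 2 detectors, the same counting generalizes to triplets or quadruplets of results.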

This study, through these extended objects, hopes to have proposed a new way of interpreting quantum mechanics in a more conventional manner. But many questions regarding the interpretation of other quantum phenomena remain to be explored (teleportation, encryption, quantum computing, etc.). And certainly, it would be useful to go back to basic experiments to better understand these extended source objects, such as Young’s double-slit experiment, which highlights the wave-particle duality of QM: can a physical object idealized by QM be magnified into an extended object? There are also many questions about going beyond the current theory: is a probabilistic approach really required, as in QM? Indeed, in a classical approach, probabilities are used when we do not have enough data. But with our simulation and extended objects, the problem rather seems to come from having too much data (due to the continuous intervals), data which we have no tools to manipulate rigorously, except through a probabilistic approach. Developments of non-linear mathematical tools around the notion of extended object will certainly be required.

Supplementary Data

The files mentioned in this paper are downloadable from the HAL data repository with the following links. If a download fails, copy and paste the link into a web browser:

01-ONE randomly oriented detector.xlsx: https://hal.archives-ouvertes.fr/hal-03334412v4/file/01-ONE_randomly_oriented_detector%20%281%29.xlsx

02-ONE detector oriented in ONE direction.xlsx: https://hal.archives-ouvertes.fr/hal-03334412v4/file/02-ONE_detector_oriented_in_ONE_direction%20%281%29.xlsx

03-TWO detectors oriented in TWO directions.xlsx: https://hal.archives-ouvertes.fr/hal-03334412v4/file/03-TWO_detectors_oriented_in_TWO_directions%20%281%29.xlsx

04-Entanglement E(a,b).xlsx: https://hal.archives-ouvertes.fr/hal-03334412v4/file/04-Entanglement_E%28a%2Cb%29%20%281%29.xlsx

05-Entanglement and violation of Bell’s inequalities.xlsx: https://hal.archives-ouvertes.fr/hal-03334412v4/file/05-Entanglement_and_violation_of_Bell_inequalities%20%281%29.xlsx

06-ONE detector oriented in ONE direction-Fixed spacing.xlsx: https://hal.archives-ouvertes.fr/hal-03334412v4/file/06-ONE_detector_oriented_in_ONE_direction-Fixed_spacing%20%281%29.xlsx

07-Entanglement E(a,b)-Fixed spacing.xlsx: https://hal.archives-ouvertes.fr/hal-03334412v4/file/07-Entanglement_E%28a%2Cb%29-Fixed_spacing%20%281%29.xlsx

08-Entanglement and violation of Bell’s inequalities-Fixed spacing.xlsx: https://hal.archives-ouvertes.fr/hal-03334412v4/file/08-Entanglement_and_violation_of_Bell_inequalities-Fixed_spacing%20%281%29.xlsx

Conflicts of Interest

The author declares no conflicts of interest.

References

[1] Kracklauer, A.F. (2007) Nonlocality, Bell’s Ansatz, and Probability. Optics and Spectroscopy, 103, 451-460. https://doi.org/10.1134/S0030400X07090147
[2] Aspect, A. (2015) Closing the Door on Einstein and Bohr’s Quantum Debate. Physics, 8, 123. https://doi.org/10.1103/Physics.8.123
[3] Gisin, N. and Gisin, B. (1999) A Local Hidden Variable Model of Quantum Correlation Exploiting the Detection Loophole. Physics Letters A, 260, 323-327. https://doi.org/10.1016/S0375-9601(99)00519-8
[4] Le Corre, S. (2020) Study about Non-Linear Structures. Open Access Library Journal, 7, e6726. https://doi.org/10.4236/oalib.1106726
