An Expansion of Current Loopholes in Bell Experiments

Abstract

This paper reviews and expands upon so-called “loopholes” in Bell experiments: factors that can be exploited by locally-causal hidden variable theories (LHVTs). We find that no experiments to date have adequately minimized the impact of photon loss on their results, generating a regression that shows that current detection efficiencies of 95% permit less than 0.28 dB of loss. Noise correction methods such as time-division multiplexing and the use of heralded amplifiers are proposed to counteract this. The locality loophole is shown to be unclosed as well, with random number generators (RNGs) currently failing to show true measurement independence in their selection of numbers. The use of star luminosity data from cosmic bodies at the particle horizon (approximately 47 billion light-years away) for the randomization of measurement settings is thus recommended to sufficiently close this loophole. A new mathematical model for device memory is proposed that accounts for the distance between measurement sites, and it is proven that current, source-equidistant setups allow for the settings from the (n-1)^{th} particle to be transmitted to other measurement devices. The collapse-locality loophole is considered and shown to be unclosed by current experiments for calculated collapse times in the Ghirardi-Rimini-Weber and Von Neumann-Wigner collapse models, while the Continuous Spontaneous Localization and Diósi-Penrose models have been individually closed without consideration of other loopholes. A comprehensive set of criteria for a sufficiently “loophole-free” Bell experiment is proposed, involving rarely-considered factors such as minimizing measurement time and maximizing particle speed within experimental channels.

Introduction

Since the advent of quantum mechanics, physicists have questioned the legitimacy of its implications about the natural world. The main such controversy arises from the principle of local causality: the notion that any given event can only affect objects in its relative proximity. This can be further expressed through the principle of nonsuperluminal signalling, that nothing travels faster than light; it intuitively follows that no event can affect anything at a point in spacetime that light could not have reached from the event’s own position (x,t).

That claim was challenged, however, when quantum mechanical predictions showed pairs of entangled particles affecting one another: if an observer measured the state of one qubit, they could deduce that of its partner regardless of their physical separation1. This “teleportation” generated immense controversy, leading to the formulation of Bell’s theorem, which formalized “hidden variable” models that attempt to replicate quantum predictions without violating local causality2. The resulting wave of experiments testing Bell’s theorem showed clear violations of the associated inequalities, implying that quantum mechanics is an accurate portrayal of the universe3.

This paper explores “loopholes” in Bell experiments: flaws in methodology that allow for the possibility of a locally-causal theory. It is of utmost importance that a truly loophole-free Bell experiment be conducted due to the applications of quantum mechanics in other fields. Transportation of qubits through long-distance channels is imperative for the progression of quantum computing4; the ability to correct channels for noise is necessary for cryptography, with consequences for the defense and communication industries5; and further exploration of the collapse-locality loophole can advance our understanding of quantum measurement and gravity6,7. Loophole-free Bell inequality violation manifests itself as a unique – and to many, unnecessary – challenge to experimentalists, but pursuing it will catalyze worthwhile advances in industries that may fundamentally change the future.

Subsequent sections will attempt to answer the question: does our current understanding of loopholes account for all reasonable cases, and what experimental processes could be added to future tests to decisively reject local causality? First, a general background of quantum mechanics is provided, with an introduction into Bell’s theorem and the resulting Clauser-Horne-Shimony-Holt (CHSH) inequality8. Then, loopholes are explored both mathematically and conceptually, leading to an expansion of our current understanding of whether or not our experiments are loophole-free. Afterwards, technological progress in Bell experiments is discussed and propositions are made to improve the quality of future experiments such that they close still-outstanding loopholes.

Background

This section will outline the most crucial concepts involved in Bell experiments. In subsection 2.1, quantum states and measurements are defined using the idea of a Hilbert space and the Bloch Sphere visualization. Then, example measurements of pure and mixed states are given to acquaint the reader with the relevant matrix operations. Subsection 2.2 introduces the idea of entanglement and establishes the resulting contradiction between quantum mechanics and local causality. In subsection 2.3, Bell’s LHVT model is obtained through a simple statistical discussion, and in subsection 2.4, the CHSH inequality and its maximum, Tsirelson’s Bound, for determining violations of LHVT in Bell experiments are defined.

Quantum States

Quantum mechanics concerns itself with the properties and interactions of objects at a very small scale. The systems described by it – whether electrons, groupings of molecules, or otherwise – are defined by a mathematical entity known as a quantum state. This state takes the form of a complex vector |\psi\rangle in a Hilbert space. Quantum systems can have multiple possible states: for example, a photon can be polarized vertically or horizontally. In cases such as these, the system becomes a superposition of states, where the total quantum state becomes a linear combination of the possibilities:


    \[|\psi\rangle=a|\chi\rangle+b|\phi\rangle,\]



where a and b are arbitrary complex numbers such that |a|^2+|b|^2=1, and |\chi\rangle and |\phi\rangle are orthonormal. The equation above is specified to the scenario of photon polarization, which has just two possible states (|\chi\rangle and |\phi\rangle), but there is no theoretical bound on the number of superimposed states, provided each is accompanied by a coefficient normalized to 1 in relation to the others. The most common elementary quantum system, the qubit, is a computational analog of this superposition, where any qubit state is:


    \[|\psi\rangle=a|0\rangle+b|1\rangle.\]

When measured in the orthonormal basis \{|0\rangle,|1\rangle\}, the probability of obtaining state |0\rangle is |a|^2, or |b|^2 for |1\rangle. These two states are two-dimensional vectors that can be expressed explicitly as
|0\rangle=\begin{pmatrix}1\\0\end{pmatrix} and |1\rangle=\begin{pmatrix}0\\1\end{pmatrix}.

The Bloch Sphere. Image by Smite-Meister, used under a CC BY-SA 3.0 license9.


Two-level quantum systems, defined by the qubit, can be visualized on the Bloch Sphere (see the figure above). This geometric representation corresponds to a two-dimensional Hilbert space, with each state vector starting at the origin and being represented by a density matrix \rho. For the poles, qubit states |0\rangle and |1\rangle, the corresponding density matrices are |0\rangle\rightarrow{|0\rangle\langle0|}=\begin{pmatrix}1\\0\end{pmatrix}\begin{pmatrix}1 & 0\end{pmatrix}=\begin{pmatrix}1 & 0\\0 & 0\end{pmatrix} and |1\rangle\rightarrow\begin{pmatrix}0 & 0\\0 & 1\end{pmatrix}. For any pure state, the outer product with itself:

(1)   \begin{equation*} \rho=|\psi\rangle\langle\psi|\end{equation*}


gives its density matrix. The Bloch Sphere also contains mixed states, requiring a generalization for all possible state vectors. The corresponding density matrix for any quantum state is:

(2)   \begin{equation*} \rho=\frac{1}{2}(I+\vec{r}\cdot\vec{\sigma}),\end{equation*}


where \rho\geq0 and \Tr(\rho)=1: the matrix is positive semidefinite (\langle\psi|\rho|\psi\rangle\geq0 for every vector |\psi\rangle) and the sum of its diagonal elements, a_{11}+a_{22}+\ldots+a_{nn}, is one. Here I is the identity matrix \begin{pmatrix}1 & 0\\0 & 1\end{pmatrix}; \vec{r}=(r_x,r_y,r_z) is the Bloch vector giving the state’s coordinates along each axis of the sphere; and \vec{\sigma} collects the three Pauli matrices \sigma_x=\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}, \sigma_y=\begin{pmatrix}0 & -i\\i & 0\end{pmatrix}, and \sigma_z=\begin{pmatrix}1 & 0\\0 & -1\end{pmatrix}. Density matrices whose Bloch vectors reach the surface of the sphere (and thus satisfy r_x^2+r_y^2+r_z^2=1) represent pure states, while those within the sphere are mixed.
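As an illustrative sketch in Python with NumPy (the helper name bloch_to_rho is ours, not from any cited source), Equation 2 can be evaluated directly, with purity checked via the standard condition that \Tr(\rho^2)=1 for pure states:

    import numpy as np

    I = np.eye(2)
    sigma = [np.array([[0, 1], [1, 0]]),      # sigma_x
             np.array([[0, -1j], [1j, 0]]),   # sigma_y
             np.array([[1, 0], [0, -1]])]     # sigma_z

    def bloch_to_rho(r):
        """Density matrix rho = (I + r . sigma) / 2 for a Bloch vector r (Equation 2)."""
        return 0.5 * (I + sum(ri * si for ri, si in zip(r, sigma)))

    rho_pure = bloch_to_rho([0, 0, 1])    # north pole: |0><0|
    rho_mixed = bloch_to_rho([0, 0, 0])   # origin: completely mixed state

    print(np.trace(rho_pure @ rho_pure).real)    # 1.0 -> pure (|r| = 1)
    print(np.trace(rho_mixed @ rho_mixed).real)  # 0.5 -> mixed (|r| < 1)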

In order to determine the state of a quantum system, one must subject it to a process known as measurement. This is the act of observing the system in some way to discern information about it; it is neither a conclusive action nor a passive one, as the Uncertainty Principle makes it impossible to know all information about a quantum system, and measurement has been shown to fundamentally change systems from their pre-measurement states. Specifically, physical measurement causes a system in superposition to collapse into a singular, definitive state, while the mathematical process yields the probability that a certain outcome will be obtained from a state measured in a particular basis. For the basis \{|b_0\rangle,|b_1\rangle\}, the probability of obtaining outcome j is given by:

(3)   \begin{equation*} P_j=\langle\psi|b_j\rangle\langle{b_j}|\psi\rangle=|\langle\psi|b_j\rangle|^2.\end{equation*}


This measurement can be applied to both mixed and pure states. Physically, a mixed state \phi is a statistical ensemble of different states with some classical uncertainty, e.g. one in the basis \{|0\rangle,|1\rangle\} with a predetermined probability of \frac{1}{2} for each state. (This particular example is a completely mixed state, meaning we know nothing about the system; on the Bloch Sphere, it sits at the origin.) A pure state |\psi\rangle differs from this in that it cannot be written as a convex combination of other pure states; it is an ensemble of identical systems. Examples of density matrices for these systems are:


(4)   \begin{equation*} \rho_{\phi}=\frac{1}{2}(|0\rangle\langle0|+|1\rangle\langle1|)=\frac{1}{2}\begin{pmatrix}1 & 0\0 & 1\end{pmatrix},\end{equation*}


(5)   \begin{equation*}\rho_{\psi}=\frac{1}{2}(|0\rangle\langle0|+|0\rangle\langle1|+|1\rangle\langle0|+|1\rangle\langle1|)=\frac{1}{2}\begin{pmatrix}1 & 1\1 & 1\end{pmatrix}.\end{equation*}


These states can be shown to be fundamentally different by measuring them with the projector onto the “plus” state, |+\rangle\langle+|=\frac{1}{2}\begin{pmatrix}1 & 1\\1 & 1\end{pmatrix}, which yields:

(6)   \begin{equation*} \Tr(\rho_\phi|+\rangle\langle+|)=\Tr\left(\frac{1}{2}\begin{pmatrix}1 & 0\\0 & 1\end{pmatrix}\cdot\frac{1}{2}\begin{pmatrix}1 & 1\\1 & 1\end{pmatrix}\right)=\frac{1}{4}\Tr\begin{pmatrix}1 & 1\\1 & 1\end{pmatrix}=\frac{1}{2},\end{equation*}


(7)   \begin{equation*} \Tr(\rho_\psi|+\rangle\langle+|)=\Tr\left(\frac{1}{2}\begin{pmatrix}1 & 1\\1 & 1\end{pmatrix}\cdot\frac{1}{2}\begin{pmatrix}1 & 1\\1 & 1\end{pmatrix}\right)=\frac{1}{4}\Tr\begin{pmatrix}2 & 2\\2 & 2\end{pmatrix}=1.\end{equation*}

These results can be intuited without such an explicit calculation. State \rho_{\phi} is completely mixed (i.e. we know nothing about it), so its density matrix takes the form of half of the identity, \frac{1}{2}I; this means it causes no real change to the |+\rangle\langle+| matrix, leading to the solution found in Equation 6. The pure state |\psi\rangle, on the other hand, is equivalent to |+\rangle, meaning the measurement in Equation 7 is simply the probability of finding the plus state when measuring in the plus state: one.
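The trace calculations of Equations 6 and 7 can also be verified numerically. A minimal Python sketch (variable names ours):

    import numpy as np

    ket0 = np.array([[1.0], [0.0]])
    ket1 = np.array([[0.0], [1.0]])
    plus = (ket0 + ket1) / np.sqrt(2)

    rho_phi = 0.5 * (ket0 @ ket0.T + ket1 @ ket1.T)  # completely mixed state, Equation 4
    rho_psi = plus @ plus.T                          # pure state |+><+|, Equation 5
    proj_plus = plus @ plus.T                        # measurement projector |+><+|

    print(np.trace(rho_phi @ proj_plus))  # 0.5, matching Equation 6
    print(np.trace(rho_psi @ proj_plus))  # 1.0, matching Equation 7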

Entanglement

When multiple independent quantum systems come together, they form a composite system described by a product quantum state:

(8)   \begin{equation*} |\psi\rangle=|\psi\rangle_1\otimes|\psi\rangle_2\otimes\cdots\otimes|\psi\rangle_n.\end{equation*}


This allows for the description of a composite system in terms of its individual parts. The tensor product operation combines concurrent systems’ density matrices to obtain a single product state for the system at large:


(9)   \begin{equation*} \begin{bmatrix}a_1 & a_2 \\ a_3 & a_4\end{bmatrix}\otimes\begin{bmatrix}b_1 & b_2 \\ b_3 & b_4\end{bmatrix}=\begin{bmatrix}a_1\begin{bmatrix}b_1 & b_2 \\ b_3 & b_4\end{bmatrix} & a_2\begin{bmatrix}b_1 & b_2 \\ b_3 & b_4\end{bmatrix} \\ a_3\begin{bmatrix}b_1 & b_2 \\ b_3 & b_4\end{bmatrix} & a_4\begin{bmatrix}b_1 & b_2 \\ b_3 & b_4\end{bmatrix}\end{bmatrix}.\end{equation*}


Despite this supposed mixing of states, the probability of each outcome occurring remains the same; they remain independent. In the composite state |\psi\rangle=|0\rangle\otimes|0\rangle, a measurement in the bases |\pm\rangle=\frac{1}{\sqrt{2}}(|0\rangle\pm|1\rangle) for both systems leads to the probability:


(10)   \begin{equation*} |(\langle0|\otimes\langle0|)(|\pm\rangle\otimes|\pm\rangle)|^2=\frac{1}{4}.\end{equation*}

There are, however, systems that cannot be defined in terms of independent state vectors. This phenomenon is known as quantum entanglement, and occurs when the states of different systems correspond to each other in some way such that a change in the properties of one results in a change in those of the other(s). The entangled state |\psi\rangle=\frac{1}{\sqrt{2}}(|0\rangle\otimes|0\rangle+|1\rangle\otimes|1\rangle), for example, cannot be separated into a tensor of independent systems. A measurement shows the distinction between this entangled state and the composite one shown in Equation 10:


(11)   \begin{equation*} |\langle\psi|(|0\rangle\otimes|0\rangle)|^2=\frac{1}{2}=|\langle\psi|(|1\rangle\otimes|1\rangle)|^2,\end{equation*}


(12)   \begin{equation*} |\langle\psi|(|0\rangle\otimes|1\rangle)|^2=0=|\langle\psi|(|1\rangle\otimes|0\rangle)|^2.\end{equation*}
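A short numerical check in Python (names ours) reproduces the probabilities of Equations 10 through 12, contrasting the independent product state with the entangled one:

    import numpy as np

    ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    plus = (ket0 + ket1) / np.sqrt(2)

    # Product state |0> tensor |0>: measuring both systems in |+> gives 1/4 (Equation 10).
    product = np.kron(ket0, ket0)
    print(abs(np.kron(plus, plus) @ product) ** 2)  # 0.25

    # Entangled state (|00> + |11>)/sqrt(2): outcomes are perfectly correlated
    # (Equations 11 and 12).
    psi = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
    for a, b in [(ket0, ket0), (ket1, ket1), (ket0, ket1), (ket1, ket0)]:
        print(abs(psi @ np.kron(a, b)) ** 2)  # 0.5, 0.5, then 0.0, 0.0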


In this case, the states have been prepared in such a way that both entities must be in the same state. If one is measured, then the state of the other becomes known. This presents a key problem in quantum mechanics: measurement at a discrete location, which actively results in an outcome (rather than simply observing the occurrence of one), causes a change in both entangled particles regardless of their respective locations. Intuitively, this seems to constitute a violation of the principle of nonsuperluminal signalling, that information cannot travel faster than the speed of light. In reality, however, this principle is consistent with quantum theory, as no message is being sent faster than light. When a measurement is performed at region A, the state of the particle at B does not become known to either party; a separate observer at B still needs to perform their own measurement or wait for a classical message to arrive from A in order to obtain information on the state. Instead, this undermines local causality, the principle that events can only affect objects within their light cones. This “spooky action at a distance” is one of the main arguments levied against quantum mechanics, with the 1935 Einstein-Podolsky-Rosen (EPR) paper that introduced the concept of entanglement using it to claim that quantum mechanics is an incomplete theory10.


Violation of local causality can be further illustrated by an EPR thought experiment involving an entangled photon pair in the maximally-entangled Bell state |\Psi^-\rangle=\frac{1}{\sqrt{2}}(|0\rangle|1\rangle-|1\rangle|0\rangle). The particles are sent in opposite directions to spacelike separated regions A and B equidistant from the source; when a photon arrives at A, measurement a is performed on it to obtain its polarization, outcome x. The second photon must have an orthogonal polarization to its counterpart, so the measurement at A instantaneously collapses its state at B regardless of the distance between them. (The same phenomenon occurs if measurement b is done on a photon to obtain outcome y.) This irreconcilability between quantum mechanics and local causality gave way to the idea of a locally-causal hidden variable theory (LHVT), in which some thus far undiscovered factors facilitate local interactions that we perceive as occurring over a spacelike separated distance.


Bell’s Theorem

In 1964, Bell proposed a universal mathematical formalism for LHVTs and proved that they are fundamentally incompatible with the predictions of quantum mechanics2. He did this using three main assumptions: there are hidden variables \lambda that determine the probabilities of different measurement outcomes; the outcomes of measurements in spacelike separated regions are independent of one another (i.e. local causality); and the choice of measurement in each region occurred independently within it. The formula for conditional probability given by Bayes’ theorem:


(13)   \begin{equation*} P(A|B)=\frac{P(B|A)P(A)}{P(B)}\end{equation*}


can be expanded for multiple variables and used to calculate the probability of obtaining outcomes a and b given measurements x and y with the presence of \lambda:


(14)   \begin{equation*} P(xy|ab)=\int{d\lambda{P(xy|ab\lambda)}P(\lambda|ab)}.\end{equation*}


Outcomes in a LHVT depend only on local measurements, meaning that measurement settings in spacelike separated regions are statistically independent of them. Applied to the EPR thought experiment, x and y occur independently of b and a respectively. Furthermore, the assumption that measurement settings are chosen as free variables independent of \lambda removes any statistical correlation implied in Equation 14. In conjunction, these features allow us to separate:


(15)   \begin{equation*} P(xy|ab\lambda)=P(x|a\lambda)P(y|b\lambda),\end{equation*}


(16)   \begin{equation*} P(\lambda|ab)=P(\lambda),\end{equation*}


and substitute into Equation 14 to obtain a final equation for a LHVT:


(17)   \begin{equation*} P(xy|ab)=\int{d\lambda{P(x|a\lambda)}P(y|b\lambda)P(\lambda)}.\end{equation*}


CHSH Inequality

This model can be shown to be inconsistent with the expectation values derived from quantum mechanics through an experiment proposed in 1969 to test the validity of Bell’s theorem8. As in the EPR thought experiment above, measurements on entangled particles are done in spacelike separated regions, but there are two possible measurement settings (bases corresponding to the arbitrary labels 0 and 1) and two outcomes (qubit states) at each region. A simplification of Equation 17 arrived at via that experimental setup results in the Clauser-Horne-Shimony-Holt (CHSH) inequality:


(18)   \begin{equation*} S=|E_{00}-E_{01}+E_{10}+E_{11}|\leq2,\end{equation*}


where E_{ab} is the expectation value for that particular combination of measurement settings, E_{ab}=\langle\psi|b_ab_b|\psi\rangle. For LHVTs to hold, the inequality must be satisfied experimentally, and any violation is evidence (though not conclusive) that quantum mechanics offers an accurate description of reality. If analogized to the spin of the particles in an electron-positron pair, where they both occupy the singlet state |\Psi^-\rangle with |0\rangle\rightarrow|\uparrow\rangle and |1\rangle\rightarrow|\downarrow\rangle, and the possible basis states are |b_0\rangle=\cos(\frac{\pi}{4})|\uparrow\rangle+\sin(\frac{\pi}{4})|\downarrow\rangle and |b_1\rangle=-\sin(\frac{\pi}{4})|\uparrow\rangle+\cos(\frac{\pi}{4})|\downarrow\rangle, the probability associated with basis state |b_0\rangle and outcome |\uparrow\rangle is found using the Born Rule:


(19)   \begin{equation*} P(b_0|\uparrow)=|\langle\uparrow|b_0\rangle|^2=\cos^2(45\degree)=\frac{1}{2}.\end{equation*}


The chance of having either outcome |\uparrow\rangle or |\downarrow\rangle is always 50%, as the singlet state is maximally entangled and the spins of the electron and positron must be opposite. From this, S for this scenario can be calculated as:


(20)   \begin{equation*} S=\frac{1}{2}(|\langle\uparrow|b_0\rangle|^2-|\langle\uparrow|b_1\rangle|^2+|\langle\downarrow|b_0\rangle|^2+|\langle\downarrow|b_1\rangle|^2)=1.\end{equation*}

The maximum violation of the CHSH Inequality is given by a specific value known as Tsirelson’s Bound, S_{max}=2\sqrt{2}11. This can only be obtained through the rotation of the bases to optimal angles \theta, a parameter that causes significant fluctuations in the expectation values obtained. Variation in \theta implies a generalization of the chosen bases that allows for different measurement outcomes. For |b_0\rangle and |b_1\rangle used above, these become:


(21)   \begin{equation*} |b_0\rangle=\cos(\frac{\alpha}{2})|\uparrow\rangle+\sin(\frac{\beta}{2})|\downarrow\rangle,\end{equation*}


(22)   \begin{equation*} |b_1\rangle=-\sin(\frac{\alpha'}{2})|\uparrow\rangle+\cos(\frac{\beta'}{2})|\downarrow\rangle,\end{equation*}


where the angles \alpha=0\degree, \alpha'=90\degree, \beta=45\degree, and \beta'=135\degree achieve Tsirelson’s Bound, and E_{ab}, E_{ab'}, E_{a'b}, and E_{a'b'} (each of the form \langle\psi|ab|\psi\rangle) are the expectation values for the different combinations of measurement angles corresponding to those in Equation 18. Theoretically, this dismantles the validity of any LHVT, but in reality there are various problems preventing current experimental evidence from decisively rejecting local causality.
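As a numerical illustration (a Python sketch; the spin-observable parameterization is a standard textbook choice, not taken from any cited experiment), the angles above reproduce Tsirelson’s Bound for the singlet state:

    import numpy as np

    # Singlet state |Psi-> = (|01> - |10>)/sqrt(2)
    ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    psi = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)

    def spin(theta):
        """Spin observable along angle theta in the x-z plane (outcomes +1/-1)."""
        return np.array([[np.cos(theta), np.sin(theta)],
                         [np.sin(theta), -np.cos(theta)]])

    def E(a, b):
        """Expectation value <psi| A(a) tensor B(b) |psi>."""
        return psi @ np.kron(spin(a), spin(b)) @ psi

    a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
    S = abs(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))
    print(S, 2 * np.sqrt(2))  # both ~2.828: Tsirelson's Bound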

Loopholes

In order to definitively prove quantum theory, Bell experiments must eliminate any confounding variables that could reasonably be exploited by LHVTs. These “loopholes” are defined as imperfections in experimental processes that compromise the strictly quantum nature of a Bell test attempting to violate the inequality (i.e. that render local causality feasible).

Detection

A significant practical limitation in any Bell experiment is the ability of the apparatus being used to measure a fair sample of the involved particles, first discussed by Philip Pearle12. Typically, an avalanche of photons is sent to a photodetector, which produces a click upon registering the presence of a photon. The states of detected photons (either outcomes 0 or 1) are then used to calculate S in a CHSH scenario. Practically, no photodetectors have unit efficiency, and particles can be lost along the channels through which they are being transported, meaning there is an additional “no-click” outcome that occurs when a measurement on a photon is inconclusive. In order to simply discard no-click outputs and only consider conclusive measurements, the sample of particles measured must be shown to be statistically representative of the avalanche in totality.

To avoid this problem, the no-click outcome is assigned to one of the conclusive ones (0 or 1). Otherwise, it is possible for post-selection (the act of discarding no-click outcomes) to allow for a fake demonstration of Bell inequality violation, failing to reject LHVTs3. Using this method and a judicious measurement, the CHSH-value S when both detectors click (which, if the particles are entangled, should give outcomes 0 and 1) is 2\sqrt{2}, representing maximal entanglement. Given that the detection efficiency of a photodetector, or the probability that it will detect a given particle, is \eta, the probability of this occurring is \eta^2. Similarly, the S-value when neither of the detectors gives a conclusive outcome (meaning both are counted as either 0 or 1) is 2, with probability (1-\eta)^2, calculated by substituting the corresponding expectation values into Equation 18. If only one detector clicks, there is no correlation between the two states (S=0), as the no-click outcome is assigned 0 or 1 independent of its conclusive counterpart. To remove the possibility of a LHVT, the expected S-value must be greater than 2:

(23)   \begin{equation*} \eta^22\sqrt{2}+(1-\eta)^22>2.\end{equation*}

An efficiency threshold \eta^* can thus be calculated in order to legitimize any calculated Bell inequality violation13:

(24)   \begin{equation*}  \eta>\eta^*=\frac{2}{1+\sqrt{2}}\approx82.84\,\%.\end{equation*}
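A quick numerical scan (a Python sketch; names ours) recovers this threshold from Equation 23:

    import numpy as np

    def S_expected(eta):
        """Expected CHSH value when no-click outcomes are assigned a conclusive result (Equation 23)."""
        return eta**2 * 2 * np.sqrt(2) + (1 - eta)**2 * 2

    # Scan for the smallest eta with S > 2; the analytic value is 2/(1 + sqrt(2)).
    etas = np.linspace(0.5, 1.0, 100001)
    eta_star = etas[S_expected(etas) > 2][0]
    print(eta_star, 2 / (1 + np.sqrt(2)))  # both ~0.8284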

This value assumes that \eta was accounted for after optimizing the angles of measurement; correcting the equations for the expectation values E_{ab} beforehand allows for a lower bound of \eta^*=\frac{2}{3}, known as Eberhard’s Bound14. Fluctuations in the values shown by detectors as a result of photon loss or some form of extraneous interference, on the other hand, increase the efficiency required to close the detection loophole. This noise/background (\zeta) forces most detection loophole-free experiments to use detectors with far greater efficiencies than the theoretical minimum, as shown in the figure below and in the tabulated experimental data used for the regression of Equation 46.


Maximum background allowed for detection loophole-free inequality violation as a function of photodetector efficiency.

Eberhard’s Bound only applies to the standard two-measurement model described above. For any number of measurement settings possible for the apparatuses at regions A and B, the minimum efficiency is:

(25)   \begin{equation*} \eta^*\geq\frac{M_A+M_B-2}{M_AM_B-1}, \end{equation*}

which reduces to Eberhard’s Bound \eta^*\geq\frac{2}{3} for the CHSH model M_A=M_B=215. (M\geq2 in all cases, as a single possible measurement setting cannot test for Bell inequality violation.) From this result, we find:


(26)   \begin{equation*} \lim_{M\to\infty}\eta^*=0. \end{equation*}

Loophole-free measurement scenarios with entangled systems of large dimension d (requiring M\gg2) and exponentially small values of \eta have been proposed, but such experiments are far too complex and unnecessary given the ability of photodetection technology to create loophole-free conditions with M=2. Likewise, increasing the number of regions N to which the particles are sent also decreases the necessary detection efficiency. The relation between N and \eta^*, accounting for M, has been proven for N\leq500 to be15:


(27)   \begin{equation*} \eta^*\geq\frac{N}{(N-1)M+1}.\end{equation*}
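Both bounds are straightforward to tabulate. A Python sketch (function names ours) evaluating Equations 25 and 27 for a few illustrative values:

    def eta_min_settings(M_A, M_B):
        """Minimum detection efficiency for M_A and M_B measurement settings (Equation 25)."""
        return (M_A + M_B - 2) / (M_A * M_B - 1)

    def eta_min_parties(N, M):
        """Minimum detection efficiency for N parties with M settings each (Equation 27)."""
        return N / ((N - 1) * M + 1)

    print(eta_min_settings(2, 2))    # 2/3: Eberhard's Bound for the CHSH model
    print(eta_min_settings(10, 10))  # ~0.18: the bound drops as M grows
    print(eta_min_parties(2, 2))     # 2/3 again for the two-party CHSH case
    print(eta_min_parties(10, 2))    # ~0.53 for ten regions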

These scenarios all depend upon the efficiency of all detectors being equivalent. This is not always the case, as certain experiments use hybrid particles and/or detection methods. For simple, two-dimensional situations where \eta_A\neq\eta_B, we can derive a relation between imbalanced detectors using the Clauser-Horne (CH) inequality16:

(28)   \begin{equation*} P(ab)+P(a'b)+P(ab')-P(a'b')-P(a)-P(b)\leq0,\end{equation*}

where P(ab)=\eta_A\eta_B is the probability that both detectors fire given remote measurement settings a and b, and P(a)=\eta_A is the probability that a conclusive outcome is found at A without a corresponding measurement at B being required. Substituting \eta_A and \eta_B for these probabilities, the resulting inequality:

(29)   \begin{equation*} 3\eta_A\eta_B-\eta_A-\eta_B>0\end{equation*}

must be fulfilled to demonstrate Bell inequality violation17. Isolating \eta_B, this simplifies to a final expression:

(30)   \begin{equation*} \eta_B>\frac{\eta_A}{3\eta_A-1}.\end{equation*}

This inequality reduces to Eberhard’s Bound in the symmetric case \eta_A=\eta_B>\frac{2}{3}. As one detector approaches unit efficiency, the other must still satisfy \eta>\frac{1}{2}, creating an absolute lower bound for a single detector’s efficiency in a two-measurement scenario. See the figure below for the variation in the minimum \eta_B as a function of \eta_A.
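The following Python sketch (function name ours) tabulates this lower bound for a few values of \eta_A:

    def eta_B_min(eta_A):
        """Minimum efficiency of detector B given detector A's efficiency (Equation 30)."""
        return eta_A / (3 * eta_A - 1)

    for eta_A in [0.70, 0.80, 0.90, 1.00]:
        print(eta_A, eta_B_min(eta_A))
    # eta_A = eta_B = 2/3 recovers Eberhard's Bound; eta_A = 1 yields the absolute floor of 0.5.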




Lower bound of \eta_B for 0.5<\eta_A\leq1.

Time-Coincidence

Another limitation of detection mechanisms in Bell experiments is the time differential between measurements of entangled pairs at regions A and B. There is usually a predetermined coincidence window, or maximum period of time \Delta{T} between two measurements, that must be adhered to in order to classify them as being “simultaneous.” This brief window creates a possibility that the local measurement settings in both regions, which affect the time of measurement, determine whether or not particles are correctly treated as pairs. The time-coincidence loophole, in which a possibly unfair sample of data is collected due to the discarding of what should have been counted as a coincidence, must thus be closed to further undermine the possibility of a LHVT.

In a LHVT, the times of detection for each region, T and T', are functions of \lambda. Any coincidence is then explained by certain values thereof that cause |T(\lambda)-T'(\lambda)|<\Delta{T}. In order to close this loophole, Bell’s inequality must be violated while the probability of coincidence, \gamma, is sufficiently large18:

(31)   \begin{equation*}  \gamma>3-\frac{3}{\sqrt{2}}\approx87.87\,\%.\end{equation*}


This requires a substantial portion of entangled pairs to be detected, obligating Bell experiments to use apparatuses with little margin of error in their measurement times in order to maintain an accurate record of the particles’ correlation.

Locality

A quintessential component of any Bell experiment is ensuring that no classical message can be sent between the involved regions before the detection of the chosen particles. This also implies that the measurement settings a and b were generated independently within their respective regions in spacetime, i.e. that they are “free variables” outside of the causal past of one another. If these conditions, which correspond to Equations 15 and 16, are not met, it is possible for some \lambda to be present that facilitates any experimentally-found correlations. This is referred to as the locality loophole in Bell experiments.

The former condition is simple to achieve: to prevent a nonsuperluminal signal from being able to relay information between different regions, they must be spacelike separated. While this proved experimentally difficult in the past, contemporary Bell experiments have consistently achieved it. The measurement-independence criterion requires some form of randomization for the measurement settings to ensure that they are uncorrelated, as well as a mechanism to prevent each setting from being communicated to the other regions. To prevent this transmission of information, Bell experiments have made use of switching devices that alter the measurement settings at rapid rates as the particles are incoming.

Memory

Randomization of measurement settings leads to the possibility that the measurement devices used have some imprint or “memory” of each prior setting that informs their measurement of the next particle. While the settings may be chosen independently, the actual measurements being performed may be altered by this memory, undermining the noncausal conclusions of quantum mechanics. Memory M of prior measurements causes an expansion of Bell’s LHVT, Equation 17, into a new model for the n^{th} successive particle19:

(32)   \begin{equation*} P(x_ny_n|a_nb_n)=\int{d\lambda{P}(x_n|a_nM\lambda)P(y_n|b_nM\lambda)P(\lambda)},\end{equation*}

where the probabilities are altered by memory such that:

(33)   \begin{equation*} P(x_n|a_nM\lambda)=P(x_n|(a_1a_2\ldots a_n)(x_1x_2\ldots x_{n-1})\lambda).\end{equation*}

Depending on the separation between regions, it is also feasible that the memory of the outcomes and settings in one region can be transferred to another by some classical signal, creating a two-sided memory model:

(34)   \begin{equation*} P(x_n|a_nM\lambda)=P(x_n|(a_1\ldots a_n)(x_1\ldots x_{n-1})(b_1\ldots b_{n-[\frac{L}{ct}]})(y_1\ldots y_{n-[\frac{L}{ct}]})\lambda),\end{equation*}

where [\frac{L}{ct}] denotes the distance L between the regions divided by the product of the speed of light c and the time t it takes for a particle to reach a region from the source, rounded down to the nearest integer (minimum of one); this is the delay, in particles, for which the memory from region B can be transmitted at light speed to region A. (If 2>\frac{L}{ct}\geq1, then the setting b used to measure particle n-1 at region B is the most recent one able to affect the measurement of particle n at region A.) Note that the outcome x_n is always independent of setting b_n in the model, as the measurements continue to occur simultaneously in spacelike separated regions.

In most experiments, Equation 34 would have [\frac{L}{ct}]=1, depending on the distance of the regions to the source and the speed of the particles being entangled. For an equidistant setup, the memory threshold would reach a maximum of 2 if the entangled particles are photons in a vacuum (i.e. travel at the speed of light), as t=\frac{L}{2c}, and \frac{L}{c(\frac{L}{2c})}=2. In reality, Bell experiments use mediums such as fibre-optic cables to transport particles, making the speed of the photons v dependent on a particular channel’s index of refraction, n:

(35)   \begin{equation*} n=\frac{c}{v},\end{equation*}

where n>1 for all scenarios, as the speed of light in a vacuum can neither be matched nor surpassed by photons moving through matter. Asymmetric situations, in which each region is placed at a different distance from the source, may lead to different outcomes.
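The delay [\frac{L}{ct}] is easy to evaluate for an equidistant setup. In the Python sketch below, the refractive index of 1.45 is our assumed typical value for silica fibre, not a figure from any cited experiment:

    import math

    def memory_delay(n_refractive):
        """Number of past trials whose settings at B could reach A before trial n is measured.

        For an equidistant setup, t = (L/2) / v with v = c / n, so
        L / (c * t) = 2v / c = 2 / n, floored (minimum of one).
        """
        return max(1, math.floor(2 / n_refractive))

    print(memory_delay(1.0))   # 2: photons in vacuum, as derived above
    print(memory_delay(1.45))  # 1: assumed typical silica fibre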

The memory loophole could be avoided in its entirety by using a different, spacelike separated measurement apparatus for each pair of particles sent from the source. In practice, Bell experiments require many trials (hence an “avalanche” of photons) in order to yield conclusive evidence of an inequality violation, rendering an experiment attempting to fully close it impractical.

Doing so may also be unnecessary. Using an alternative CHSH inequality in the form20:

(36)   \begin{equation*} S=|E_{ab}+E_{ab'}+E_{a'b}+E_{a'b'}|\leq3,\end{equation*}

it has been found that the two-sided memory model does not result in significant fluctuations in the LHVT upper bounds for expectation values19. The limit of S, for large numbers of particles tested, is thus given by:


(37)   \begin{equation*} \lim_{N\to\infty}S=3.\end{equation*}


While small values of N do result in noticeable changes in the predictions of the LHVT that may undermine the validity of an inequality violation of S>3, most Bell experiments automatically close this loophole through the sheer number of particles measured.

What is left open is the possibility that all measurements at each region could be conducted simultaneously, creating a new LHVT model that has predicted larger expectation values than those allowed for in models maintaining measurement-independence19. While storing the particles for a concurrent measurement at the same location does close the memory loophole, doing so may result in the transmission of information between particles in proximity, as well as make it impossible to spacelike separate detection regions due to the time necessary for all particles to be transported there.

Superdeterminism

Locality is considered by most to be a loophole impossible to close completely. It is unclear whether or not our best randomization techniques output values exactly as they are generated at t=0 or at some time t<0 beforehand. Furthermore, there is an argument to be made that all events (the choice of measurement settings, the states prepared for a Bell experiment, etc.) are predetermined by a common event in all entities’ causal past: the inception of the universe in the Big Bang. This idea of “superdeterminism” is as much a metaphysical position as a scientific one; it implies an absolute lack of free will, a rigid causal pathway for reality entirely at odds with the principles of quantum mechanics. There is nothing that can be done to empirically disprove this interpretation of reality, so any Bell experiment attempting to be “locality loophole-free” must close it within the bounds of reason. Any LHVT deemed sufficiently “conspiratorial” (as superdeterminism is by many scientists) need not be rejected by a Bell experiment.

Collapse

The locality loophole can be expanded by accounting for the element of collapse in a Bell experiment. While locations A and B may be spacelike separated, the locations at which the states of the particles being measured actually collapse may not be. This “collapse-locality loophole” can be exploited by a LHVT to invalidate prior experiments claiming to prove quantum nonlocal causality21.

There are various models that propose different catalysts for the collapse of a system, each attempting to answer the “quantum measurement problem.” This fundamental issue with quantum mechanics exists because the wave functions representing different systems do not describe reality (they are instead sums of different possibilities, only one of which truly manifests itself upon measurement). Many theories involve a measurement-induced discrete jump (or “collapse”) from the continuous, deterministic evolution of the wave function defined by the time-dependent Schrödinger Equation22:

(38)   \begin{equation*} \hat{H}\Psi(x,t)=i\hbar\frac{\partial}{\partial{t}}\Psi(x,t),\end{equation*}

where \hat{H} is the Hamiltonian operator representing the total energy of the system and \Psi(x,t) is the state vector of the system as a function of position and time. Every proposed model provides some equation for collapse or an alteration of Equation 38, leading to distinct mathematical formalisms allowing for unique experimental solutions to the collapse loophole23.

DP Model

One of these theories holds that wave function collapse is caused by gravity. Proposed by Diósi24 and Penrose25, this model posits that the process of measurement is completed when spacetime enters a superposition of geometries that are substantially different. For this to occur, sufficiently massive objects must be in a superposition of different configurations, creating contradictory, significant warps in spacetime. Under the Diósi-Penrose (DP) model, these distinct curvatures require a certain amount of energy to maintain simultaneously, proportional to the mass of the object. High-energy (i.e. massive) systems are more unstable, leading to a gravity-induced collapse to a gravitational “ground state”; as a result, macroscopic entities collapse quickly, while smaller objects may exist in a superposition of spacetime geometries for far longer.

Time and energy are conjugate variables related by the uncertainty principle26:

(39)   \begin{equation*} \Delta{E}\Delta{t}\geq\frac{h}{4\pi}.\end{equation*}

This relation is controversial in quantum mechanics, as there is no associated Hermitian operator for time. (Energy, for example, corresponds to the Hamiltonian operator \hat{H} used in Equation 38.) \Delta{t} is thus typically defined as the average time taken for collapse, an alteration that is substantiated by its accurate use in the study of decaying states (e.g. for radioactive isotopes)27. This allows us to derive the approximate time of collapse T_C as a function of the uncertainty in energy E_G of the various states in a given superposition28:


(40)   \begin{equation*} T_C\approx\frac{h}{4\pi{E_G}}.\end{equation*}


Note that for a two-state superposition a|\chi\rangle+b|\phi\rangle (as introduced in the Background), E_G is mathematically equivalent to the gravitational self-energy U of the difference between the mass distributions of the superimposed states |\chi\rangle and |\phi\rangle29.

The DP model loophole was closed in a Bell experiment by Salart et al. in which a piezoelectric crystal receiving signals from detected photons was placed in a spacetime superposition, and system collapse was determined to occur when the crystal’s gravitational force noticeably displaced a small Faraday mirror7. Using the substitution E_G=U, Equation 40 took the form:

(41)   \begin{equation*} T_C=\frac{3hV}{4\pi^2Gm^2d^2},\end{equation*}

where V was the volume of the Faraday mirror, m was its mass, and d was the total distance it moved as a result of the piezoelectric crystal. This equation can apply to Bell experiments in general, where the parameters listed above correspond to any given object being displaced by a relatively massive entity in superposition. Through this key generalization, we can close this loophole absolutely: if the DP model holds true, any experiment that maintains spacelike separation between the regions at which the system fully collapses according to the definition of T_C in Equation 41 is collapse-locality loophole-free.
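Equation 41 is simple to evaluate numerically. The Python sketch below uses purely hypothetical mirror parameters (not the actual Salart et al. values, which are not reproduced here), so the output is illustrative only:

    import math

    h = 6.62607015e-34  # Planck constant (J s)
    G = 6.67430e-11     # gravitational constant (m^3 kg^-1 s^-2)

    def T_C_DP(V, m, d):
        """DP collapse time from Equation 41."""
        return 3 * h * V / (4 * math.pi**2 * G * m**2 * d**2)

    # Hypothetical mirror parameters, for illustration only:
    V = 1e-9   # volume: 1 mm^3
    m = 2e-6   # mass: 2 mg
    d = 1e-11  # displacement: 10 pm
    print(T_C_DP(V, m, d))  # collapse time in seconds for these toy values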

GRW Model

In contrast with the DP model, the Ghirardi-Rimini-Weber (GRW) model30 involves a spontaneous collapse of the wave function rather than relying on variables such as spacetime geometry. In GRW, “localizations” of the wavefunction occur at a rate \lambda. These collapses have a certain localization distance r_C, which is the maximum distance a particle can appear to jump as the superposition gives way to a definitive state. The localization process is formalized through the equation31:

(42)   \begin{equation*} |\psi\rangle\rightarrow|\psi_x^i\rangle=\hat{L}_x^i|\psi\rangle,\end{equation*}

where \hat{L}_x^i is the operator applied to a state vector |\psi\rangle for a system with n distinguishable particles such that any given particle i is localized around the point in space x. Using Equation 3, we find that the probability density for particle i to collapse into outcome x is the norm squared of its corresponding state:

(43)   \begin{equation*} P_i(x)=||\psi_x^i||^2.\end{equation*}

The values of the constants \lambda and r_C are determined such that the predictions of the GRW model align with what has been experimentally observed. While this seems arbitrary, as both variables rely on \textit{ad hoc} hypotheses rather than justified mathematical formulation, other theories have used similar methods and proven successful, such as general relativity’s use of the cosmological constant, \Lambda. (While \Lambda was only temporarily used in relativity at first, its resurfacing with the discovery of dark energy proved that phenomenological parameters can be useful.) The generally agreed-upon value for \lambda is 10^{-16} localizations per second, which was used in the original GRW paper, though an alternative value of 10^{-8} has also been calculated by taking into consideration experimental data regarding the heating of the intergalactic medium32. r_C has a consensus value of 10^{-7} meters, a mesoscopic distance that leaves macroscopic systems largely unaltered while making collapse a significant process at conventionally “quantum” scales (e.g. electrons).

While the GRW model has not been explicitly tested in a Bell experiment, the setup of the experiment performed by Salart et al. allows for a collapse time to be calculated if the piezoelectric crystal and mirror are regarded as one system and all of their nucleons are distinguishable (GRW does not allow for identical particles). This assumption leads to the GRW collapse time6:

(44)   \begin{equation*} T_C=\frac{16r_C^2}{\lambda{N}d^2},\end{equation*}

where N is the number of nucleons in both the piezoelectric crystal and the mirror, and d is the displacement of the entire system. For this particular experiment, the collapse time is 2\times10^{-4} seconds, longer than the DP estimate, meaning the GRW version of the collapse-locality loophole remains open.

CSL Model

Similar to GRW, the Continuous Spontaneous Localization (CSL) model33 involves a random reduction of the wave function caused by a noise field in spacetime \omega(x,t). The phenomenological parameters \lambda and r_C are kept identical between the two as well. What differentiates them is the discrete nature of the localization process; in GRW, collapse is an instantaneous jump that occurs at a random time, while in CSL it is a continuous event. CSL also allows for identical particles, meaning nucleons involved in any Bell experiment testing it need not be distinguished as they were in the calculation of Equation 44.

For an experiment such as that of Salart et al., the time of collapse under the CSL model can be expressed as34:

(45)   \begin{equation*} T_C=\frac{A}{4\pi\lambda{a^2}N^2},\end{equation*}

where A is the surface area of the Faraday mirror, a is the noise-field correlation length (identified here with r_C), and N is the number of nucleons in the region of the mirror that does not overlap between its superposition states. Using the parameter values discussed earlier, the time of collapse for this particular experiment is 10^{-8} seconds, faster than under the DP model; this indicates that the CSL version of the collapse-locality loophole is closed.
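Both spontaneous-collapse times can be computed from the consensus parameters. In the Python sketch below, the system parameters N, d, and A are hypothetical placeholders (the actual Salart et al. values are not reproduced here), so the outputs illustrate the formulas only:

    import math

    lam = 1e-16  # localization rate (s^-1), consensus value
    r_C = 1e-7   # localization distance (m), consensus value

    def T_C_GRW(N, d):
        """GRW collapse time, Equation 44."""
        return 16 * r_C**2 / (lam * N * d**2)

    def T_C_CSL(A, a, N):
        """CSL collapse time, Equation 45, with a the noise-field correlation length."""
        return A / (4 * math.pi * lam * a**2 * N**2)

    # Hypothetical system parameters, for illustration only:
    print(T_C_GRW(N=1e28, d=1e-11))        # seconds
    print(T_C_CSL(A=1e-5, a=r_C, N=1e20))  # seconds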

Consciousness

An especially controversial theory is that consciousness catalyzes collapse rather than an “objective” process. While other models rely on fundamental forces (e.g. gravity) or a spontaneous process, this hypothesis argues that physical observation through human perception is necessary to reduce a superposition of states. First proposed by von Neumann35, this idea results in many different formulations of how and when collapse occurs.

What causes consciousness is disputed by scientists. Some believe it to be a physical phenomenon caused by some (as of yet ill-defined) quantum mechanical or classical process, making it implausible for it to have the singular power to collapse superpositions; others see it as a spiritual or metaphysical force. Those of the latter camp, while generally a minority amongst physicists, can still be satisfied by an adequately prepared Bell experiment: an entangled particle could be sent to a station with a human observer in space (human reaction times mandate that one region of reception be further than any terrestrial point) while its partner is sent elsewhere on earth. Photons have been sent to space at distances upwards of 1400 kilometers before4, meaning a consciousness collapse-locality loophole could be pragmatically closed in the near-future.

Others have integrated subjective factors relevant to consciousness into spontaneous theories such as GRW and CSL. The mechanisms of perception (e.g. through photon interactions in the eye) result in their own physical processes, leading to rates of collapse as low as \lambda=10^{-19} s^{-1} being sufficient for human observation of a localization event36,37. This would put our current experimental evidence several orders of magnitude away from being loophole-free. On the other hand, it can be argued that before the conscious mind registers an event, the observer begins to undergo some physiological reaction. (For example, a bright light may cause one to recoil before they gauge its physical state, collapsing the system more quickly than in a model purely based in consciousness.) In this case, a rapid collapse (and thus an increased lower bound for \lambda) may better represent reality, bolstering the claim that current Bell experiments are collapse-locality loophole-free.

Experimental Improvements

In recent years, large strides have been made in the methodology of Bell experiments. Photodetection technology has become sufficiently advanced to declare a class of results from the past decade detection loophole-free; techniques such as the use of superconducting circuits to transport particles, nitrogen-vacancy centers to easily read out spin states, and quantum noise correction to reduce photon loss through channels have improved the quality of Bell experiments; and different means of randomization have put forth compelling arguments for measurement independence. Many physicists have deemed Bell experiments “complete,” claiming them to be loophole-free and decisive evidence of nonlocal causality. While they have certainly given substantial evidence against LHVTs, there is still work to be done, with the further elimination of noise, more convincing proof of the freedom of variables, and increased separation of regions to close all cases of the collapse-locality loophole still pending.

Noise

The detection loophole is one area where the need for experimental improvement is not entirely obvious; experiments for years have shown detection efficiencies that substantially clear the minimum threshold \eta^* of Equation 24 and Eberhard’s Bound, such as two experiments in 2013 with efficiencies of 75% and 98%38,39. To determine whether these are truly loophole-free, we generate an exponential regression of the tabulated data from detection loophole-free experiments to find the maximum noise, or photon loss, \zeta_{max} as a function of \eta:

(46)   \begin{equation*} \zeta_{max}=19444.3^{\eta-1.23},\end{equation*}

with a coefficient of determination R^2=0.98. Through this, we find that at a lower bound of \eta=0.95, photon loss must be limited to 6.3%. An efficiency of 95% was chosen due to existing experiments far eclipsing this value, such as the use of a transition edge sensor (TES) to detect photons at up to 98% efficiency in Giustina et al.’s 2013 experiment39. Photon loss is oftentimes expressed in decibels, dB, through the following conversion:

(47)   \begin{equation*} dB=10\log{(1-\zeta)},\end{equation*}

meaning losses must total no more than 0.28 dB in magnitude for experiments at 95\,\% detection efficiency to close the detection loophole. For context, the experiment conducted by Salart et al. had 8 dB of losses between channels with just 10\,\% detection efficiency, leaving the detection loophole clearly open. Even the experiment by Giustina et al., which showed a violation of local realism by nearly 70 standard deviations, fails to account for photon loss during propagation.
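The quoted figures follow directly from Equations 46 and 47, as this Python sketch verifies:

    import math

    def zeta_max(eta):
        """Maximum tolerable noise from the fitted regression (Equation 46)."""
        return 19444.3 ** (eta - 1.23)

    def to_dB(zeta):
        """Decibel equivalent of a loss fraction (Equation 47); the negative sign denotes loss."""
        return 10 * math.log10(1 - zeta)

    z = zeta_max(0.95)
    print(z)         # ~0.063: photon loss must stay below ~6.3%
    print(to_dB(z))  # ~-0.28: i.e. under 0.28 dB of loss in magnitude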

Definitively closing the loophole would require a noise correction scheme. The use of a heralded amplifier (HA) to eliminate loss, as well as a mode of photodetection different from photon number resolving (PNR), could allow for far smaller magnitudes of photon loss. Notably, time-division multiplexing (TDM) methods such as Loop TDM and Balanced TDM have varying probabilities of loss, allowing for the tweaking of experiments such that photons travel only during the time interval in which \zeta is minimized40. The transmission of just half of an entangled state, rather than an entire qubit, through a fibre-optic channel corrected with an HA has been shown to reduce photon loss by up to 12.9 dB5; combining this with optimal detection methods and transmission channels would completely close the detection loophole.

Randomization

The measurement independence loophole is one of the most difficult to close because of its subjectivity; which variables are correlated to one another is oftentimes a matter of perspective, and how intensely an experiment must show freedom of choice is debated amongst physicists. It is clear, however, that current methods of randomizing variables can be improved.

For decades, the primary means of ensuring memoryless and independent measurements has been to rapidly change the detection settings at each measurement region as the entangled particles are being sent from the source. In order to claim that this closes the locality loophole, however, the possible settings must be randomized in a way that they could not have existed in the causal past of each other. Various types of random number generators (RNGs) have thus been used in the generation of measurement settings. The degree of “randomness” is quantified through the entropy of the system, with one possible measure thereof being the Shannon entropy, H(X)41:

(48)   \begin{equation*} H(X)=-\sum_{x\in{X}}P_X(x)\log_2P_X(x),\end{equation*}

where X is the total set of possible values x that the RNG can output. As the distribution over X becomes more uniform, the entropy increases, lowering the amount of information gained through knowledge of prior values.
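A minimal Python sketch (function name ours) implements Equation 48 and shows how bias lowers the entropy of a generator’s output:

    import math

    def shannon_entropy(probs):
        """Shannon entropy H(X) in bits (Equation 48)."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    print(shannon_entropy([0.5, 0.5]))  # 1.0: ideal binary RNG
    print(shannon_entropy([0.25] * 4))  # 2.0: uniform over four outputs
    print(shannon_entropy([0.9, 0.1]))  # ~0.47: biased, hence more predictable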

Shannon entropy presupposes that the variables are statistically independent, a flaw that must be accounted for in the choice of generating process. Classical RNGs determined by computer algorithms, for example, are reliant on deterministic strings of code, undermining Bell inequality violations obtained through their use. To mitigate this, Bell experiments have moved towards quantum phenomena for the generation of random variables, leading to quantum random number generators (QRNGs). Processes such as radioactive decay have been used in QRNGs, with the time taken for a pulse to be emitted from a Geiger-Müller counter (i.e. for an alpha, beta, or gamma decay to occur) being translated into a RNG value. Even this allows for limited prediction of measurement settings, with the probability of decay within a given time interval dt being42:

(49)   \begin{equation*} P(t)dt=\lambda_me^{-\lambda_mt}dt,\end{equation*}

where \lambda_m is the decay constant for the radioactive system. Knowledge of \lambda_m reduces the feasible time values to a calculable range, increasing the predictability of a radioactivity-based QRNG. Similar issues plague QRNGs based in other phenomena.

The use of human choices to generate random variables has yielded promising results. The BIG Bell Test43, using the choices of over 100,000 human participants, obtained a strong inequality violation with a p-value of 10^{-4000}. Despite efforts to incentivize random choice, however, the data showed a slight bias towards choosing basis state |0\rangle over |1\rangle (P(0)\approx0.5327), and a clear tendency to alternate choices (P(01)+P(10)\approx0.6406). For both tendencies, ideal randomness would show a probability of 0.5.
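Applying Equation 48 to the reported single-bit bias quantifies how far the human-generated data fell short of ideal randomness (the computation is ours):

    import math

    p0 = 0.5327  # reported probability of choosing |0>
    H = -(p0 * math.log2(p0) + (1 - p0) * math.log2(1 - p0))
    print(H)  # ~0.997 bits per choice, versus the ideal 1 bit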

Arguably the most definitive source of measurement independence comes from cosmic random number generators (CRNGs), which use data from particles arriving from outer space to generate random variables. The distance in light-years between the sources is used to determine how far in the past they would have had to interact for their states not to be independent. Initial Bell experiments used star luminosity values, limiting the time at which the past light cones of the stars involved could have intersected to over 600 years ago. This is insufficient – the Big Bang is believed to have occurred approximately 13.80 billion years ago, giving local variables far more time to have exploited the locality loophole.

One experiment used the wavelengths of photons from high-redshift quasars (distant, luminous cores of active galaxies) to significantly reduce the possibility of predetermination, ensuring independence for approximately 7.8 billion years44. Alternative sources of random numbers such as the cosmic microwave background, the first trace of light released shortly after the Big Bang, may further limit the time frame for LHVTs. It may even be possible to push the required correlations back to the universe’s first instants if we create a way to extrapolate data from gravitational waves and other remnants of cosmic inflation, the fraction of a second after the Big Bang during which the universe expanded faster than the speed of light, into CRNGs.

“Loophole-free” Criteria

With all of these factors in mind, it is clear that a truly loophole-free Bell experiment is yet to occur. In fact, it may be impossible to reconcile any experiment with every interpretation of quantum mechanics. Different answers to the measurement problem alter the experimental processes we must use to account for state collapse; superdeterminism argues that the locality loophole cannot be overcome; and there is no clear pathway towards completely noiseless or perfect particle detection. At a certain point, improvement in our experimental procedures may yield diminishing returns, necessitating a well-defined set of criteria for sufficiently “loophole-free” experiments.

A possible set of requirements for a conclusive Bell experiment is the following:

Detection

Detector efficiency and noise correction balanced such that they satisfy Equation 46. This would likely require a minimum \eta of 95\,\%, a lower bound that has already been surpassed, as that efficiency permits a reasonable amount of noise. Regardless, some form of noise correction scheme would still be necessary, whether through limiting photon propagation to ideal time intervals or through the use of amplification technology.

The time taken for devices to measure particles after detection must be precisely known (i.e. little margin of error) and sufficiently small such that the possibility of unentangled particles being considered pairs is insignificant.

Locality

RNGs used for measurement setting selection must be based in data sources with minimum possibility of having a causal relationship. Ideally, a CRNG with data from opposite ends of the particle horizon, the furthest possible distance from an earthly observer, would be used. For this to occur, we would need to be able to observe cosmic bodies (likely stars, as their luminosity data is commonly used) upwards of 47 billion light-years away from us or our observing technologies. While such a monumental feat is currently beyond reach (the James Webb Telescope, for example, can see objects approximately 13.7 billion light-years away), it does give us an optimal distance from which to quantify the validity of current and future experiments.

Measurement settings must be rapidly switched while photons are in flight, and measurement devices must be placed as far apart as possible (ideally in outer space) to ensure that no meaningful communication can occur between them.
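Quantitatively, for stations separated by a distance D, the choice of setting and the completion of measurement must together fit within the light-travel time between the stations, t_{switch} + t_{meas} < D/c. With an illustrative D = 100 km, the entire choose-and-measure sequence must finish in under roughly 333 \mu s.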

The geometry of particle pathways, and the materials used to transport them, must be optimized such that particles move as quickly as possible from the source to the measurement device. This could take the form of superconducting circuits for electrons or high-speed channels for photons, and would minimize the possibility of measurement memory affecting the experiment.
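The memory concern can be illustrated with a timing comparison (a sketch; the fiber index, separation, and trial period are assumed placeholders). In a source-equidistant setup, a light-speed signal carrying station A’s (n-1)^{th} setting easily beats the n^{th} photon to station B:

    # Sketch: in a source-equidistant setup, can station A's (n-1)th
    # setting, sent at light speed, reach station B before the nth
    # photon arrives through fiber? All parameters are illustrative.
    C = 2.998e8      # speed of light in vacuum, m/s
    N_FIBER = 1.468  # assumed refractive index of optical fiber

    def previous_setting_reaches_b(separation_m, trial_period_s):
        """Compare a hidden signal's A-to-B transit time against the
        next photon's source-to-B transit time plus the trial period."""
        signal_time = separation_m / C
        photon_time = trial_period_s + (separation_m / 2) * N_FIBER / C
        return signal_time < photon_time

    # Stations 1 km apart, one trial every 10 microseconds:
    print(previous_setting_reaches_b(1_000.0, 1e-5))  # True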

Collapse

Experiments must factor in sources of collapse such as gravitation and perception. While it may not be possible to simultaneously close the collapse-locality loophole for all accepted collapse models, major ones may be isolated and subjected to the above criteria with the additional factor of collapse time. The setup of Salart et al. accomplishes this for the DP and CSL models provided that better detection methods and noise correction are used, while a larger distance between measurement devices is required for the GRW model (and a larger one still for consciousness-induced collapse).
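The distance requirement follows from demanding spacelike separation over the collapse time rather than the detection time: the stations must be farther apart than light can travel before collapse completes. The sketch below uses placeholder orders of magnitude for the collapse times, not the values calculated for each model in this paper.

    # Sketch: minimum station separation when collapse, not detection,
    # marks the end of a measurement. The collapse times below are
    # placeholder orders of magnitude only.
    C = 2.998e8  # m/s

    collapse_times_s = {"DP": 1e-7, "CSL": 1e-4, "GRW": 1e-2}

    for model, t_collapse in collapse_times_s.items():
        min_separation_m = C * t_collapse
        print(f"{model}: separation must exceed {min_separation_m:,.0f} m")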

Some of these parameters can be satisfied far more easily than others. The detection loophole has shown the most promising room for complete closure, while experiments will likely push the invalidation of LHVTs back through cosmic history only gradually. Collapse-locality is as metaphysical a loophole as it is empirical, and will be debated indefinitely unless the quantum measurement problem itself is solved. In spite of these difficulties, this proposed set of criteria provides a clear way forward for Bell experiments; while they are not truly “loophole-free,” we can standardize the extent to which they are and from there determine how far we wish to go in disproving local causality.

Conclusions

Improvements in the field of loophole-free Bell tests were reviewed and mathematically expanded, showing that current experimental evidence largely invalidates LHVTs. Many cases of the detection loophole (different practical setups, heterogeneous detectors, time differentials between pair detection, etc.) have been considered and mathematically formalized, and we have found that the existing literature satisfies them. Spacelike separation has been ensured between measurement regions, and memory was proven to be largely inconsequential beyond the (n-1)^{th} particle through a comparison of the speeds at which information and entangled particles travel through typical channels. The gravitational collapse-locality loophole posed by the DP model has been shown to be closed. While noise continues to undermine the results of Bell experiments, the improvement of photodetection technology has largely eliminated the need for correction schemes (though they remain desirable); to ensure closure of the detection loophole, all Bell experiments simply need to utilize high-efficiency technology that is already available.

In spite of these accomplishments, several flaws in our present methodology have been found. The rate of collapse dictated by the parameter \lambda in objective collapse models may be slower than previously thought by a factor of 10^{3} due to photon-eye interactions, leaving the collapse-locality loophole open. Existing methods of randomization have been shown to be insufficient thus far in guaranteeing the independence of variables, with cosmic Bell experiments requiring significant improvement to eliminate the possibility of LHVTs dating back to the Big Bang. To date, not a single experiment has synthesized our methods of closing individual loopholes to create a truly loophole-free violation of Bell’s inequality.

Creating such an experiment would involve serious logistical problems: separating measurement regions by orders of magnitude more than has previously been achieved, observing cosmic data primordial enough to maximize measurement independence, and deploying high-efficiency detectors and low-noise channels all at once would be a costly, resource-intensive undertaking. However, the possible applications in cryptography, quantum gravity, and other fields of the processes used to close these loopholes make it a task worth pursuing.

References


  1. A. Einstein, B. Podolsky, and N. Rosen, “Can quantum-mechanical description of physical reality be considered complete?”, Phys. Rev. 47, 777–780 (1935).
  2. J. S. Bell, “On the Einstein Podolsky Rosen paradox”, Physics Physique Fizika 1, 195–200 (1964).
  3. N. Brunner, D. Cavalcanti, S. Pironio, V. Scarani, and S. Wehner, “Bell nonlocality”, Rev. Mod. Phys. 86, 419–478 (2014).
  4. J.-G. Ren, P. Xu, H.-L. Yong, L. Zhang, S.-K. Liao, J. Yin, W.-Y. Liu, W.-Q. Cai, M. Yang, L. Li, K.-X. Yang, X. Han, Y.-Q. Yao, J. Li, H.-Y. Wu, S. Wan, L. Liu, D.-Q. Liu, Y.-W. Kuang, Z.-P. He, P. Shang, C. Guo, R.-H. Zheng, K. Tian, Z.-C. Zhu, N.-L. Liu, C.-Y. Lu, R. Shu, Y.-A. Chen, C.-Z. Peng, J.-Y. Wang, and J.-W. Pan, “Ground-to-satellite quantum teleportation”, Nature 549, 70–73 (2017).
  5. S. Slussarenko, M. M. Weston, L. K. Shalm, V. B. Verma, S.-W. Nam, S. Kocsis, T. C. Ralph, and G. J. Pryde, “Quantum channel correction outperforming direct transmission”, Nature Communications 13, 10.1038/s41467-022-29376-4 (2022).
  6. A. Kent, “Stronger tests of the collapse-locality loophole in Bell experiments”, Phys. Rev. A 101, 012102 (2020).
  7. D. Salart, A. Baas, J. A. W. van Houwelingen, N. Gisin, and H. Zbinden, “Spacelike separation in a Bell test assuming gravitationally induced collapses”, Phys. Rev. Lett. 100, 220404 (2008).
  8. J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, “Proposed experiment to test local hidden-variable theories”, Phys. Rev. Lett. 23, 880–884 (1969).
  9. https://creativecommons.org/licenses/by-sa/3.0/
  10. A. Einstein, B. Podolsky, and N. Rosen, “Can quantum-mechanical description of physical reality be considered complete?”, Phys. Rev. 47, 777–780 (1935).
  11. B. S. Cirel’son, “Quantum generalizations of Bell’s inequality”, Letters in Mathematical Physics 4, 10.1007/BF00417500 (1980).
  12. P. M. Pearle, “Hidden-variable example based upon data rejection”, Phys. Rev. D 2, 1418–1425 (1970).
  13. N. D. Mermin, “The EPR Experiment—Thoughts about the ‘Loophole’”, in New Techniques and Ideas in Quantum Measurement Theory, Vol. 480, edited by D. M. Greenberger (New York Academy of Sciences, 1986), p. 422.
  14. P. H. Eberhard, “Background level and counter efficiencies required for a loophole-free Einstein-Podolsky-Rosen experiment”, Phys. Rev. A 47, R747–R750 (1993).
  15. S. Massar and S. Pironio, “Violation of local realism versus detection efficiency”, Phys. Rev. A 68, 062109 (2003).
  16. J. F. Clauser and M. A. Horne, “Experimental consequences of objective local theories”, Phys. Rev. D 10, 526–535 (1974).
  17. A. Cabello and J.-Å. Larsson, “Minimum detection efficiency for a loophole-free atom-photon Bell experiment”, Phys. Rev. Lett. 98, 220402 (2007).
  18. J.-Å. Larsson and R. D. Gill, “Bell’s inequality and the coincidence-time loophole”, Europhysics Letters 67, 707 (2004).
  19. J. Barrett, D. Collins, L. Hardy, A. Kent, and S. Popescu, “Quantum nonlocality, Bell inequalities, and the memory loophole”, Phys. Rev. A 66, 042111 (2002).
  20. D. Collins, N. Gisin, N. Linden, S. Massar, and S. Popescu, “Bell inequalities for arbitrarily high-dimensional systems”, Phys. Rev. Lett. 88, 040404 (2002).
  21. A. Kent, “Causal quantum theory and the collapse locality loophole”, Phys. Rev. A 72, 012107 (2005).
  22. M. O. Scully, “The time dependent Schrödinger equation revisited I: quantum field and classical Hamilton-Jacobi routes to Schrödinger’s wave equation”, Journal of Physics: Conference Series 99, 012019 (2008).
  23. A. Kent, “Stronger tests of the collapse-locality loophole in Bell experiments”, Phys. Rev. A 101, 012102 (2020).
  24. L. Diósi, “A universal master equation for the gravitational violation of quantum mechanics”, Physics Letters A 120, 377–381 (1987).
  25. R. Penrose, “On Gravity’s role in Quantum State Reduction”, General Relativity and Gravitation 28, 581–600 (1996).
  26. W. Heisenberg, “Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik”, Zeitschrift für Physik 43, 10.1007/BF01397280 (1927).
  27. E. P. Wigner, “On the time—energy uncertainty relation”, in Special Relativity and Quantum Theory: A Collection of Papers on the Poincaré Group, edited by M. E. Noz and Y. S. Kim (Springer Netherlands, Dordrecht, 1988), pp. 199–209.
  28. R. Penrose, The Road to Reality: A Complete Guide to the Laws of the Universe (Random House, London, 2005).
  29. R. Penrose, The Road to Reality: A Complete Guide to the Laws of the Universe (Random House, London, 2005).
  30. G. Ghirardi, A. Rimini, and T. Weber, “Unified dynamics for microscopic and macroscopic systems”, Phys. Rev. D 34, 470–491 (1986).
  31. G. Ghirardi, P. M. Pearle, and A. Rimini, “Markov processes in Hilbert space and continuous spontaneous localization of systems of identical particles”, Phys. Rev. A 42, 78–89 (1990).
  32. S. L. Adler, “Lower and upper bounds on CSL parameters from latent image formation and IGM heating”, Journal of Physics A: Mathematical and Theoretical 40, 2935 (2007).
  33. P. M. Pearle, “Combining stochastic dynamical state-vector reduction with spontaneous localization”, Phys. Rev. A 39, 2277–2289 (1989).
  34. P. M. Pearle, Introduction to Dynamical Wave Function Collapse: Realism in Quantum Physics: Volume 1 (Oxford University Press, Jan. 2024).
  35. J. von Neumann and R. T. Beyer, Mathematical Foundations of Quantum Mechanics: New Edition (Princeton University Press, 2018).
  36. A. Bassi, D.-A. Deckert, and L. Ferialdi, “Breaking quantum linearity: constraints from human perception and cosmological implications”, Europhysics Letters 92, 50006 (2010).
  37. F. Aicardi, A. Borsellino, G. Ghirardi, and R. Grassi, “Dynamical models for state-vector reduction: do they ensure that measurements have outcomes?”, Foundations of Physics Letters 4, 109–128 (1991).
  38. B. G. Christensen, K. T. McCusker, J. B. Altepeter, B. Calkins, T. Gerrits, A. E. Lita, A. Miller, L. K. Shalm, Y. Zhang, S. W. Nam, N. Brunner, C. C. W. Lim, N. Gisin, and P. G. Kwiat, “Detection-loophole-free test of quantum nonlocality, and applications”, Phys. Rev. Lett. 111, 130406 (2013).
  39. M. Giustina, A. Mech, S. Ramelow, B. Wittmann, J. Kofler, J. Beyer, A. Lita, B. Calkins, T. Gerrits, S. W. Nam, R. Ursin, and A. Zeilinger, “Bell violation using entangled photons without the fair-sampling assumption”, Nature 497, 227–230 (2013).
  40. P. P. Rohde, J. G. Webb, E. H. Huntington, and T. C. Ralph, “Photon number projection using non-number-resolving detectors”, New Journal of Physics 9, 233 (2007).
  41. C. E. Shannon, “A mathematical theory of communication”, The Bell System Technical Journal 27, 379–423 (1948).
  42. V. Mannalatha, S. Mishra, and A. Pathak, “A comprehensive review of quantum random number generators: concepts, classification and the origin of randomness”, Quantum Information Processing 22, 10.1007/s11128-023-04175-y (2023).
  43. C. Abellán, A. Acín, A. Alarcón, O. Alibart, C. K. Andersen, F. Andreoli, A. Beckert, F. A. Beduini, A. Bendersky, M. Bentivegna, P. Bierhorst, D. Burchardt, A. Cabello, J. Cariñe, S. Carrasco, G. Carvacho, D. Cavalcanti, R. Chaves, J. Cortés-Vega, A. Cuevas, A. Delgado, H. de Riedmatten, C. Eichler, P. Farrera, J. Fuenzalida, M. García-Matos, R. Garthoff, S. Gasparinetti, T. Gerrits, F. Ghafari Jouneghani, S. Glancy, E. S. Gómez, P. González, J.-Y. Guan, J. Handsteiner, J. Heinsoo, G. Heinze, A. Hirschmann, O. Jiménez, F. Kaiser, E. Knill, L. T. Knoll, S. Krinner, P. Kurpiers, M. A. Larotonda, J.-Å. Larsson, A. Lenhard, H. Li, M.-H. Li, G. Lima, B. Liu, Y. Liu, I. H. López Grande, T. Lunghi, X. Ma, O. S. Magaña-Loaiza, P. Magnard, A. Magnoni, M. Martí-Prieto, D. Martínez, P. Mataloni, A. Mattar, M. Mazzera, R. P. Mirin, M. W. Mitchell, S. Nam, M. Oppliger, J.-W. Pan, R. B. Patel, G. J. Pryde, D. Rauch, K. Redeker, D. Rieländer, M. Ringbauer, T. Roberson, W. Rosenfeld, Y. Salathé, L. Santodonato, G. Sauder, T. Scheidl, C. T. Schmiegelow, F. Sciarrino, A. Seri, L. K. Shalm, S.-C. Shi, S. Slussarenko, M. J. Stevens, S. Tanzilli, F. Toledo, J. Tura, R. Ursin, P. Vergyris, V. B. Verma, T. Walter, A. Wallraff, Z. Wang, H. Weinfurter, M. M. Weston, A. G. White, C. Wu, G. B. Xavier, L. You, X. Yuan, A. Zeilinger, Q. Zhang, W. Zhang, J. Zhong, and The BIG Bell Test Collaboration, “Challenging local realism with human choices”, Nature 557, 212–216 (2018).
  44. D. Rauch, J. Handsteiner, A. Hochrainer, J. Gallicchio, A. S. Friedman, C. Leung, B. Liu, L. Bulla, S. Ecker, F. Steinlechner, R. Ursin, B. Hu, D. Leon, C. Benn, A. Ghedina, M. Cecconi, A. H. Guth, D. I. Kaiser, T. Scheidl, and A. Zeilinger, “Cosmic Bell test using random measurement settings from high-redshift quasars”, Phys. Rev. Lett. 121, 080403 (2018).
