# Loss-Tolerant Tests Of Einstein-Podolsky-Rosen-Steering

###### Abstract

We analyse two classes of Einstein-Podolsky-Rosen (EPR)-steering inequalities, the violation of which can be used to demonstrate EPR-steering with an entangled two-qubit Werner state: linear inequalities and quadratic inequalities. We discuss how post-selection of results (by appeal to the fair sampling assumption) can compromise the rigour of these inequalities in experimental tests of EPR-steering. By considering the worst-case scenarios in which detector inefficiency or other loss could be exploited within a local hidden-state model, we derive inequalities that enable rigorous but loss-tolerant demonstrations of EPR-steering. The linear inequalities, and special cases of the quadratic inequalities, have been used in recent experiments. Our results indicate that regardless of the number of settings used, quadratic inequalities are never better, and often worse, than linear inequalities.

###### pacs:

03.65.Ud, 03.67.Mn, 42.50.Xa

## I Introduction

The nonlocality of entangled quantum states is one of the most significant, and most heavily debated, features of quantum mechanics. An apparent action at a distance was the basis on which Einstein, Podolsky, and Rosen (EPR) challenged the completeness of quantum mechanics in their historic paper key-1 (). There they demonstrated, starting from a set of assumptions now known as “local realism” despag () or “local causality” locaus (), a contradiction between the quantum predictions for certain correlations of entangled states and the assumption that quantum mechanics provides a complete description of reality.

Schrödinger Schrod () noticed the strangeness of the nonlocal effect uncovered by EPR, which he called “steering,” but believed that such correlations would not be observable between distant objects. In 1964 Bell Bell () showed that EPR’s assumption of local causality was itself in contradiction with the predictions of quantum mechanics, and that no local hidden variable (LHV) model of the kind envisaged by EPR could explain all correlations between entangled states. Those correlations (i.e., violations of Bell inequalities) have since been observed Aspect (); Giustina (), up to some experimental loopholes Garg (); Cyril ().

Schrödinger Schrod () also coined the term “entanglement,” but the concept was not defined for general (mixed) states until 1989, by Werner Werner (). In that paper, Werner also showed that not all entangled states violate a Bell inequality, thus demonstrating that Bell nonlocality is not synonymous with entanglement.¹

¹ Werner actually used the term “EPR correlated” instead of “entangled”, but as we will see below, EPR correlations are not synonymous with entanglement either.

Inequalities to demonstrate the correlations of the EPR paradox were proposed, also in 1989, by Reid and co-workers Reid1989 (); Reid2009 (). However, the correlations discussed by EPR were formally defined as a general class only in 2007, by Wiseman and co-workers key-3 (); key-4 (). Adopting Schrödinger’s term for the EPR correlations, they showed that not all entangled states display steering, and that not all states that demonstrate steering violate Bell inequalities. To avoid ambiguity in terminology, this effect is now usually labelled EPR-steering key-5 (); Saunders (); Bennett (); Parsimonious (); Vallone ().

A formal definition of EPR-steering allows mathematically rigorous tests to be devised key-5 (), which are typically formulated as inequalities. Experimental results which violate these inequalities should then demonstrate EPR-steering (much as Bell nonlocality is demonstrated by the violation of Bell inequalities). However, as is the case with Bell inequalities, there are gaps to be bridged between mathematical and experimental rigour. Use of the fair sampling assumption results in one such loophole through which experimental imperfections may compromise the rigour in tests of EPR-steering Garg (); Bennett ().

In this paper, we will describe two different kinds of EPR-steering criteria – linear and nonlinear – which address the inefficient detection loophole, each with a varying number of qubit spin measurements (regularly spaced about the Bloch sphere), from 2 to 10. From these criteria, we calculate loss-tolerant EPR-steering inequalities that close the detection loophole, and we then compare the strength of the various tests. The linear inequalities that we derive are the same as those used in Ref. Bennett (), and special cases of the nonlinear inequalities that we derive were used in Refs. Vienna (); UQ (). We derive lower bounds on the efficiency of detectors that can be used to rigorously demonstrate EPR-steering. It will be observed that in the great majority of cases, using more measurements makes our tests of EPR-steering more loss-tolerant. It will also be observed that the nonlinear inequalities we consider offer no advantage over the linear inequalities. All of our inequalities are optimized for Werner states (that is, depolarized versions of maximally entangled states); loss-tolerant EPR-steering inequalities for non-maximally entangled states are considered in Ref. Vallone ().

The remainder of this paper is organised as follows. We begin Sec. II by describing EPR-steering, and its operational definition. We introduce two different EPR-steering criteria in Sec. III, and discuss Bob’s optimal measurement strategies for using these criteria. We then address, in Sec. IV, the ways in which Alice and Bob can deal with the possibility of inefficient detectors. Following from this discussion, we derive loss-tolerant bounds on both EPR-steering inequalities in Sec. V. Finally, we compare how well these criteria perform for demonstrations of EPR-steering.

## II Definition of EPR-steering

A demonstration of EPR-steering is a task involving two parties, Alice and Bob, who share a pair of quantum systems that may or may not be entangled (in this paper, we restrict our considerations to qubits). The task is for Alice to convince Bob that they share entanglement. However, while Bob trusts his own apparatus in that he trusts that his measurements are described by the appropriate quantum mechanical operator on a qubit, he does not trust Alice (or her apparatus) in the same way.

The task proceeds as follows: Upon receiving a state, Bob tells Alice which measurement to make, and performs the same measurement on his own state (this is optimal for verifying singlet-like entanglement, which is typically the case of interest key-3 (); key-4 (); key-5 (); Saunders (); Parsimonious (); Vallone (); Bennett (); Vienna (); UQ (); preparation ()). We assume that Bob’s equipment is trustworthy, and therefore is actually performing measurements on a quantum state to generate his results. However, Bob does not trust that Alice is generating her results by performing measurements on a state that is entangled with his; rather, he allows for the possibility that his state could be locally predetermined by some variables which Alice may have access to. This would amount to a local hidden state key-4 () (LHS) model for Bob’s system, where Bob’s results are derived from some quantum state, but where no such assumption is made about Alice’s results, which may be determined by some hidden variable (which may be classically correlated with Bob’s state). Thus, if the experimental statistics cannot be described by a LHS model, then Bob will be convinced that Alice can steer his state, and thus that they share entanglement. As shown in Ref. key-5 (), the assumption of a LHS model leads to certain EPR-steering inequalities, the violation of which indicates the occurrence of EPR-steering.

Without being able to nonlocally influence Bob’s state (i.e., in a LHS model), Alice’s ability to manipulate Bob’s measurements is maximised if she can control what pure state Bob will be given. Thus, we will assume that she has the ability to do so. In the operational scheme we are considering, Alice knows which measurements Bob will make, but she does not know which one he will choose in any given run of the experiment. The causal separation of Alice and Bob is embodied in the assertion that Bob selects his measurements in such a way that Alice must decide what state to send Bob before she knows what measurement he will perform on it.

## III EPR-Steering Inequalities

The two kinds of EPR-steering criteria in this paper are both additive convex criteria, employed in the manner described by Ref. key-5 (). The first will be a linear correlation function between Bob’s measured results and Alice’s claimed results. The second will be based on Bob’s inference variance Reid1989 (), i.e., the conditional variance of Bob’s measurement results, given Alice’s.

The entangled states that we will consider are Werner states, i.e., states of the form

$$W_\mu^{AB} = \mu\,|\psi_s\rangle\langle\psi_s| + (1-\mu)\,\frac{\mathbb{1}^A\otimes\mathbb{1}^B}{4}, \tag{1}$$

where $|\psi_s\rangle = (|{\uparrow\downarrow}\rangle - |{\downarrow\uparrow}\rangle)/\sqrt{2}$ is the spin singlet state, and the purity parameter $\mu$ is constrained such that $0 \leq \mu \leq 1$. The superscript $A$ denotes a feature of Alice’s system, and $B$ denotes a feature of Bob’s. The experiments to which our work applies Saunders (); Bennett (); Vienna (); UQ () were done using entangled pairs of photon polarisation qubits, but we will use the terminology of spin.

### III.1 Additive correlation bound

The first EPR-steering criterion that we will use is the average (over the measurement settings) of the expectation value of the correlation function of Alice and Bob’s spin measurements, which we will label $S_n$. It can be defined generally (i.e., without assuming an honest Alice) from Alice’s expectation value of Bob’s result, $\langle\hat\sigma^B_k\rangle_{A_k}$, based on her own result, $A_k$:

$$S_n = \frac{1}{n}\sum_{k=1}^{n}\left\langle A_k\,\langle\hat\sigma^B_k\rangle_{A_k}\right\rangle. \tag{2}$$

Here the index $k$ references a measurement setting from a set of $n$ different orientations along each of which Alice and Bob must measure. Bob must randomly choose between these measurements, and since they are in separate labs, Alice does not know which measurement will be performed on each state she sends to Bob. It is this feature which will lead to the limits on $S_n$ in a no-steering model. The symbol $\langle\cdot\rangle$ denotes the ensemble average over Alice’s result $A_k$.

For an honest Alice (in a quantum mechanical model), Eq. (2) can be expressed as

$$S_n = \frac{1}{n}\sum_{k=1}^{n}\sum_{a=\pm1} P(A_k = a)\,a\,\mathrm{Tr}\left[\hat\rho^B_{a|k}\,\hat\sigma^B_k\right], \tag{3}$$

where $\hat\rho^B_{a|k}$ is the reduced state of Bob’s qubit, conditioned on outcome $a$ for Alice’s measurement of $\hat\sigma^A_k$. Correspondingly, $B_k$ shall represent the result of Bob’s measurement of $\hat\sigma^B_k$, when necessary. In a quantum mechanical model where Alice is honest, and Alice and Bob do share a Werner state as in Eq. (1), the correlation function yields a result

$$S_n = \mu, \tag{4}$$

taking the convention that an honest Alice reports $A_k$ as the negative of her own spin result, to account for the anticorrelation of the singlet.

It should be noted that for individual measurements of a Pauli operator, the results of either party’s measurements will always be either $+1$ or $-1$. However, if Alice is cheating, there is no such constraint on what her reported results can be. If Alice is simply making up her results, she can choose any value she fancies, but if she submits to Bob any value other than $\pm1$, he will recognise that her result was clearly not obtained from any spin measurement. Therefore, if an untrustworthy Alice is to convince Bob of EPR-steering, she must still constrain her results to $A_k \in \{-1, +1\}$.

For an untrustworthy Alice who does not share entanglement with Bob, the experimental results must be describable by a LHS model and thus take the form

$$S_n = \frac{1}{n}\sum_{k=1}^{n}\int d\xi\, P(\xi)\, A_k(\xi)\,\mathrm{Tr}\left[\hat\rho_\xi\,\hat\sigma^B_k\right], \tag{5}$$

where $\hat\rho_\xi$ is the state which Alice prepares for Bob, with probability $P(\xi)$, and where $A_k(\xi)$ is the result that Alice will declare for each value of $k$ and $\xi$. The choice of which state to send to Bob and the choice of $A_k$ can be said to have a common dependence on some variable, or set of variables, which we label $\xi$. Alice’s cheating strategy will thus consist of specifying these variables and dependencies so as to maximise $S_n$.

The assumption of a local hidden state model in Eq. (5) means that there exists a bound upon Eq. (5) that is not present upon Eq. (3). The general proof for linear EPR-steering criteria can be found in Ref. key-5 () and does not need to be reproduced here – the relevant mathematical implication is that the (achievable) upper bound on the expectation value of an operator is the largest eigenvalue of that operator. Using this property, we can calculate the largest possible value of $S_n$ that can be obtained in a no-steering model. This means that for this case, where Alice reports a result in every run, Alice can maximize the correlation function by sending Bob an identical LHS for every run in an experiment. (The non-deterministic cases where she must choose different states will be dealt with in Sec. IV.) Thus Alice’s optimal choice of $\hat\rho_\xi$ will be one that makes $S_n$ as large as possible. To obtain the maximum value of $S_n$ with this state, Alice must also use an optimal set of $A_k$ values. Such a set is easily found by requiring $A_k = +1$ when $\mathrm{Tr}[\hat\rho_\xi\hat\sigma^B_k] \geq 0$ and $A_k = -1$ when $\mathrm{Tr}[\hat\rho_\xi\hat\sigma^B_k] < 0$. Regardless of the values that a cheating Alice may submit, it will hold that

$$S_n \leq \max_{\{A_k\}}\;\lambda_{\max}\left[\frac{1}{n}\sum_{k=1}^{n} A_k\,\hat\sigma^B_k\right], \tag{6}$$

where $\lambda_{\max}[\cdot]$ denotes the maximum eigenvalue of the following operator. Note that we can drop the $\xi$ dependence of $A_k$ when performing the maximisation, and in future we will often use $A_k$ to denote the random variables that may actually depend upon $\xi$. Thus we have derived an EPR-steering inequality

$$S_n \leq C_n \equiv \max_{\{A_k\}}\;\lambda_{\max}\left[\frac{1}{n}\sum_{k=1}^{n} A_k\,\hat\sigma^B_k\right], \tag{7}$$

the violation of which would demonstrate EPR-steering. The value of $C_n$ thus depends only on the set of measurements $\{\hat\sigma^B_k\}$. We will specify below that each set of measurements is uniquely characterised (among our five different measurement sets) by the number of measurements in it, so it is sufficiently distinctive to write that $C_n$ depends only on $n$. Note that in Eq. (7) we have ignored the minus sign in the maximisation of eigenvalues, as the eigenvalues of every possible $\frac{1}{n}\sum_k A_k\hat\sigma^B_k$ come in pairs of real numbers that are of the same magnitude, but opposite sign.
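This eigenvalue bound is straightforward to evaluate numerically: brute-force the $2^n$ sign choices $A_k = \pm1$ and take the largest eigenvalue of $\frac{1}{n}\sum_k A_k(\mathbf{v}_k\cdot\hat{\boldsymbol\sigma})$. A minimal sketch (function names are ours, not from the paper):

```python
import itertools

import numpy as np

# Pauli matrices
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_op(v):
    """Spin observable v . sigma for a unit Bloch vector v."""
    return v[0] * SX + v[1] * SY + v[2] * SZ

def linear_bound(axes):
    """Largest eigenvalue of (1/n) sum_k A_k (v_k . sigma),
    maximised over the 2^n sign choices A_k = +/-1."""
    n = len(axes)
    best = -np.inf
    for signs in itertools.product([1, -1], repeat=n):
        op = sum(a * spin_op(v) for a, v in zip(signs, axes)) / n
        best = max(best, np.linalg.eigvalsh(op)[-1])
    return best

# n = 2 (x and z axes, the 'square' set): bound is 1/sqrt(2)
print(linear_bound([np.array([1.0, 0, 0]), np.array([0, 0, 1.0])]))
# n = 3 (x, y, z axes, the octahedron): bound is 1/sqrt(3)
print(linear_bound([np.eye(3)[i] for i in range(3)]))
```

Since $(\mathbf{w}\cdot\hat{\boldsymbol\sigma})$ has eigenvalues $\pm|\mathbf{w}|$, this search is equivalent to maximising $|\sum_k A_k\mathbf{v}_k|/n$ over sign choices, the vector form used in Sec. III.4 below.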

### III.2 Additive inference variance bound

Our second EPR-steering criterion is based on the variance of Bob’s measurements,

$$\mathrm{Var}\left(\hat\sigma^B_k\right) = \left\langle\left(\hat\sigma^B_k\right)^2\right\rangle - \left\langle\hat\sigma^B_k\right\rangle^2 = 1 - \left\langle\hat\sigma^B_k\right\rangle^2.$$

The sum of Bob’s variances over all $n$ measurements would be

$$\sum_{k=1}^{n}\mathrm{Var}\left(\hat\sigma^B_k\right) = n - \sum_{k=1}^{n}\left\langle\hat\sigma^B_k\right\rangle^2,$$

so it is more notationally convenient to simply use a form of the second term in this expression. Labelling it as $T_n$ and including a normalisation factor of $1/n$, we will concern ourselves with

$$T_n = \frac{1}{n}\sum_{k=1}^{n}\left\langle\hat\sigma^B_k\right\rangle^2, \tag{8}$$

which is another expression that satisfies the additive convex criteria described in Ref. key-5 (). (It should be noted that this function is strictly convex because all $\langle\hat\sigma^B_k\rangle$ values are necessarily real numbers – if they were not, then the square would not necessarily be a convex function.) The way in which this can be used as an EPR-steering criterion is if we consider how well Alice can estimate the expectation value of $\hat\sigma^B_k$, conditioned upon her measurement result, $A_k$. Thus, our EPR-steering function will be

$$T_n = \frac{1}{n}\sum_{k=1}^{n}\left\langle\,\langle\hat\sigma^B_k\rangle^2_{A_k}\right\rangle, \tag{9}$$

where, as above, $\langle\cdot\rangle$ represents Alice’s ensemble average. A sufficiently large value for this parameter means Alice has a sufficiently small inference variance for a noncommuting set of Bob’s observables that this knowledge can only be explained by the EPR-steering phenomenon.

Estimates of Bob’s results can be inferred from Alice’s results. For an honest Alice, the accuracy of such inferred results depends on the degree of entanglement present. In the case of a completely entangled state (i.e., $\mu = 1$), Bob’s results could be determined from Alice’s with perfect accuracy.

If Alice and Bob genuinely do share a Werner state as in Eq. (1), and Alice performs upon it a measurement in a direction $\mathbf{u}_k$, obtaining result $a$, then the conditioned state on Bob’s side will be $\mu\,\hat\pi(-a\mathbf{u}_k) + (1-\mu)\,\mathbb{1}/2$. Here, $\hat\pi(\mathbf{u})$ is a pure state with unit-length Bloch vector $\mathbf{u}$, representing the spin orientation (see also Sec. III.3 below). The expectation value of a measurement $\hat\sigma_{\mathbf{v}}$ on a pure state $\hat\pi(\mathbf{u})$ is $\mathbf{u}\cdot\mathbf{v}$, where $\mathbf{v}$ is the unit vector representing the orientation of the measurement $\hat\sigma_{\mathbf{v}}$. In the general case, Bob’s expectation value is given by

$$\left\langle\hat\sigma^B_{\mathbf{v}}\right\rangle_{a,\mathbf{u}_k} = -a\mu\,(\mathbf{u}_k\cdot\mathbf{v}),$$

which obviously gives $\mp\mu$ when $a = \pm1$ and $\mathbf{v} = \mathbf{u}_k$. In a quantum mechanical model, Eq. (9) can be calculated as

$$T_n = \frac{\mu^2}{n}\sum_{k=1}^{n}\left(\mathbf{u}_k\cdot\mathbf{v}_k\right)^2, \tag{10}$$

where $\mathbf{v}_k$ is the orientation of Bob’s $k$th measurement. Thus, when an honest Alice is directed by Bob to measure in directions aligned (or anti-aligned) to his own, we should obtain, for all $n$-values,

$$T_n = \mu^2. \tag{11}$$

On the other hand, if Alice is dishonest, and does not share entanglement with Bob, her results will be determined by a LHS model, and $T_n$ will be calculated as

$$T_n = \frac{1}{n}\sum_{k=1}^{n}\int d\xi\, P(\xi)\left(\mathrm{Tr}\left[\hat\rho_\xi\,\hat\sigma^B_k\right]\right)^2. \tag{12}$$

In the absence of any entanglement, the accuracy with which Alice can infer Bob’s results is dependent on how well Alice can predict the results of Bob’s measurements upon his LHS, $\hat\rho_\xi$. Therefore, Alice’s optimal choice of LHS for Bob will be a state which minimises the summed variance of Bob’s results (thus maximising $T_n$).

Just as with $S_n$, the assumption of a LHS model in Eq. (12) means that there is a bound upon Eq. (12) that does not exist for Eq. (10). The bound on this function can be derived from the bound on the average squared expectation value of $\hat\sigma^B_k$. If Alice sends Bob a pure state $\hat\pi(\mathbf{u})$, we will have

$$\frac{1}{n}\sum_{k=1}^{n}\left(\mathrm{Tr}\left[\hat\pi(\mathbf{u})\,\hat\sigma^B_k\right]\right)^2 = \frac{1}{n}\sum_{k=1}^{n}\left(\mathbf{u}\cdot\mathbf{v}_k\right)^2 = \frac{1}{n}\,\mathbf{u}^{T}M\,\mathbf{u} \leq \frac{1}{n}\,\lambda_{\max}(M), \tag{13}$$

where we have labelled the matrix $\sum_{k=1}^{n}\mathbf{v}_k\mathbf{v}_k^{T}$ as $M$, and have used the fact that $\mathrm{Tr}[\hat\pi(\mathbf{u})\,\hat\sigma_{\mathbf{v}}] = \mathbf{u}\cdot\mathbf{v}$ for pure states. Using this inequality, the bound on $T_n$ in a no-steering (LHS) model can be seen to be

$$T_n \leq D_n \equiv \frac{1}{n}\,\lambda_{\max}(M). \tag{14}$$

This bound is derived for the case that a dishonest Alice uses only one state, $\hat\pi(\mathbf{u})$, to repeatedly send to Bob, but it is clear that it still holds for the case of Alice sending multiple states (so long as Bob always chooses his measurements in an order that is unpredictable by Alice) by the convexity of the function key-5 (). That is, if we consider a mixed state like $\hat\rho = p_1\hat\rho_1 + p_2\hat\rho_2$, where $p_1$ and $p_2$ are probabilities such that $p_1 + p_2 = 1$, convexity gives $T_n(\hat\rho) \leq p_1 T_n(\hat\rho_1) + p_2 T_n(\hat\rho_2) \leq D_n$. Thus Eq. (14) is an EPR-steering inequality.
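This bound reduces to a small matrix computation: build $M = \sum_k \mathbf{v}_k\mathbf{v}_k^{T}$ from Bob's measurement directions and divide its largest eigenvalue by $n$. A minimal sketch (the function name is ours):

```python
import numpy as np

def variance_bound(axes):
    """lambda_max(M) / n for M = sum_k v_k v_k^T, the LHS-model bound
    on the inference-variance function, for unit directions v_k."""
    M = sum(np.outer(v, v) for v in axes)
    return np.linalg.eigvalsh(M)[-1] / len(axes)

# n = 2 (x and z): M = diag(1, 0, 1), so the bound is 1/2
print(variance_bound([np.array([1.0, 0, 0]), np.array([0, 0, 1.0])]))
# n = 3 (x, y, z): M = identity, so the bound is 1/3
print(variance_bound([np.eye(3)[i] for i in range(3)]))
```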

### III.3 Bob’s measurement scheme

In choosing a measurement scheme, one must consider the ways in which measurement arrangements could be exploited by a cheating Alice. The bounds that we will derive are bounds on the average values of certain results, and thus, to experimentally determine these averages, the same measurements must be made many times, by both parties. So it is fair to say that Bob cannot realistically keep Alice from knowing the set of measurements he is to make. However, it is possible for Bob to prevent Alice from predicting the ordering of each measurement by randomising his choice of measurement in each run of the experiment.²

² This assumes that Alice does not have access to Bob’s random number generator, which is addressed in other works as the Free Will assumption Bell ().

For the EPR-steering functions introduced above we can assume that Alice sends Bob the same state in every run. This is because if Alice does not know the order in which Bob makes his measurements, each state she uses is equally likely to face any one of Bob’s measurements. Therefore, her best option is to use the one state that has the highest average correlation function over all of Bob’s measurements. If there exists more than one state which fulfils this requirement, then Alice may just as easily use a combination of them, but cannot possibly attain a maximum correlation function any higher than is attainable by using just one such state. Thus, in attaining the maximal bound $C_n$ or $D_n$, there is never any advantage for Alice to use more than one state per experiment.

Conversely, it is a good strategy for Bob to choose as many different measurements as possible (a large $n$ value). Indeed, if Bob chooses just one measurement orientation, then Alice will be able to align the spin axis of his state with that measurement axis every time. In that case, Alice could perfectly imitate the results of entanglement, obtaining $S_1 = 1$ or, equally, $T_1 = 1$.

For similar reasons, it is clear that if Bob chooses measurements that are close to each other on the Bloch sphere, this will lead to a higher $C_n$ or $D_n$ than if he chose measurements farther away from each other. This is bad for an honest Alice, as it means she would have to use a Werner state with higher purity $\mu$ to beat that bound, which is not what Bob wants. Therefore it seems likely that a good choice for Bob is to make his measurements regularly spaced on the Bloch sphere. For this reason, we will follow Refs. Saunders (); Bennett () in choosing sets of measurement orientations that are related to the vertices of the three-dimensional Platonic solids.

There are only five three-dimensional Platonic solids: those with 4, 6, 8, 12, and 20 vertices. It is important to note that every Platonic solid except that with four vertices (the tetrahedron) has vertices such that every vertex has a diametric opposite – an antipode – as some other vertex on the solid. This property is essential if we are to associate vertices with qubit observables, as every observable $\hat\sigma_{\mathbf{v}}$ has two eigenvectors which are antipodal on the Bloch sphere. Thus we will characterise each set of measurements by the number of measurement axes, which will be half of the number of vertices – and will be denoted $n$. However, since the tetrahedron possesses four vertices, none of which are antipodes, we cannot use it to define an $n = 2$ set of measurement axes (we could use its four different vertex directions to define an $n = 4$ set, but that would duplicate the axes of the cube). But we can still use another shape for $n = 2$: the square – a shape with four vertices, each of which is an antipode of one other vertex, and which are all regularly spaced, albeit in two dimensions. Thus, our measurement schemes involve Bob’s chosen sets of measurements being of size $n = 2$, 3, 4, 6, or 10.
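For concreteness, the five measurement sets can be built from standard vertex coordinates of the solids, keeping one direction per antipodal pair ($\varphi$ is the golden ratio; the orientation of each solid is an arbitrary conventional choice, and the names below are ours):

```python
import numpy as np

PHI = (1 + np.sqrt(5)) / 2  # golden ratio

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

# One measurement direction per antipodal pair of vertices.
MEASUREMENT_SETS = {
    2:  [unit(v) for v in [(1, 0, 0), (0, 0, 1)]],               # square
    3:  [unit(v) for v in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]],    # octahedron
    4:  [unit(v) for v in [(1, 1, 1), (1, 1, -1),
                           (1, -1, 1), (1, -1, -1)]],            # cube
    6:  [unit(v) for v in [(0, 1, PHI), (0, -1, PHI),
                           (1, PHI, 0), (-1, PHI, 0),
                           (PHI, 0, 1), (PHI, 0, -1)]],          # icosahedron
    10: [unit(v) for v in [(1, 1, 1), (1, 1, -1),
                           (1, -1, 1), (1, -1, -1),
                           (0, 1 / PHI, PHI), (0, -1 / PHI, PHI),
                           (1 / PHI, PHI, 0), (-1 / PHI, PHI, 0),
                           (PHI, 0, 1 / PHI), (PHI, 0, -1 / PHI)]],  # dodecahedron
}

# Each set with n >= 3 is a spherical 2-design: sum_k v_k v_k^T = (n/3) I.
for n, axes in MEASUREMENT_SETS.items():
    M = sum(np.outer(v, v) for v in axes)
    print(n, np.allclose(M, (n / 3) * np.eye(3)))
```

The $n = 2$ set is the exception: there $M = \mathrm{diag}(1, 0, 1)$, since the square spans only two spatial dimensions.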

As referenced above, these Platonic solid measurement schemes have been previously employed, with the bound $C_n$ having been calculated in those publications Saunders (); Bennett () for $n = 2$, 3, 4, 6, and 10. In those papers, it was calculated that (when Alice reports results with perfect efficiency) the value of $C_n$ was a monotonically decreasing function of $n$ (and strictly decreasing for every point except for $n = 4$). Specifically, it was found that $C_2 = 1/\sqrt{2} \approx 0.7071$, $C_3 = C_4 = 1/\sqrt{3} \approx 0.5774$, $C_6 = (1+\sqrt{5})/6 \approx 0.5393$, and $C_{10} = (3+\sqrt{5})/10 \approx 0.5236$.

However, the additive inference variance bound, $D_n$, has not been previously calculated for this scheme. In calculating the bound from Eq. (14), we find that $D_2 = 1/2$ and $D_3 = 1/3$, but also that $D_n = 1/3$ for every $n \geq 3$. Although $D_3 < D_2$, as with $C_n$, $D_n$ does not decrease with $n$ beyond $n = 3$. This happens because each measurement set for $n \geq 3$ forms a spherical 2-design Hong:1982 (), which, for the form of $M$, means that each bound for $n \geq 3$ requires an equally high $\mu$-value to violate.

A simplified explanation for this can be drawn from the observation that for $n = 2$, we use measurement vectors $\hat{\mathbf{x}}$ and $\hat{\mathbf{z}}$, and thus $M = \mathrm{diag}(1, 0, 1)$, giving $D_2 = \lambda_{\max}(M)/2 = 1/2$. The same behaviour, but in three dimensions, occurs for the $n = 3$ measurement vectors, for which $M = \mathbb{1}$. The same pattern obviously does not hold for the rest of the measurement sets, since they are all in three spatial dimensions, and none of the other $M$ matrices are equal to the identity; but the rest of the measurement sets are spherical 2-designs, meaning that for every other $n$-value, $M = (n/3)\,\mathbb{1}$. For this reason, each of these measurement sets exhibits the same behaviour ($D_n = 1/3$) in a test of EPR-steering. However, this is not the case for matrices constructed from only a portion of the vectors in some set, a fact which will become significant in Sec. V.

### III.4 Alice’s cheating strategies

Having considered Bob’s measurement strategies, we must also consider the optimal ways in which a dishonest Alice can attempt to exploit them. It is from just such an analysis that we can determine whether the EPR-steering bounds derived above are tight. That is, we will determine whether Alice can saturate the bound by sending Bob a state $\hat\rho_\xi$ (i.e. a LHS) drawn from some ensemble over $\xi$.

Intuitively, the heart of Alice’s cheating strategies in the situations we consider is to orient Bob’s state as closely as possible to as many of his measurements as possible. This is because for maximising or , Alice needs Bob’s expectation values to be as large as possible.

This is easily shown for $S_n$, whose maximal bounds are calculated from $\lambda_{\max}\left[\frac{1}{n}\sum_k A_k\hat\sigma^B_k\right]$, into which we shall substitute $\hat\sigma^B_k = \mathbf{v}_k\cdot\hat{\boldsymbol\sigma}$, where $\mathbf{v}_k$ is the vector orientation of $\hat\sigma^B_k$, with $|\mathbf{v}_k| = 1$. We will use the relation that for a general operator $\hat O$ on a state $\hat\rho$, $\mathrm{Tr}[\hat\rho\hat O] \leq \lambda_{\max}[\hat O]$, with the maximum being obtained by a pure state. Defining the state of Bob’s system as $\hat\pi(\mathbf{s})$, a pure state aligned with orientation $\mathbf{s}$, we find that

$$\frac{1}{n}\sum_{k=1}^{n} A_k\,\mathrm{Tr}\left[\hat\pi(\mathbf{s})\,\mathbf{v}_k\cdot\hat{\boldsymbol\sigma}\right] = \frac{1}{n}\,\mathbf{s}\cdot\sum_{k=1}^{n} A_k\mathbf{v}_k \leq \frac{1}{n}\left|\sum_{k=1}^{n} A_k\mathbf{v}_k\right|, \tag{15}$$

where the equality is obtained only when $\mathbf{s}$ is parallel to $\sum_k A_k\mathbf{v}_k$. This proof also shows that the bound derived in Eq. (7) is indeed a tight (attainable) bound.

Note that the maximal value of $C_n$ (the EPR-steering bound) is derived from a maximisation over both $\{A_k\}$ and $\mathbf{s}$, and the above maximisation, Eq. (15), is only for the maximal value of $S_n$ that is attainable with a given $\{A_k\}$. Conversely, it would be even simpler for a cheating Alice to optimise $\{A_k\}$ to obtain the maximal $S_n$ for a given $\mathbf{s}$. Knowing the orientation $\mathbf{s}$ of Bob’s state, Alice can easily deduce that for every measurement within $\pi/2$ radians of $\mathbf{s}$, Bob’s average result will be positive, and for every measurement more than $\pi/2$ radians from $\mathbf{s}$, Bob’s average result will be negative. To drive $S_n$ towards its maximum, Alice should choose $A_k = +1$ when $\mathbf{s}\cdot\mathbf{v}_k \geq 0$, and $A_k = -1$ when $\mathbf{s}\cdot\mathbf{v}_k < 0$.

Clearly, the optimal values of $\{A_k\}$ corresponding to any $\mathbf{s}$ are easily calculable, and the optimal orientation of $\mathbf{s}$ is almost as easily calculable for any sign permutation $\{A_k\}$. However, $\mathbf{s}$ can take any orientation on the Bloch sphere, whereas there are only a finite number, $2^n$, of sign permutations. Therefore, a computational search for optimal cheating strategies is most efficient when calculating the optimal $\mathbf{s}$ for every $\{A_k\}$, just as denoted in Eq. (7).
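This search can be sketched as follows: for each sign assignment the optimal Bloch vector is parallel to $\sum_k A_k\mathbf{v}_k$, attaining the value $|\sum_k A_k\mathbf{v}_k|/n$ (names are ours; shown for the octahedral axes):

```python
import itertools

import numpy as np

def best_cheat(axes):
    """Brute-force the 2^n sign assignments {A_k}. For each, the best
    orientation of the state Alice sends is along w = sum_k A_k v_k,
    and the correlation function attains |w| / n."""
    n = len(axes)
    best_val, best_dir = -np.inf, None
    for signs in itertools.product([1, -1], repeat=n):
        w = sum(a * v for a, v in zip(signs, axes))
        val = np.linalg.norm(w) / n
        if val > best_val:
            best_val, best_dir = val, w / np.linalg.norm(w)
    return best_val, best_dir

val, direction = best_cheat([np.eye(3)[i] for i in range(3)])
print(val)        # 1/sqrt(3), the n = 3 bound
print(direction)  # (1,1,1)/sqrt(3): a face-centred direction of the octahedron
```

Consistent with the discussion below, the returned orientation for the octahedral set points at the centre of a face.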

Notably, when calculating the optimal cheating ensembles for each Platonic solid, it is found that the optimal orientations of $\mathbf{s}$ are always either face-centred or vertex-centred. This is intuitively understandable by observing the symmetries of the Platonic solids.

For the $n = 3$ and $n = 4$ solids, $n$ is equal to the number of corners on each face, and a point as close as possible to all corners of a face will be equidistant from them. For $n = 3$ and $n = 4$, it is easy to see that a point equidistant from those vertices will be face-centred (the same logic holds for $n = 2$, whose shape is a square, its optimal ensembles being defined by the point in the middle of any edge). However, this does not hold for the $n = 6$ and $n = 10$ solids, whose faces have only three and five corners, respectively; thus for $n = 6$ and $n = 10$, there is no point that can be equidistant from all the relevant vertices. But intuitively, a point in the centre of a cluster of vertices will still be an optimal orientation if those vertices are as close together as possible. It turns out that the best arrangement of vertices for $n = 6$ and $n = 10$ will always be centred upon one of those vertices.

The reader may have deduced that this reasoning does not lead to just one optimal ensemble for any measurement set. Indeed, as shown in Fig. 1, there are multiple equally optimal cheating ensembles for each shape, their multiplicity equalling the number of the solid’s faces for $n = 3$ and $n = 4$, and the number of the solid’s vertices for $n = 6$ and $n = 10$.

However, the above ensembles do not strictly describe the optimal cheating strategies for $T_n$. The proof that the bounds on $T_n$ are also tight is somewhat different. The actual values of Alice’s results (if choosing between $+1$ and $-1$) are of no importance in any cheating strategy for maximising $T_n$, as long as Alice chooses deterministically for any given $\xi$. Recall that the $D_n$ bounds are defined by the maximal eigenvalues of $M$, and any eigenvectors that attain these eigenvalues are used to define $\mathbf{s}$, the optimal orientations of Bob’s state. Since $M \propto \mathbb{1}$ for the Platonic solids ($n \geq 3$), any vector is an eigenvector, and hence any pure state is equally optimal for $T_n$.

The $S_n$ and $T_n$ functions we have discussed so far are those derived under the assumption that Alice submits results in every run of the experiment. We now turn to the issue of dealing with loss.

## IV Dealing with Loss

In all of the above analysis, the only experimental imperfection we discussed was that the entangled state may not be pure. Typically a much greater concern, with photons at least, is inefficient detectors and other types of loss. The focus of this paper is to derive rigorous EPR-steering inequalities which are robust in the presence of the inevitable experimental detection inefficiencies and other losses. We discuss four ways of dealing with loss, which we identify as denial, anger, depression, and hope.

### IV.1 Dealing with loss: Denial

In most experiments involving photon polarisation, one has a very good knowledge of the system under investigation, and of the mechanism of the detector. This makes it possible to draw reasonable conclusions about which results are more likely to have been “omitted” when a photon is not detected. Indeed, by careful construction it is possible to ensure that the probability of omission is independent of what the result “would have been.”

Thus, to draw conclusions about the behaviour of a system under study with imperfect detectors, it is common (and, a priori, reasonable) to post-select one’s results such that all null results are ignored, since null results are generally caused by properties of one’s detectors, and not the system under study. Properly applied, this leads to post-selected ensemble measurements more accurately reflecting the properties of the initial ensemble. However, because omitted results, by definition, cannot be known, this conclusion cannot be proven. Thus, it is referred to as the fair sampling assumption (FSA). It is still a useful assumption, as it is based on accepted physical principles. For experiments that involve calculating probability distributions or averages (as very many experiments on quantum properties do), assuming that the omitted results obey a known probability distribution is often essential for obtaining meaningful results. This has been employed in a number of papers on tests of EPR-steering Saunders (); Parsimonious (), where closure of the detection loophole was not a concern.

However, the FSA is an assumption based on the principles of quantum mechanics, and therefore should not be applied as a part of experiments intended to test the validity of quantum mechanics itself. Therefore, to apply this kind of post-selection to the results of an EPR-steering experiment would compromise its rigour. Similarly, if we are using EPR-steering to prove the existence of entanglement when one party (Alice) is genuinely untrustworthy, then by using the FSA we are in denial about the problem of loss, as we will now show.

Making the fair sampling assumption with a potentially dishonest Alice allows a cheating Alice to violate EPR-steering bounds that incorporate the FSA. This can be seen by reconsidering the expression for $S_n$, Eq. (2),

$$S_n = \frac{1}{n}\sum_{k=1}^{n}\left\langle A_k\,\langle\hat\sigma^B_k\rangle_{A_k}\right\rangle, \tag{16}$$

and its corresponding bound $C_n$, Eq. (7),

$$S_n \leq C_n = \max_{\{A_k\}}\;\lambda_{\max}\left[\frac{1}{n}\sum_{k=1}^{n} A_k\,\hat\sigma^B_k\right]. \tag{17}$$

If Alice claims not to have a perfect detector, she has the option of claiming that she did not receive a result from her detector on certain measurements. In these cases, Bob (who, for now, we assume has a perfect detector) will discard Alice’s null results when calculating $S_n$ if he is making the FSA. This post-selection of Alice’s results rules out a rigorous test of EPR-steering. To illustrate this, consider a result of Eq. (16) for any case in which the terms $A_k\langle\hat\sigma^B_k\rangle_{A_k}$ are different for (at least some) different values of $k$. If Alice chooses to report null results for the lowest-valued terms, and Bob post-selects out these null results, $S_n$ will be higher than if Alice submitted results with perfect efficiency. The same goes for the cases of Alice submitting nulls for any number of measurements whose expectation values are less than the maximum value. Indeed, if $A_k\langle\hat\sigma^B_k\rangle_{A_k}$ equals unity for some $k$ (i.e., the state Alice sends is aligned with one of Bob’s measurement axes $\mathbf{v}_k$), then Alice could feign EPR-steering arbitrarily well when Bob is using a FSA, by omitting all results except those for which Bob’s measurement is aligned with her state. Thus, a FSA would be of benefit to an honest Alice, but would enable a dishonest Alice to cheat. These comments apply equally to the nonlinear inequality [Eq. (14)] as well.

### IV.2 Dealing with loss: Anger

The inequalities derived above do not accommodate the possibility of Alice having an imperfect detector, and submitting any null results. One way to keep these inequalities is for Bob to require a result of Alice on every measurement – Alice is not allowed to submit any nulls. If Alice is cheating, this will mean that her best option is to submit results according to her optimal strategies for perfect efficiency. If she tries to claim a null, and Bob demands a result nonetheless, a cheating Alice still has a foreknowledge of Bob’s state that allows her to calculate an expectation value for his result, and make just as good an estimate of what result she should choose to maximise their correlations. On the other hand, if Alice is not cheating, she never has any knowledge of Bob’s state except for that obtained from her own measurement result. So if she receives a null result, and Bob demands one of her anyway, she will have no way of estimating results that optimise the correlation function. Therefore, on average, she might as well choose $A_k = +1$ half of the time, and $A_k = -1$ half of the time. We imagine that being forced to make up random results would make an honest Alice angry, hence our name for this approach. If Bob chooses to do this, it will restore the rigour of the EPR-steering test, but it will mean that for an honest Alice with detector efficiency $\epsilon$, demonstration of EPR-steering will require $\mu > C_n/\epsilon$ or, equivalently, $\epsilon > C_n/\mu$. This can be seen by separately calculating the contributions to $S_n$ and $T_n$ from when Alice does and does not receive a result, as we now show.

For an honest Alice with an efficiency of , we can write

(18)

We can calculate the probability of Bob’s results conditioned upon Alice’s results by separating the cases where Alice does and does not register a detection. We distinguish these cases as and , respectively, and use

We can calculate the values of this expression, knowing that ; and that and . From the nature of our entangled states, we can predict for , and, as mentioned above, may as well be for also. We also know that , and , and from these values we can calculate that the value of above is equal to . Since , we do not bother distinguishing between those cases in calculating the prefactors, whereupon we find that for our honest Alice. For the case of , this agrees with the previously calculated result of Eq. (4).
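The efficiency scaling described above can be illustrated numerically. The following sketch is a minimal illustration, not the paper’s own calculation: it assumes the standard parametrisation in which a Werner state with parameter `mu` gives agreement probability `(1 + mu)/2` when both parties measure along a common axis, and uses `eps` for Alice’s detector efficiency. It shows why forcing an honest Alice to answer every run scales her correlation by her efficiency:

```python
def anger_correlation(mu, eps):
    """Correlation between Alice's reported result and Bob's outcome
    when an honest Alice must answer every run ("anger" strategy)."""
    # Detection (probability eps): results agree with probability
    # (1 + mu)/2 and disagree with probability (1 - mu)/2, so the
    # conditional correlation is mu.
    p_agree = (1 + mu) / 2
    detected = (+1) * p_agree + (-1) * (1 - p_agree)
    # No detection (probability 1 - eps): Alice guesses +1 or -1
    # uniformly at random, independently of Bob, so the conditional
    # correlation is zero.
    guessed = 0.0
    return eps * detected + (1 - eps) * guessed
```

For instance, a Werner parameter of 0.8 at 50% efficiency yields a correlation of 0.4, so violating a bound requires the product of efficiency and state parameter to exceed that bound.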

By contrast, a dishonest Alice will always have some control over the value of , and in general can make it such that whether she claims or . Thus, she can submit results such that always attains its maximum value of . Therefore, to demonstrate EPR-steering under these conditions, an honest Alice would need and such that .

For the bounds on , we repeat this working:

Using the statistics given above, the value of here can be calculated to be , as above, and the value of obtained by an honest Alice under the restriction that will be . Again, for the same reasons as mentioned above, a cheating Alice will be able to attain the EPR-steering bound, , on this criterion as well. Therefore, to demonstrate EPR-steering with this criterion (and the restriction that ), an honest Alice would need and such that .

Figure 2 displays the values of required to violate the and bounds under this “anger” strategy, as solid lines. As mentioned in the previous section, the bounds are all equal for , and this remains true for those bounds in Fig. 2. One should also note that for (and as well), and thus the values of and required to violate the and bounds are, respectively, equal to those required to violate the and bounds. The nonlinear tests necessarily fail for . The linear tests necessarily fail for , because it is known that key-3 ().

### iv.3 Dealing with loss: Depression

An alternative approach, which also maintains the rigour of the EPR-steering tests, is to allow Alice to submit null results, but to incorporate these results into the EPR-steering function as zero values of any EPR-steering function that an honest Alice is contributing to (rather than having Alice choose equally randomly, as with the “anger” strategy above).

However, even though an honest Alice can submit null results, because this strategy still does not entail any post-selection, her results will still have a reduced (-dependent) correlation with Bob’s results. We would expect an honest Alice to be depressed by this realisation, hence our name for this strategy. This reduction can be calculated just as above, with Eqs. (18) and (IV.2), by representing Alice’s null results by , for mathematical convenience (the additive inference variance criterion has previously been used with this approach for and Vienna (); UQ ()).

Note that this does not require any alterations to, or reformulations of, our EPR-steering criteria, as their derivations hold for any values of satisfying . Until now, we have restricted to because a cheating Alice who does not want to be caught would do the same.

For in this case, an honest Alice has slightly different measurement statistics from those used above – specifically, now that null results are an option, it is no longer the case that , since an honest Alice would choose , and therefore . This difference means that the average value of in this case will be , rather than . However, the prefactor is no longer for , but is now for . From this, we regain a factor of , resulting in the value for an honest Alice being again, just as for the “anger” strategy. An Alice using the linear inequality would indeed be depressed by this result.

In calculating , however, the “depression” strategy actually gives a different, and better, correlation than the “anger” strategy. We again calculate to be instead of , but now our recalculation of Eq. (IV.2) gives , with the factor coming from the prefactor, for an honest Alice.

Thus far, we have used to denote the detector efficiency of an honest Alice. A cheating Alice uses no detector, and therefore suffers no experimental inefficiency. However, she is still capable of submitting null results, and Bob has no way of telling whether such nulls (or any results) are genuine, except by calculating EPR-steering criteria. Thus, Bob will simply regard the ratio of Alice’s non-null results to her total number of results as her apparent efficiency. From here on, this is what we will mean by .

Since a cheating Alice does not determine her values experimentally, but rather chooses them strategically, the same should be true of her null results. As we mentioned in Sec. IV A, if a cheating Alice submits null results for any measurements, she can do so such that her terms correspond to the lowest values of predicted for Bob’s measurements upon . Thus, the terms to which Alice assigns could be selected such that their average is less than (and equivalently, ), meaning that the average of the terms would be greater than (and their square would be greater than ). Thus, a cheating Alice could easily obtain (and ).

However, even in the presence of this kind of cheating strategy, since and represent optimal strategies for , it can be deduced that a cheating Alice is still bound by (and ): if it were possible for a portion of the results to average out to a value greater than (or ), then whatever strategy led to this result would work better at than the (or ) strategy does at . Therefore, since (and ) is defined by the optimal strategy at , only the presence of EPR-steering could yield or . Thus, without calculating more sophisticated loss-tolerant bounds upon or , under this regime an honest Alice could only convince Bob of EPR-steering if she possessed and such that or .

Thus, when Bob allows , the nonlinear inequalities become more powerful than when he does not (as in the “anger” strategy), as shown in Fig. 2. Moreover, for , the nonlinear tests for become more powerful than any of the linear tests we have constructed up to . Nevertheless, all the tests still fail for , as we can see from Fig. 2.

### iv.4 Dealing with loss: Hope

Clearly, it is important to take Alice’s detector efficiency into account when performing tests of EPR-steering. Although dealing with null results is difficult for a Bob who wishes to be completely rigorous, the fact that more powerful bounds can be obtained by allowing null results would surely give him hope that yet more powerful EPR-steering bounds might be calculated by taking further into account. Indeed, we will see that lower EPR-steering bounds are possible for both the and functions, upon a more detailed analysis of how these bounds should be calculated when .

The basic idea, developed in Sec. V, is that it is possible to calculate to exactly what extent a cheating Alice can use a simulated degree of loss to imitate quantum correlations, and from this obtain EPR-steering bounds as functions of both and . Such bounds have been previously calculated and experimentally employed for the additive correlation bound Bennett (). In Sec. V we present that calculation in detail and also calculate the best bounds using , and compare these two types of EPR-steering criteria, for , 3, 4, 6, and 10 measurements.

## V Loss-Tolerant EPR-Steering Inequalities

As discussed above, reporting null results can be somewhat advantageous for a cheating Alice if she does so with knowledge of the state she sends to Bob, and the setting he tells her to use in a given run. Say that Alice plans to declare non-null results only for of the possible settings Bob may communicate to her. Then her optimal strategy will be to send Bob a state that is better aligned with the measurements that Bob will make when Alice declares a non-null result. This strategy will be referred to as a “deterministic strategy”.

Note that if she used only a single deterministic strategy, Alice could only feign efficiencies of . However, Alice need not constrain herself to a single value of for the entire experiment (different runs may elicit different apparent efficiencies). If Alice uses multiple deterministic strategies – a “nondeterministic strategy” – she is able to feign any measurable efficiency between and .
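The interpolation of apparent efficiencies can be sketched as a simple convex combination. The function and parameter names below are hypothetical illustrations: `eff_low` and `eff_high` stand for the apparent efficiencies of two deterministic strategies (e.g. m/n for two values of m), and the returned weight is the probability of using the higher-efficiency strategy:

```python
def mixing_probability(eps, eff_low, eff_high):
    """Weight p with which Alice should use the higher-efficiency
    deterministic strategy so that the overall apparent efficiency,
    p * eff_high + (1 - p) * eff_low, equals the target eps."""
    assert eff_low <= eps <= eff_high and eff_low < eff_high
    return (eps - eff_low) / (eff_high - eff_low)
```

For example, mixing strategies of apparent efficiencies 1/3 and 2/3 (the one- and two-setting strategies of a three-setting experiment) with equal weight feigns an overall efficiency of 1/2.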

### v.1 Additive correlation bound

The maximal bounds on obtainable by using deterministic strategies will be labelled . As we will see later on, some deterministic bounds on are not true bounds, because a dishonest Alice can do better with a nondeterministic strategy, even with . The values of are calculated from

(20)

where , and the sum is effectively only over members of because the set is subject to the condition that for elements, and for the remaining elements.
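Since the symbols of Eq. (20) are not reproduced here, the following is only an illustrative sketch of this kind of optimisation, under two assumptions that are ours rather than quoted from the text: that the linear criterion averages Alice’s declared results against Bob’s spin outcomes, and that a deterministic cheating Alice sends a pure state with Bloch vector v and reports the sign of v · u_k for each non-null setting. The optimal v is then proportional to a signed sum of the chosen measurement axes, so small cases can be brute-forced over sign patterns:

```python
import itertools
import math

def det_bound(axes, subset):
    """(1/m) * max_v sum_{k in subset} |v . u_k| over unit vectors v,
    brute-forced using the fact that the optimal v is a normalised
    signed sum of the chosen axes."""
    m = len(subset)
    best = 0.0
    for signs in itertools.product((1, -1), repeat=m):
        s = [sum(sg * axes[k][i] for sg, k in zip(signs, subset))
             for i in range(3)]
        best = max(best, math.sqrt(sum(c * c for c in s)))
    return best / m

# Three orthogonal measurement axes (the octahedron configuration).
AXES3 = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
c1 = det_bound(AXES3, (0,))        # vertex-centred: 1
c2 = det_bound(AXES3, (0, 1))      # edge-centred:  1/sqrt(2)
c3 = det_bound(AXES3, (0, 1, 2))   # face-centred:  1/sqrt(3)
```

The three values match the vertex-, edge-, and face-centred optimal ensembles discussed for the three-setting case below.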

We can use to calculate the nondeterministic bound on the non-post-selected correlation function, which is

(21)

where the sum over accounts for all possible optimal deterministic strategies that Alice could use. There is never any benefit for Alice in using suboptimal deterministic strategies, so they are not considered, and each deterministic strategy can be indexed by . The term denotes the probability with which Alice chooses each deterministic strategy, so it is constrained by and . From this form, it is clear that .
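The convex mixing in Eq. (21) can be made concrete with a small sketch. The names and numbers below are assumptions on our part: the deterministic values used are the three-setting figures 1, 1/√2, and 1/√3 obtained from the subset optimisation sketched earlier, with each strategy contributing its value scaled by its apparent efficiency once nulls are counted as zeros. Because the maximisation is a linear program with a single equality constraint, the optimum mixes at most two deterministic strategies, so scanning all pairs suffices:

```python
import itertools
import math

def nondet_bound(points, eps):
    """Best non-post-selected value at apparent efficiency eps from
    mixing deterministic strategies.  points = [(efficiency, value)];
    the optimum of this linear program lies on at most two vertices."""
    best = 0.0
    for (e1, v1), (e2, v2) in itertools.combinations(points, 2):
        if e1 != e2 and min(e1, e2) <= eps <= max(e1, e2):
            t = (eps - e1) / (e2 - e1)
            best = max(best, v1 + t * (v2 - v1))
    for e, v in points:
        if e == eps:
            best = max(best, v)
    return best

# Three-setting illustration: strategy m has apparent efficiency m/3
# and contributes (m/3) * C(m) once nulls are counted as zeros.
C = {1: 1.0, 2: 1 / math.sqrt(2), 3: 1 / math.sqrt(3)}
pts = [(0.0, 0.0)] + [(m / 3, (m / 3) * C[m]) for m in (1, 2, 3)]
b = nondet_bound(pts, 0.5)   # bound at apparent efficiency 1/2
post_selected = b / 0.5      # divided by the efficiency, as in Eq. (22)
```

Note that `b` never exceeds the full-efficiency value C(3) ≈ 0.577, consistent with the observation in the “depression” section that counting nulls as zeros keeps a cheating Alice below the full-efficiency bound, whereas the post-selected bound `b / eps` exceeds it.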

When is calculated so as to incorporate Alice’s null results in its averaging, thus avoiding any post-selection, its optimal bounds are described by Eq. (21). However, such post-selection is still convenient for many experiments. To make our bounds easily comparable with post-selected results (and also to display them as lower bounds on the necessary , rather than on ), the numerical bounds that we present will be formatted as maximal bounds on when it incorporates post-selection upon Alice reporting a non-null result. This is easily accomplished, as such bounds are simply given by

(22)

Performing this post-selection does not amount to making the fair sampling assumption, as these post-selected bounds still require Bob to keep track of Alice’s apparent efficiency. That is, is a completely rigorous upper bound on the post-selected correlation function that is obtainable in a no-steering model.

In a quantum mechanical model where Alice is honest, and she and Bob share a Werner state as in Eq. (1), the calculated value of the post-selected will be . This means that EPR-steering can only be demonstrated (by ) using states with values of , the lines in Fig. 3. Note that for all , EPR-steering can never be demonstrated for . With the loss-tolerant inequalities we derive here, that bound can be attained for , unlike all previous approaches, in which the bound was .

If Bob uses detectors whose efficiency is less than perfect, it is important to note that he does not need to add any assumptions about what Alice is doing, because the optimal LHS model (optimal for imitating EPR-steering) still cannot permit Alice to detect or control when Bob’s detector fails to yield a result. The FSA is valid for Bob’s detector because his measurement efficiency cannot be affected by LHVs in any way that could assist a dishonest Alice in violating the EPR-steering inequality.

### v.2 Alice’s inefficient cheating strategies for

In Fig. 3, one should notice that in some places the nondeterministic bounds are strictly greater than the deterministic bounds (and are never lower). The deterministic bounds have been included (as markers) for the purpose of showing this.

For example, we can see that for , the nondeterministic bound does considerably better than the deterministic bound. This is because a nondeterministic combination of the deterministic and strategies performs better in this case. The three optimal deterministic strategies at , 0.4, and are shown in Fig. 4. We remind the reader that the measurements available to Bob are defined by the vertices of Platonic solids, so it is easy to represent them graphically by plotting the solids they describe (as in Fig. 1). These points are connected by the edges defining the shape to make them more obvious. The other coloured points (or lines, later) in the figure represent the spin orientations of the states that allow Alice to obtain the maximal deterministic bound (depending on her choice of ).

The ensemble orientations corresponding to the optimal deterministic strategies for are the points which are as close as possible to three vertices at a time. For , the optimal deterministic choices are those which are close to four nearby vertices. Note that two classes of states satisfy this requirement equally well: five on each face (similar to those for ) form one class, and one on each vertex forms the second. For , the optimal orientation must be close to five vertices, which corresponds to a face-centred orientation on the Platonic solid shown.

Upon close inspection of Fig. 3, it is apparent that an equal mixture of the deterministic and strategies is better than the strategy. This can be explained as follows. The distance between any optimal spin orientation and its closest measurements is greater for than it is for (on average), and greater for than it is for (on average). However, this separation increases more sharply from to than it does for to . This difference in gradient is enough that the average separation (between measurement and optimal spin orientations) is greater for than it is for a weighted average of and strategies – weighted such that every of the time, a strategy will be used, and every of the time, a strategy will be used. Using this weighting of terms, it can also easily be seen that the weighted average of and is greater than , and indeed corresponds to the nondeterministic bound . This is the case for every deterministic strategy that is below the line of nondeterministic bounds.

One may note from the above reasoning that our analysis of Alice’s optimal cheating ensembles is relatively unchanged from that derived in Eq. (15), except applied only to whichever measurements Alice chooses to be non-null. Indeed, this proof is still completely valid for determining Alice’s optimal deterministic strategies regardless of whether or not the set includes any elements. Her optimal choice, given whatever measurements remain non-null for Bob, is still calculated in the same way: Her optimal orientation of the ensemble is simply a spatial average of the orientations of the remaining measurements.

This is even more apparent for the simpler case of measurements, shown in Fig. 5. Here, we can see the face-centred optimal ensembles, corresponding to the spatial averages over all trios of measurements, and yielding the highest values of when Alice submits no null results. Similarly, the edge-centred ensembles correspond to the closest possible points to any two vertices at a time, for when Alice omits one measurement out of every three. Finally, there are the vertex-centred points, which are optimal when Alice submits non-null results for only one measurement out of every three.

The optimal ensembles for measurements, shown in Fig. 6, can be described in almost exactly the same way for the face-centred (), edge-centred (), and vertex-centred () sets of optimal ensembles. But there is the addition of the () ensembles here, which are the points as close as possible to three vertices at a time – a task which is not as natural as it was for , and this is reflected in Fig. 3, where we see that these ensembles are actually inferior to a nondeterministic mixture of the and ensembles.
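The inferiority just described can be checked numerically in the four-setting (cube) case. This sketch reuses the assumptions stated earlier (a deterministic strategy’s value is the best average of |v · u_k| over the chosen axes, attained by a normalised signed sum of those axes); the helper names are ours, not the paper’s:

```python
import itertools
import math

# The four measurement axes of the cube configuration: its body
# diagonals, normalised to unit length.
S3 = math.sqrt(3)
CUBE_AXES = [(s1 / S3, s2 / S3, 1 / S3) for s1 in (1, -1) for s2 in (1, -1)]

def det_bound(axes, subset):
    """(1/m) * max_v sum_{k in subset} |v . u_k|, brute-forced via the
    fact that the optimal v is a normalised signed sum of the axes."""
    m = len(subset)
    best = 0.0
    for signs in itertools.product((1, -1), repeat=m):
        s = [sum(sg * axes[k][i] for sg, k in zip(signs, subset))
             for i in range(3)]
        best = max(best, math.sqrt(sum(c * c for c in s)))
    return best / m

n = 4
# By the cube's symmetry, any subset of a given size gives the same bound.
pts = {m: (m / n, (m / n) * det_bound(CUBE_AXES, tuple(range(m))))
       for m in (2, 3, 4)}
(e2, v2), (e3, v3), (e4, v4) = pts[2], pts[3], pts[4]
# Value, at the three-setting strategy's efficiency, of the chord that
# mixes the two-setting and four-setting strategies:
chord_at_e3 = v2 + (e3 - e2) / (e4 - e2) * (v4 - v2)
```

Here `chord_at_e3` ≈ 0.493 exceeds `v3` ≈ 0.479, so an equal mixture of the two- and four-setting strategies beats the deterministic three-setting strategy, exactly as Fig. 3 indicates.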

At a glance, the optimal ensembles for measurements, in Fig. 7, are also quite comparable to in behaviour: the optimal ensembles for and 3, for obvious reasons. But the ensembles are vertex-centred rather than face-centred in this case, because the closest arrangement of six (non-antipodal) vertices on this shape will be centred on one of those vertices. Thus, the ensembles overlap with the ensembles. The and ensembles also overlap, because the closest arrangements of four vertices happen to be edge-centred. The ensembles, however, are less elegant, lying over an edge’s distance away from two vertices in each set of five for which Alice submits non-null measurements. It is not surprising that these are inferior to a nondeterministic mixing of the and strategies.

Figure 8 displays the optimal ensembles for measurements, including those shown in Fig. 4, for the purpose of illustrating the symmetries among the sets of arrangements. They will not all be discussed, but it is interesting to observe that there are overlaps between the and strategies, as well as the and strategies, which was also the case for the ensembles (and is, put simply, a result of the similarities between the vertex-centred arrangements of the closest measurements on these two shapes). Note also the trend that optimal states move closer to face-centred until , and then move back towards vertex-centred (which was also the case for ).

Something else of note is that for the Platonic solids – because they are defined by regularly spaced vertices – there is no particular measurement for which it is preferable for a cheating Alice to declare null results. When is high () and is low, it becomes important to choose nulls such that the non-null measurements are close to one another, but no specific measurement is ever more optimal to keep null or non-null. However, even if some measurements were preferable to omit, a smart cheating Alice would not exploit this, as it would lead to an unnatural pattern that a clever Bob would easily discover. Thus, for reasons similar to those constraining , it is strategically optimal for Alice to choose her sets of non-null results such that she submits equally often for each measurement (though this offers no numerical advantage or disadvantage).

From numerical optimisation, and calculation of the and bounds, it can be seen that these same behaviours are evident in the optimal cheating ensembles for , and indeed, the optimal deterministic strategies for (which we will discuss below) were even observed to be the same strategies as above in some cases (though still quite different in others, as we shall see).

### v.3 Additive inference variance bound

As shown earlier, the additive variance criterion is

(23)

The function of does not explicitly depend on Alice’s results, but implicitly uses them to calculate the expectation value of the above expression. The values of Alice’s results are only relevant for defining which of Bob’s measurements contribute to this expectation value, and do not directly affect its outcome. Thus, for the case of a cheating Alice, the values that she assigns to her non-null results have no effect on the value of , and therefore there is no optimal strategy for choosing them. However, is affected, just as is, by which results Alice chooses to be null, as Bob must still discard his measurements on those runs.

The bound calculation for inefficient measurements is remarkably similar to that for when [refer to Eqs. (13) and (14)]. The only real difference is in the calculation of the matrix of Bob’s measurements, , which we have reason to relabel as , seeing that

where the included value of is not relevant to the maximisation [as it is in Eq. (20)] except when its value is zero. Therefore, it has been included as , which will only have any effect on this expression when (the other values of being limited to ).

However, taking care to treat correctly, it can be seen that the deterministic bounds on are simply

(24)

where the maximisation over values, given , is only over the choice of which are zero and which are nonzero. The expression above has been arranged so that its similarity to Eq. (20) is most apparent. Indeed, this form of can be used to calculate deterministic bounds as comparably as possible to .

Accordingly, the nondeterministic bounds on are calculated from weighted averages of , as