Anomalous Weak Values Are Proofs of Contextuality

Matthew F. Pusey
Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, ON N2L 2Y5, Canada
November 12, 2014

The average result of a weak measurement of some observable $A$ can, under post-selection of the measured quantum system, exceed the largest eigenvalue of $A$. The nature of weak measurements, as well as the presence of post-selection and hence possible contribution of measurement-disturbance, has led to a long-running debate about whether or not this is surprising. Here, it is shown that such “anomalous weak values” are non-classical in a precise sense: a sufficiently weak measurement of one constitutes a proof of contextuality. This clarifies, for example, which features must be present (and in an experiment, verified) to demonstrate an effect with no satisfying classical explanation.

In 1988 Aharonov, Albert and Vaidman explained “How the result of a measurement of a component of the spin of a spin-$\tfrac12$ particle can turn out to be 100.” Aharonov et al. (1988) Defining the weak value of an observable $A$ for a quantum system prepared in state $|\psi\rangle$ and post-selected on giving the first outcome of $\{|\phi\rangle\langle\phi|,\ I-|\phi\rangle\langle\phi|\}$,

$$A_w = \frac{\langle\phi|A|\psi\rangle}{\langle\phi|\psi\rangle},\qquad(1)$$

they exhibited an $A$, $|\psi\rangle$ and $|\phi\rangle$ on a qubit for which $A_w = 100$. The motivation for weak values starts by considering a von Neumann model von Neumann (1996) of the measurement of $A$. The strength of the interaction between the system and “pointer” is then drastically reduced, such that the pointer reading is correlated only slightly with $A$. The weak value then arises as an approximation of the average pointer reading to first order in the interaction strength.
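To make eq. 1 concrete, here is a minimal numerical sketch of such a qubit example. The particular pre- and post-selection below are illustrative choices (a post-selection tuned so that the weak value of $\sigma_x$ is 100), not necessarily the states used by Aharonov et al.:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)  # sigma_x, eigenvalues +1 and -1

def weak_value(A, psi, phi):
    """Weak value A_w = <phi|A|psi> / <phi|psi> (eq. 1)."""
    return (phi.conj() @ A @ psi) / (phi.conj() @ psi)

psi = np.array([1, 0], dtype=complex)                 # pre-selection |0>
alpha = np.arctan(100.0)                              # post-selection nearly orthogonal to |psi>
phi = np.array([np.cos(alpha), np.sin(alpha)], dtype=complex)

Aw = weak_value(sx, psi, phi)
print(Aw.real)   # ~ 100, far outside the eigenvalue range [-1, 1] of sigma_x
```

The key feature is that the post-selection is nearly orthogonal to the pre-selection, which makes the denominator of eq. 1 small and inflates the weak value far beyond the eigenvalue range.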

Weak values outside the eigenvalue range of $A$ are termed anomalous. Aside from possible practical applications (see Dressel et al. (2014a) and references therein), it has been suggested that such values have foundational significance. For example, both their theoretical prediction and experimental observation are said to shed light on “quantum paradoxes” Aharonov and Vaidman (1991); Resch et al. (2004); Aharonov et al. (2002); Lundeen and Steinberg (2009); Aharonov et al. (2013); Denkmayr et al. (2014) and even the nature of time Aharonov et al. (2010).

However, there is still no consensus on the most basic question about anomalous weak values: to what extent do they represent a genuinely non-classical effect? The lesser the extent, the more severe the limitations on their practical and foundational significance.

The arguments that anomalous weak values are non-classical have often been somewhat heuristic, appearing to depend on issues such as the extent to which weak measurements should be called measurements at all Leggett (1989); Aharonov and Vaidman (1989). Perhaps the most rigorous evidence provided so far is a connection between anomalous weak values and the failure of a notion of classicality called “macroscopic realism” Williams and Jordan (2008); Goggin et al. (2011); Dressel et al. (2011). On the other hand, classical models have been given that reproduce various aspects of the phenomena Dressel and Jordan (2012); Bliokh et al. (2013); Ferrie and Combes (2014).

The question can be made precise by asking if anomalous weak values constitute proofs of the incompatibility of quantum theory with non-contextual ontological models Spekkens (2005), or equivalently Spekkens (2008) if anomalous weak values require negativity in all quasi-probability representations. This was conjectured to be the case in Tollaksen (2007). Here I will prove it. Interestingly, the proof hinges on two issues already identified in the literature: what do weak measurements measure, and how much do they disturb the system? It transpires that both questions have clear answers in the setting of a non-contextual ontological model, but the particular information-disturbance tradeoff of the weak measurements in quantum theory makes these answers irreconcilable with the anomaly.

Let us begin by specifying exactly what is meant by an anomalous weak value. Inspection of eq. 1 shows that $A_w$ need not be real even though $A$ is Hermitian. A complex number will certainly not be a convex combination of the eigenvalues of $A$, and so this might be seen as surprising. However, the imaginary part of $A_w$ is manifested very differently from the real part Jozsa (2007). Indeed complex weak values are easily obtained even in the Gaussian subset of quantum mechanics, which has weak measurements (with the same information-disturbance tradeoff utilised here) and yet admits a very natural non-contextual model Bartlett et al. (2012). Hence I will call a weak value anomalous only when $\mathrm{Re}(A_w)$ is smaller than the smallest eigenvalue of $A$, or larger than the largest eigenvalue of $A$.

A simplification can be obtained by substituting the spectral decomposition $A = \sum_a a\,\Pi_a$ into the RHS of eq. 1 and taking the real part:

$$\mathrm{Re}(A_w) = \sum_a a\,\mathrm{Re}\!\left((\Pi_a)_w\right).$$

If we had $0 \le \mathrm{Re}\!\left((\Pi_a)_w\right) \le 1$ for all $a$ then $\mathrm{Re}(A_w)$ could not be anomalous. Hence an anomalous weak value for any observable always implies an anomalous weak value for a projector. Since $\sum_a \mathrm{Re}\!\left((\Pi_a)_w\right) = 1$, if one projector has $\mathrm{Re}\!\left((\Pi_a)_w\right) > 1$ then another must have $\mathrm{Re}\!\left((\Pi_{a'})_w\right) < 0$. In conclusion, without loss of generality we can always take the anomalous weak value to be associated with a projector $\Pi$ having $\mathrm{Re}(\Pi_w) < 0$.
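This reduction can be checked numerically. The following sketch uses an assumed qubit example with $A = \sigma_x$, pre-selection $|0\rangle$, and a post-selection chosen so that $(\sigma_x)_w = 100$ (illustrative states, not taken from the original paper):

```python
import numpy as np

psi = np.array([1, 0], dtype=complex)                 # pre-selection |0>
alpha = np.arctan(100.0)                              # gives (sigma_x)_w = 100
phi = np.array([np.cos(alpha), np.sin(alpha)], dtype=complex)

# Eigenprojectors of sigma_x: onto |+> (eigenvalue +1) and |-> (eigenvalue -1)
plus  = np.array([1, 1], dtype=complex) / np.sqrt(2)
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)
proj = lambda v: np.outer(v, v.conj())

wv = lambda P: (phi.conj() @ P @ psi) / (phi.conj() @ psi)
w_plus, w_minus = wv(proj(plus)), wv(proj(minus))

# The projector weak values sum to 1 (since the projectors sum to I) ...
assert np.isclose(w_plus + w_minus, 1)
# ... so the eigenvalue-weighted sum of 100 forces one of them negative:
print(w_plus.real, w_minus.real)   # ~ 50.5 and ~ -49.5
```

Here $(\sigma_x)_w = (+1)\,w_+ + (-1)\,w_- = 100$ together with $w_+ + w_- = 1$ forces $\mathrm{Re}(w_-) < 0$, exactly the structure used in the argument above.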

I will now briefly review the relevant notion of non-contextuality, following Spekkens (2005) (where the definitions are motivated and compared to the traditional definition of non-contextuality due to Kochen and Specker Kochen and Specker (1968)). Assumptions of non-contextuality are constraints on an ontological model. I will only need two notions: measurement non-contextuality, and outcome determinism for sharp measurements. (The latter can be shown to itself follow from the assumption of preparation non-contextuality together with some simple facts about quantum theory, see Spekkens (2005, 2014) for details.)

Suppose we prepare a quantum system in some way, represented in quantum theory by a state $|\psi\rangle$. In an ontological model the preparation is represented by a probability distribution $\mu(\lambda)$ over a set of ontic states $\Lambda$. Suppose we now implement the POVM $\{E_k\}$. In a measurement non-contextual model, this is represented by a conditional probability distribution $\Pr(E_k|\lambda)$. The assumption of measurement non-contextuality is what allows us to write $\Pr(E_k|\lambda)$ as a function of the effect $E_k$ and the ontic state $\lambda$ only, with no dependence on other things (“contexts”), such as the other elements of the POVM or details of how the POVM was implemented. Outcome determinism for sharp measurements is the assumption that $\Pr(\Pi|\lambda) \in \{0,1\}$ for all projectors $\Pi$ and ontic states $\lambda$, so that any inability to predict the outcome of a projective measurement is due purely to ignorance of $\lambda$.

The final requirement, for any ontological model, is that when we marginalise over the ontic states, the model must reproduce the predictions of quantum theory:

$$\int_\Lambda \Pr(E_k|\lambda)\,\mu(\lambda)\,d\lambda = \langle\psi|E_k|\psi\rangle.\qquad(2)$$
We can now state the main result, identifying certain features in the measurement of anomalous weak values that, taken together, defy non-contextual explanation.

Theorem 1.

Suppose we have states $|\psi\rangle$ and $|\phi\rangle$, and a generalized measurement Nielsen and Chuang (2000) $\{M_m\}_{m\in\mathbb{R}}$, such that

  1. The pre- and post-selection are non-orthogonal, i.e.

$$\langle\phi|\psi\rangle \neq 0,\qquad(3)$$
  2. The POVM is a projector plus unbiased noise, i.e.

$$M_m^\dagger M_m = p(m-1)\,\Pi + p(m)\,(I-\Pi)\qquad(4)$$

    for some projector $\Pi$ and probability distribution $p$ with median $0$,

  3. We can define a probability $p_d$ (the “probability of disturbance”) such that

$$\int M_m\,\rho\,M_m^\dagger\,dm = (1-p_d)\,\rho + p_d\left(D\rho D + (I-D)\,\rho\,(I-D)\right)\quad\text{for all states }\rho,\qquad(5)$$

    for some POVM $\{D,\ I-D\}$, and

  4. The values of $m$ under the pre- and post-selection have a negative bias that “outweighs” $p_d$, i.e.

$$\frac{1}{|\langle\phi|\psi\rangle|^2}\int_{-\infty}^{0}\langle\psi|M_m^\dagger|\phi\rangle\langle\phi|M_m|\psi\rangle\,dm \;>\; \frac12 + \frac{p_d}{|\langle\phi|\psi\rangle|^2}.\qquad(6)$$

    (Notice that although the LHS of eq. 6 is a combination of operationally defined quantities, it is not exactly the probability of getting a negative $m$ under the pre- and post-selection. To obtain the latter, instead of dividing by $|\langle\phi|\psi\rangle|^2$ one would have to divide by $\int\langle\psi|M_m^\dagger|\phi\rangle\langle\phi|M_m|\psi\rangle\,dm$, making the analysis slightly more complicated but still tractable.)
Then there is no measurement non-contextual ontological model for the preparation of $|\psi\rangle$, measurement of $\{M_m\}$, and post-selection of $\{|\phi\rangle\langle\phi|,\ I-|\phi\rangle\langle\phi|\}$ satisfying outcome determinism for sharp measurements.

(Showing that operators with these properties actually exist whenever we have a $\Pi$, $|\psi\rangle$ and $|\phi\rangle$ with $\mathrm{Re}(\Pi_w) < 0$ is a routine calculation in the theory of weak measurement Aharonov et al. (1988); Jozsa (2007); Garretson et al. (2004), postponed until later. Loosely speaking, if $\epsilon$ is the strength of the measurement then to leading order the amount by which the LHS of eq. 6 exceeds $\frac12$ is proportional to $\epsilon\,|\mathrm{Re}(\Pi_w)|$, whereas $p_d$ is proportional to $\epsilon^2$.)


Proof. Suppose such an ontological model exists. We can consider the weak measurement followed by the projective measurement of $\{|\phi\rangle\langle\phi|,\ I-|\phi\rangle\langle\phi|\}$ as one “consolidated measurement”, represented by the POVM $\{E_{(m,f)}\}$, where $E_{(m,1)} = M_m^\dagger|\phi\rangle\langle\phi|M_m$ and $E_{(m,0)} = M_m^\dagger(I-|\phi\rangle\langle\phi|)M_m$. The key question is how the $E_{(m,1)}$ are represented in the model (write $\Pr(m,\phi|\lambda)$ for $\Pr(E_{(m,1)}|\lambda)$), because eq. 2 gives

$$\int_{-\infty}^{0}\langle\psi|M_m^\dagger|\phi\rangle\langle\phi|M_m|\psi\rangle\,dm = \int_{-\infty}^{0}\!\int_\Lambda \Pr(m,\phi|\lambda)\,\mu(\lambda)\,d\lambda\,dm.\qquad(7)$$
Let us consider two methods for implementing the POVM $\{M_m^\dagger M_m\}$. By the assumption of measurement non-contextuality they must both lead to the same $\Pr(M_m^\dagger M_m|\lambda) \equiv \Pr(m|\lambda)$. The first method is to implement the consolidated measurement and then ignore the result of the post-selection, giving $\Pr(m|\lambda) \ge \Pr(m,\phi|\lambda)$. The second method, according to eq. 4, is to measure $\{\Pi,\ I-\Pi\}$ and then classically sample $m$ from $p(m-1)$ or $p(m)$ as appropriate. Hence we also have $\Pr(m|\lambda) = p(m-1)\Pr(\Pi|\lambda) + p(m)\left(1-\Pr(\Pi|\lambda)\right)$. Since the median of $p$ is $0$ we have $\int_{-\infty}^{0}\Pr(m|\lambda)\,dm \le \frac12$. Combining this with $\Pr(m,\phi|\lambda) \le \Pr(m|\lambda)$ from the first method, we have

$$\int_{-\infty}^{0}\Pr(m,\phi|\lambda)\,dm \le \frac12.\qquad(8)$$
Next, we apply the assumption of measurement non-contextuality to the POVM $\{E_\phi,\ I-E_\phi\}$ with $E_\phi = \int M_m^\dagger|\phi\rangle\langle\phi|M_m\,dm$. One way to implement this is to use the consolidated measurement and ignore $m$, hence $\Pr(E_\phi|\lambda) = \int\Pr(m,\phi|\lambda)\,dm$. A second way, according to eq. 5, is to measure $\{|\phi\rangle\langle\phi|,\ I-|\phi\rangle\langle\phi|\}$ with probability $1-p_d$, and with probability $p_d$ to first measure $\{D,\ I-D\}$ and then $\{|\phi\rangle\langle\phi|,\ I-|\phi\rangle\langle\phi|\}$. Hence $\Pr(E_\phi|\lambda) \le (1-p_d)\Pr(\phi|\lambda) + p_d$.

Finally, we calculate the model’s prediction for the LHS of eq. 7. Using outcome determinism for the sharp measurement $\{|\phi\rangle\langle\phi|,\ I-|\phi\rangle\langle\phi|\}$ we can partition $\Lambda$ into $\Lambda_0 \cup \Lambda_1$, where $\Pr(\phi|\lambda) = f$ for $\lambda \in \Lambda_f$. From the above we have that $\int\Pr(m,\phi|\lambda)\,dm = \Pr(E_\phi|\lambda) \le p_d$ on $\Lambda_0$. Hence splitting the RHS of (7) into integrals over $\Lambda_0$ and $\Lambda_1$ and integrating over $m$ gives

$$\int_{-\infty}^{0}\langle\psi|M_m^\dagger|\phi\rangle\langle\phi|M_m|\psi\rangle\,dm \le p_d\,\mu(\Lambda_0) + \int_{\Lambda_1}\!\int_{-\infty}^{0}\Pr(m,\phi|\lambda)\,dm\,\mu(\lambda)\,d\lambda.$$

Applying eq. 8 and recalling that (2) gives $\mu(\Lambda_1) = |\langle\phi|\psi\rangle|^2$, this gives

$$\int_{-\infty}^{0}\langle\psi|M_m^\dagger|\phi\rangle\langle\phi|M_m|\psi\rangle\,dm \le p_d + \frac{|\langle\phi|\psi\rangle|^2}{2},$$

in contradiction to eq. 6. ∎

As promised, I will now confirm that a projector with $\mathrm{Re}(\Pi_w) < 0$ implies the existence of a measurement with the properties assumed in Theorem 1.

Similarly to Aharonov et al. (1988), the measurement begins by preparing a probe system in the Gaussian state $|\xi\rangle$, with $\langle x|\xi\rangle = \xi(x) = (2\pi s^2)^{-1/4}\,e^{-x^2/4s^2}$. This interacts with the system via the unitary (with $\hbar = 1$)

$$U = e^{-i\,\Pi\otimes P},$$

which defines our units of momentum and hence length, and then the probe is projectively measured in the position basis $\{|m\rangle\}$. On the system this is a generalised measurement with $M_m = \langle m|U|\xi\rangle$. Recalling that $P$ generates translations we have

$$M_m = \xi(m-1)\,\Pi + \xi(m)\,(I-\Pi).$$
This becomes a projective measurement in the limit $s \to 0$, whereas it is known as a weak measurement for large $s$. We can now calculate

$$M_m^\dagger M_m = p(m-1)\,\Pi + p(m)\,(I-\Pi),$$

where $p(m) = \xi(m)^2$ has median $0$, giving eq. 4. Recalling that $\xi$ is normalised and defining

$$c = \int \xi(m)\,\xi(m-1)\,dm = e^{-1/8s^2},$$

we obtain

$$\int M_m\,\rho\,M_m^\dagger\,dm = c\,\rho + (1-c)\left(\Pi\rho\Pi + (I-\Pi)\,\rho\,(I-\Pi)\right).$$

Setting $p_d = 1 - e^{-1/8s^2}$ and $D = \Pi$ (which is a projector) we have eq. 5.
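Both decompositions just obtained (the POVM of eq. 4 and the disturbance channel of eq. 5) can be verified by discretising $m$ on a grid. A sketch for an assumed qubit with $\Pi = |0\rangle\langle 0|$ and an arbitrary choice of $s$ (these are illustrative values, not from the paper):

```python
import numpy as np

s = 3.0
m = np.linspace(-40, 40, 20001)
dm = m[1] - m[0]
xi = lambda x: (2 * np.pi * s**2) ** (-0.25) * np.exp(-x**2 / (4 * s**2))

Pi = np.array([[1, 0], [0, 0]], dtype=complex)   # assumed projector |0><0|
Id = np.eye(2, dtype=complex)
M = [xi(mi - 1) * Pi + xi(mi) * (Id - Pi) for mi in m]   # Kraus operators M_m

# Eq. 4: the POVM elements p(m-1) Pi + p(m)(I - Pi), with p = xi^2, are complete
E = sum(Mi.conj().T @ Mi for Mi in M) * dm
assert np.allclose(E, Id, atol=1e-5)

# Eq. 5: the channel equals (1 - p_d) rho + p_d (Pi rho Pi + (I-Pi) rho (I-Pi))
rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # test state |+><+|
out = sum(Mi @ rho @ Mi.conj().T for Mi in M) * dm
p_d = 1 - np.exp(-1 / (8 * s**2))
dephased = Pi @ rho @ Pi + (Id - Pi) @ rho @ (Id - Pi)
assert np.allclose(out, (1 - p_d) * rho + p_d * dephased, atol=1e-5)
print("p_d =", p_d)
```

The asserts confirm that with probability $1-p_d$ the weak measurement leaves the system untouched, and with probability $p_d$ it acts like a projective measurement of $\{\Pi,\ I-\Pi\}$ whose result is discarded.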

Finally we need to calculate

$$\int_{-\infty}^{0}\langle\psi|M_m^\dagger|\phi\rangle\langle\phi|M_m|\psi\rangle\,dm = |\langle\phi|\psi\rangle|^2\left(\frac12 + 2I\,\mathrm{Re}(\Pi_w) + J\,|\Pi_w|^2\right),$$

where we have recalled eq. 1 and defined the integrals

$$I = \int_{-\infty}^{0}\xi(m)\left(\xi(m-1)-\xi(m)\right)dm, \qquad J = \int_{-\infty}^{0}\left(\xi(m-1)-\xi(m)\right)^2 dm.$$

Expanding around $1/s = 0$ we find $I = -\frac{1}{2\sqrt{2\pi}\,s} + O(1/s^2)$ and $J = O(1/s^2)$. Since $\mathrm{Re}(\Pi_w) < 0$ this gives

$$\frac{1}{|\langle\phi|\psi\rangle|^2}\int_{-\infty}^{0}\langle\psi|M_m^\dagger|\phi\rangle\langle\phi|M_m|\psi\rangle\,dm = \frac12 + \frac{|\mathrm{Re}(\Pi_w)|}{\sqrt{2\pi}\,s} + O(1/s^2).$$

Meanwhile to leading order $p_d \approx 1/8s^2$. Hence, provided $\mathrm{Re}(\Pi_w) < 0$, for sufficiently large $s$ we will satisfy eq. 6. It is worth emphasising that no approximations were made in the proof of Theorem 1, and in a concrete case one can simply plug values of $s$, $|\psi\rangle$, $|\phi\rangle$ and $\Pi$ into the exact formulas above to verify eq. 6.
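As a concrete instance of “plugging in values”, the following sketch evaluates the pre- and post-selected negative-$m$ probability on a grid and compares it with the non-contextual bound, for an assumed qubit example with $\mathrm{Re}(\Pi_w) = -\tfrac12$ (the states and the value of $s$ are illustrative choices):

```python
import numpy as np

s = 10.0                                # weak regime: large s
m = np.linspace(-120, 120, 240001)
dm = m[1] - m[0]
xi = lambda x: (2 * np.pi * s**2) ** (-0.25) * np.exp(-x**2 / (4 * s**2))

psi = np.array([1, 0], dtype=complex)                   # pre-selection |0>
phi = np.array([1, 2], dtype=complex) / np.sqrt(5)      # post-selection
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)
Pi = np.outer(minus, minus.conj())                      # projector onto |->
Id = np.eye(2, dtype=complex)

Pi_w = (phi.conj() @ Pi @ psi) / (phi.conj() @ psi)     # weak value, here -1/2
overlap = abs(phi.conj() @ psi) ** 2                    # |<phi|psi>|^2 = 0.2
p_d = 1 - np.exp(-1 / (8 * s**2))

# Amplitude <phi|M_m|psi> with M_m = xi(m-1) Pi + xi(m) (I - Pi)
a = phi.conj() @ Pi @ psi
b = phi.conj() @ (Id - Pi) @ psi
neg = m[m < 0]
lhs = np.sum(np.abs(xi(neg - 1) * a + xi(neg) * b) ** 2) * dm

# The quantum value exceeds what any non-contextual model can produce
print(lhs, overlap / 2 + p_d)
assert lhs > overlap / 2 + p_d
```

Here the negative bias scales like $|\mathrm{Re}(\Pi_w)|/s$ while $p_d$ scales like $1/s^2$, so the inequality holds comfortably once $s$ is large enough.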

I will conclude by outlining three interconnected lessons from Theorem 1. The first is a classification of how anomalous weak values could arise in an ontological model. One possibility (perhaps the most common realist interpretation of anomalous weak values) is that some ontic states are pre-disposed to manifest such values, in violation of the first application of measurement non-contextuality (via eq. 4). Alternatively (along the lines of Ferrie and Combes (2014)) the weak measurement may disturb the system much more than the quantum formalism would suggest, in violation of the second application (via eq. 5). The final possibility is that the post-selection is not represented outcome deterministically (as in the interpretation where the ontic state is simply the quantum state) and so fails to identify a particular set of ontic states.

The second lesson is that a large number of aspects of the manifestation of anomalous weak values seem to be involved in preventing non-contextual explanation. The “anomaly” itself is only one ingredient. Some others may have been anticipated, such as the favourable information-disturbance tradeoff of weak measurements. But some seem somewhat surprising, for example the importance of the post-selection being a projective measurement.

The final lesson is an indication of what it would take for an experiment involving anomalous weak values to exclude non-contextual theories that would provide a good classical explanation. Merely observing “anomalous pointer readings” under pre- and post-selection is far from sufficient. Most fundamentally, the experiment must show that the probabilities in the statement of Theorem 1 really are the probabilities of discrete events, rather than mere (normalised) intensities. An experiment consistent with a classical field theory, so far the most common way to observe weak values, is therefore not sufficient. (This is because the analysis presented here, like any proof of contextuality, is for an ontological model that produces individual measurement results with the correct probabilities. This requirement immediately rules out a straightforward field ontology that, whilst perhaps offering interesting explanations of the weak value Bliokh et al. (2013); Berry and Popescu (2006); Dressel et al. (2014b), only produces intensities. To exclude non-contextual explanation, an experiment based on fields would have to justify this requirement by working at the level of single field quanta. Compare with the classic “double-slit experiment”: whilst an interference pattern in intensities has a simple classical explanation in terms of fields, the same interference pattern in what are unambiguously probabilities defies classical intuitions.) One would also need to provide evidence for an operational version of eqs. 4 and 5. Notice that these would be statements about how the weak measurement works on all preparations, not just the one corresponding to $|\psi\rangle$.
Furthermore, one would need an operational counterpart to the inference from preparation non-contextuality to outcome determinism for the post-selection measurement, perhaps by implementing preparations that make the post-selection highly predictable (see Spekkens and Kunjwal for how this can be done in more traditional proofs of contextuality). Turning these ideas into a concrete experimental proposal is an interesting avenue for future work.

Thanks to Aharon Brodutch, Joshua Combes, Chris Ferrie, Ravi Kunjwal and Matt Leifer for useful discussions. I am particularly indebted to Matt for help in analysing measurement-disturbance, and to Aharon for bringing the issue of intensities versus probabilities to my attention. Research at Perimeter Institute is supported in part by the Government of Canada through NSERC and by the Province of Ontario through MRI.

