
We review an event-based simulation approach that reproduces the statistical distributions of wave theory, not by requiring knowledge of the solution of the wave equation of the whole system, but by generating detection events one-by-one according to an unknown distribution. We illustrate its applicability to various single-photon and single-neutron interferometry experiments and to two Bell-test experiments: a single-photon Einstein-Podolsky-Rosen experiment employing post-selection for photon-pair identification, and a single-neutron Bell-test interferometry experiment with nearly 100% detection efficiency.

Chapter 1 Event-based simulation of quantum physics experiments

K. Michielsen (corresponding author) and H. De Raedt

Int. J. Mod. Phys. C Vol. 25, No. 8 (2014) 1430003

Institute for Advanced Simulation, Jülich Supercomputing Centre,
Forschungszentrum Jülich,
D-52425 Jülich, Germany and
RWTH Aachen University, D-52056 Aachen, Germany

Department of Applied Physics, Zernike Institute for Advanced Materials,
University of Groningen, Nijenborgh 4,
NL-9747 AG Groningen, The Netherlands

Keywords: computational techniques; discrete event simulation; quantum theory

PACS: 02.70.-c, 03.65.-w, 03.65.Ud


1 Introduction

The statistical properties of a vast number of laboratory experiments with individual entities such as electrons, atoms, molecules and photons can be described extremely well by quantum theory. The mathematical framework of quantum theory allows for a straightforward calculation of numbers which can be compared with experimental data, as long as these numbers refer to statistical averages of measured quantities, such as, for example, an interference pattern, the specific heat or the magnetic susceptibility.

However, as soon as an experiment records the individual clicks of a detector which contribute to the statistical average of a quantity, a fundamental problem appears. Quantum theory provides a recipe to compute the frequencies for observing events, but it does not account for the observation of the individual events themselves, a manifestation of the quantum measurement problem. [1, 2] Examples of such experiments are single-particle interference experiments, in which the interference pattern is built up by successive discrete detection events, and Bell-test experiments, in which two-particle correlations are computed as averages over pairs of individual detection events recorded at two different detectors and are seen to take values corresponding to those of the singlet state in the quantum theoretical description.

An intriguing question to be answered is why individual entities which do not interact with each other can exhibit the collective behavior that gives rise to the observed interference pattern, and why two particles which only interacted in the past can, after individual local manipulation and detection, show correlations corresponding to those of the singlet state. Since quantum theory postulates that it is fundamentally impossible to go beyond the description in terms of probability distributions, an answer in terms of a cause-and-effect description of the observed phenomena cannot be given within the framework of quantum theory.

We provide an answer by constructing an event-based simulation model that reproduces the statistical distributions of quantum (and Maxwell’s) theory without solving a wave equation but by modeling physical phenomena as a chronological sequence of events whereby events can be actions of an experimenter, particle emissions by a source, signal generations by a detector, interactions of a particle with a material and so on. [3, 4, 5] The underlying assumption of the event-based simulation approach is that current scientific knowledge derives from the discrete events which are observed in laboratory experiments and from relations between those events. Hence, the event-based simulation approach concerns what we can say about these experiments but not what “really” happens in Nature. This underlying assumption strongly differs from the premise that the observed discrete events are signatures of an underlying objective reality which is mathematical in nature.

The general idea of the event-based simulation method is that simple rules define discrete-event processes which may lead to the behavior observed in experiments. The basic strategy in designing these rules is to carefully examine the experimental procedure and to devise rules that produce the same kind of data as those recorded in experiment, while avoiding the trap of simulating thought experiments that are difficult to realize in the laboratory. Evidently, mainly because of insufficient knowledge, the rules are not unique. Hence, the simplest rules can be used until a new experiment indicates otherwise. On the one hand one may consider the method to be entirely classical, since it only uses concepts of the macroscopic world; on the other hand one could consider it to be nonclassical, because some of the rules are not those of classical Newtonian dynamics.

Obviously, using trial and error to find discrete-event rules that reproduce experimental results is unlikely to be successful. Instead, we started our search for useful rules by asking ourselves the question “by what kind of discrete-event rule should a beam splitter operate in order to mimic the build-up, event-by-event, of the interference pattern observed in the single-photon Mach-Zehnder experiments performed by Grangier et al. [6]?” The simplest rule (discussed below) that performs this task seems to be rather generic in the sense that it can be used to construct discrete-event processes that reproduce the results of many interference experiments. Of course, for some experiments, the simple rule is “too simple” and more sophisticated, backwards compatible variants are required. However, the guiding principle for designing the latter is the same as for the simple rule.

The event-based approach has successfully been used for discrete-event simulations of the single beam splitter and Mach-Zehnder interferometer experiments of Grangier et al. [6] (see Refs. 7, 8, 3), Wheeler’s delayed choice experiment of Jacques et al. [9] (see Refs. 10, 11, 3), the quantum eraser experiment of Schwindt et al. [12] (see Refs. 13, 3, 14), two-beam single-photon interference experiments and the single-photon interference experiment with a Fresnel biprism of Jacques et al. [15] (see Refs. 16, 3, 4), quantum cryptography protocols (see Ref. 17), the Hanbury Brown-Twiss experiment of Agafonov et al. [18] (see Refs. 19, 3, 20), universal quantum computation (see Refs. 21, 22), Einstein-Podolsky-Rosen-Bohm (EPRB)-type of experiments of Aspect et al. [23, 24] and Weihs et al. [25] (see Refs. 26, 27, 28, 29, 30, 31, 3, 4), the propagation of electromagnetic plane waves through homogeneous thin films and stratified media (see Refs. 32, 3), and neutron interferometry experiments (see Refs. 4, 5).

In this paper, we review the applicability of the event-based simulation method to various single-photon and single-neutron interferometry experiments and to Bell-test experiments. The paper is organized as follows. Section 2 is devoted to the single-particle two-slit experiment, one of the most fundamental experiments in quantum physics. We first discuss Feynman’s thought experiment, demonstrating single-electron interference, and briefly review its laboratory realizations. We then describe the two-beam experiment with single photons, a variant of Young’s double-slit experiment. For these single-particle interference experiments quantum theory gives a recipe to compute the observed interference pattern after many detection events have been registered, but it does not account for the one-by-one build-up process of the pattern in terms of the individual detection events. Hence, as formulated in section 3, the challenge is to come up with a set of rules which allow one to produce detection events with frequencies that agree with a given distribution (in this particular case a two-slit interference pattern) without these rules referring, in any way, to the distribution itself. The event-based simulation method solves this challenging problem by modeling physical phenomena as a chronological sequence of events, such as actions of the experimenter, particles emitted by a source, signals generated by a detector and so on. In section 4 we explain the basis of the event-based simulation method by specifying rules which allow one to reproduce the results of the quantum theoretical description of the idealized Stern-Gerlach experiment and of a single-photon experiment with a linearly birefringent crystal demonstrating Malus’ law, without making any use of quantum theoretical concepts. In this section, we also discuss the efficiency of two types of single-particle detectors used in the event-based simulation method.
In section 5 we show that a similar set of rules can be used to simulate single-particle interference. We demonstrate this for the single-photon two-beam experiment, thereby also simulating Feynman’s thought experiment, for the Mach-Zehnder interferometer experiment, for Wheeler’s delayed-choice experiment and for a single-neutron interferometry experiment with a Mach-Zehnder-type interferometer. We explain why the event-based simulation method can produce interference without solving a wave problem. Section 6 is devoted to the event-based simulation of EPRB-type experiments with correlated photon pairs and with neutrons with correlated spatial and spin degrees of freedom. Since both experiments are Bell-test experiments, testing whether or not a Bell-CHSH (Clauser-Horne-Shimony-Holt) inequality can be violated, we also elaborate on the conclusions that can be drawn from such a violation. For both experiments we explain why the event-based model, a classical causal model, can produce the results of quantum theory. A discussion is given in section 7.

2 Two-slit and two-beam experiments

One of the most fundamental experiments in quantum physics is the single-particle double-slit experiment. Feynman stated that the phenomenon of electron diffraction by a double-slit structure is “impossible, absolutely impossible, to explain in any classical way, and has in it the heart of quantum mechanics. In reality it contains the only mystery.” [33] While Young’s original double-slit experiment helped establish the wave theory of light, [34] variants of the experiment over the years with electrons (see below), single photons (see below), neutrons, [35, 36] atoms [37, 38] and molecules [39, 40, 41] helped the development of ideas on concepts such as wave-particle duality in quantum theory. [2]

Two prevailing variants of the double-slit experiment can be recognized: one consists of a source and a screen with two apertures, the other of a source and a biprism. The first is a genuine two-slit experiment, in which the two slits can be regarded as two virtual sources S1 and S2; the second is a two-beam experiment, which can also be described by a system with two virtual sources S1 and S2. [42] In contrast to the two-slit experiment, in which both diffraction (or scattering) and interference play a role, diffraction or scattering is absent in the two-beam experiment, except at the sources themselves.

A brief note on the difference in usage of the words diffraction, scattering and interference is in order here. Feynman mentioned in his lecture notes that “no-one has ever been able to define the difference between interference and diffraction satisfactorily. It is just a question of usage, and there is no specific, important physical difference between them.” [43] In classical optics, diffraction is the bending of a wave as it passes through an opening or goes around an object; the amount of bending depends on the dimensions of the object or opening relative to the wavelength of the wave. Interference is the superposition of two or more waves resulting in a new wave pattern. Therefore a double-slit, as well as a single-slit, structure illuminated by (classical) light yields an interference (or diffraction) pattern due to both diffraction and interference. In principle, diffraction and interference are phenomena observed only with waves. However, an interference pattern identical in form to that of classical optics can be observed by collecting many detector spots or clicks which are the result of electrons, photons, neutrons, atoms or molecules travelling one-by-one through a double-slit structure. In these experiments the so-called interference pattern is the statistical distribution of the detection events (spots at, or clicks of, the detector). Hence, in these particle-like experiments, only the correlations between detection events reveal interference. Misleadingly, this interference pattern is often called a diffraction pattern, in analogy with classical optics where both diffraction and interference are responsible for the resulting pattern. In the particle-like experiment it would be better to replace the word diffraction by scattering, because scattering refers to the spreading of a beam of particles (or rays) over a range of directions as a result of collisions with other particles or objects. In what follows we use the term interference pattern for the statistical distribution of detection events.

2.1 Two-slit experiment with electrons

In 1964 Feynman described a thought experiment consisting of an electron gun emitting individual electrons towards a thin metal plate with two slits in it, behind which a movable detector is placed. [33] Feynman made the following observations:

  • Sharp, identical “clicks”, which are distributed erratically, are heard from the detector.

  • The probability P1 or P2 of arrival at position x, through one slit with the other slit closed, is a symmetric curve with its maximum located at the centre position of the open slit.

  • The probability P12 of arrival through both slits looks like the intensity of water waves that propagated through two holes, forming a so-called “interference pattern”, and looks completely different from the curve P1 + P2, the curve that would be obtained by repeating the experiment with bullets.

which led him to the following conclusions:

  • Electrons arrive at the detector in identical “lumps”, like particles.

  • The probability of arrival of these lumps is distributed like the distribution of intensity of a wave propagating through both holes.

  • It is in this sense that an electron behaves “sometimes like a particle and sometimes like a wave”.

Note that Feynman made his reasoning with probabilities P1, P2 and P12, which he said to be proportional to the average rates of clicks N1, N2 and N12. However, one cannot simply add P1 and P2 and compare the result with P12, because these are probabilities for different conditions (different “contexts”), namely only slit 1 open, only slit 2 open, and both slits 1 and 2 open, respectively. [2] Hence, no conclusions can be drawn from making the comparison between P1 + P2 and P12.

Although Feynman wrote “you should not try to set up this experiment” because “the apparatus would have to be made on an impossibly small scale to show the effects we are interested in”, advances in (nano)technology have made possible various laboratory implementations of his fundamental thought experiment. The first electron interference pattern obtained with an electron biprism, the analog of a Fresnel biprism in optics, was reported in 1955. [44, 45] In 1961 Jönsson performed the first electron interference experiment with multiple (up to five) slits in the micrometer range. [46] However, these were not single-electron interference experiments, since there was more than one electron in the apparatus at any one time. The first true single-electron interference experiments were electron-biprism experiments (for a review see Refs. 47, 48), in which single electrons pass either to the left or to the right of a conducting wire (there are no real slits in this type of experiment). [49, 50, 51] In these experiments the interference pattern is built up from many independent detection events. Electron-electron interaction plays no role in the interference process, since the electrons pass the wire one-by-one. More recently, single-electron interference experiments have been demonstrated with one-, two-, three- and four-slit structures fabricated by focused ion beam milling. [52, 53, 54] However, in these experiments only the final recorded electron intensity is shown. In a follow-up single-electron two-slit experiment a fast-readout pixel detector was used, which allows the measurement of the distribution of the electron arrival times and the observation of the build-up of the interference pattern by individual detection events. [55] Hence, this experiment comes very close to Feynman’s thought experiment, except that the two electron distributions for one slit open and the other one closed were not measured. Note that one of these distributions was measured in Ref. 52 by a non-reversible process of closing one slit and without using the fast-readout pixel detector. Very recently, it has been reported that a full realization of Feynman’s thought experiment has been performed. [56] In this experiment a movable mask is placed behind the double-slit structure to open and close the slits. Unfortunately, the mask is positioned behind the slits and not in front of them, so that all electrons always encounter a double-slit structure and are filtered afterwards by the mask. Hence, one could say that, as of 2014, Feynman’s thought experiment has yet to be performed.

2.2 Two-beam experiment with photons

Another interesting variant of Young’s double-slit experiment involves a very dim light source, so that on average only one photon is emitted by the source at any time. Inspired by Thomson’s idea that light consists of indivisible units that are more widely separated when the intensity of light is reduced, [57] in 1909 Taylor conducted an experiment with a light source of varying strength illuminating a needle, thereby demonstrating that the diffraction pattern observed with a feeble light source (exposure time of three months) was as sharp as the one obtained with an intense source and a shorter exposure time. [58] In 1985, a double-slit experiment was performed with a low-pressure mercury lamp and neutral density filters to realize a very low light level. [59] It was shown that at the start of the measurement bright dots appeared at random positions on the detection screen and that after a couple of minutes an interference pattern appeared. Demonstration versions of double-slit experiments illuminated by strongly attenuated lasers are reported in Refs. 60, 61 and in figure 1 of Ref. 62. However, attenuated laser sources are imperfect single-photon sources: light from these sources attenuated to the single-photon level never antibunches, which means that the anticorrelation parameter α ≥ 1, whereas an ideal single-photon source has α = 0. In 2005, a variation of Young’s experiment was performed with a Fresnel biprism and a single-photon source based on the pulsed, optically excited photoluminescence of a single N-V colour centre in a diamond nanocrystal. [15] In this two-beam experiment there is always at most one photon between the source and the detection plane. It was observed that the interference pattern gradually builds up, starting from a couple of dots spread over the screen for small exposure times. A time-resolved two-beam experiment has been reported in Refs. 63, 64. Recently, a temporally and spatially resolved two-beam experiment has been performed with entangled photons, providing insight into the dynamics of the build-up process of the interference pattern. [65]

2.3 The experimental observations and their quantum theoretical description

The common observation in these single-particle interference experiments, where “single particle” can be read as electron, photon, neutron, atom or molecule, is that individual detection events gradually build up an interference pattern and that the final interference pattern can be described by wave theory. In trying to give a pictorial (cause-and-effect) view of what is going on in these experiments, it is commonly assumed that there is a one-to-one correspondence between an emission event, “the departure of a single particle from the source” and a detection event, “the arrival of the single particle at the detector”. This assumption might be wrong. The only conclusion that can be drawn from the experiments is that there is some relation between the emission and detection events.

In view of the quantum measurement problem, [1, 2, 66] a cause-and-effect description of the observed phenomena is unlikely to be found within the framework of quantum theory. Quantum theory provides a recipe to compute the frequencies for observing events and thus to compute the final interference pattern observed after the experiment is finished. However, it does not account for the observation of the individual detection events building up the interference pattern. In fact, quantum theory postulates that it is fundamentally impossible to go beyond the description in terms of probability distributions. Of course, one could simply use pseudo-random numbers to generate events according to the probability distribution obtained by solving the time-dependent Schrödinger equation. However, that is not the problem one has to solve, as it assumes that the probability distribution of the quantum mechanical problem is known, which is exactly the knowledge that one has to generate without making reference to quantum theory. If we wanted to produce, event-by-event, the interference pattern from Maxwell’s theory without generating events according to the known intensity function, we would face a similar problem.
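To make concrete why this route accomplishes nothing, here is a minimal Python sketch of such a question-begging event generator: it does produce detector clicks one-by-one, but only because a fringe intensity is handed to it up front (the cos² profile and the fringe spacing below are arbitrary illustrative choices, not taken from any of the experiments discussed here).

```python
import math
import random

FRINGE_PERIOD = 0.1  # illustrative fringe spacing (arbitrary units)

def sample_clicks(n_events, width=1.0, seed=42):
    """Generate detection positions on [-width/2, width/2] by rejection
    sampling from a presupposed fringe intensity
    I(x) = cos^2(pi * x / FRINGE_PERIOD).  This is precisely the step the
    event-based approach must avoid: the distribution is known in advance."""
    rng = random.Random(seed)
    clicks = []
    while len(clicks) < n_events:
        x = rng.uniform(-width / 2, width / 2)
        # accept x with probability proportional to the known intensity
        if rng.random() < math.cos(math.pi * x / FRINGE_PERIOD) ** 2:
            clicks.append(x)
    return clicks

clicks = sample_clicks(10000)
```

Histogramming `clicks` reproduces the fringes, but the "explanation" is circular: the distribution was an input, not an output.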

3 Theoretical challenge and paradigm shift

In general, the challenge is the following. Given a probability distribution of observing events, construct an algorithm which runs on a digital computer and produces events with frequencies that agree with the given distribution, without the algorithm referring, in any way, to the probability distribution itself. Traditionally, the behavior of systems is described in terms of mathematics, making use of differential or integral equations, probability theory and so on. Although this traditional modeling approach has proven to be very successful, it does not seem capable of tackling this challenge. The challenge requires something as disruptive as a paradigm shift. In scientific fields other than (quantum) optics or quantum mechanics in general, such a paradigm shift has been realized in the form of a discrete-event approach, describing the often very complex collective behavior of systems with a set of very simple rules. Examples of this approach are the lattice Boltzmann model describing the flow of (complex) fluids and the cellular automata of Wolfram. [67]

We have developed a discrete-event simulation method to solve the above-mentioned challenging problem by modeling physical phenomena as a chronological sequence of events, whereby events can be actions of the experimenter, particles emitted by a source, signals generated by a detector, particles impinging on material, and so on. The basic idea of the simulation method is to invent an algorithm which uses the same kind of events (data) as in experiment and reproduces the statistical results of quantum or wave theory without making use of this theory. An overview of the method and its applications can be found in Refs. 3, 4, 5. The method provides an “explanation” and “understanding” of what is going on in terms of elementary events, logic and arithmetic. Note that a cause-and-effect simulation on a digital computer is a “controlled experiment” on a macroscopic device which is logically equivalent to a mechanical device. Hence, an event-by-event simulation that reproduces results of quantum theory shows that there exists a macroscopic, mechanical model that mimics the underlying physical phenomena. This is completely in agreement with Bohr’s answer to the question whether the algorithm of quantum mechanics could be considered as somehow mirroring an underlying quantum world: “There is no quantum world. There is only an abstract quantum mechanical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature.” [68] Although widely circulated, these sentences are reported by Petersen [68] and there is doubt that Bohr actually used this wording. [69]

4 Event-by-event simulation method

4.1 Stern-Gerlach experiment

We explain the basics of the event-by-event simulation method using the observations made in the Stern-Gerlach experiment. [70] The experiment shows that a beam of silver atoms directed through an inhomogeneous magnetic field splits into two components. The conclusion drawn by Gerlach and Stern is that, independent of any theory, it can be stated, as a pure result of the experiment, and as far as the exactitude of their experiments allows them to say so, that silver atoms in a magnetic field have only two discrete values of the component of the magnetic moment in the direction of the field strength; both have the same absolute value with each half of the atoms having a positive and a negative sign respectively. [71]

In quantum theory, the stationary state of the two-state system, which is the representation of the statistical experiment, is described by the density matrix ρ = (1 + σ·a)/2, where σ denotes the Pauli vector and a the unit vector along the average direction of the magnetic moments. The average measured magnetic moment in the direction b is given by ⟨M⟩ = Tr(ρ σ·b) = a·b.

The fundamental question is how to go from the averages to the events observed in the experiment. Application of Born’s rule gives the probability to observe an atom in the beam (anti-)parallel to the direction b,

    P(w | a·b) = (1 + w a·b)/2,

where w = +1 (w = −1) refers to the beam parallel (anti-parallel) to b.
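This quantum theoretical recipe is easy to check numerically. The sketch below (the directions a and b are arbitrary illustrative choices) builds the density matrix ρ = (1 + σ·a)/2 and verifies that the Born probabilities (1 ± a·b)/2 reproduce the average moment Tr(ρ σ·b) = a·b.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def density_matrix(a):
    """rho = (1 + sigma.a)/2 for a unit vector a."""
    return 0.5 * (np.eye(2) + a[0] * sx + a[1] * sy + a[2] * sz)

def born_probabilities(ab):
    """P(w | a.b) = (1 + w*a.b)/2 for w = +1 and w = -1."""
    return (1 + ab) / 2, (1 - ab) / 2

a = np.array([0.0, 0.0, 1.0])                  # average moment direction
b = np.array([np.sin(0.3), 0.0, np.cos(0.3)])  # field direction (unit vector)
rho = density_matrix(a)
sigma_b = b[0] * sx + b[1] * sy + b[2] * sz
moment = np.trace(rho @ sigma_b).real          # average moment along b
p_plus, p_minus = born_probabilities(a @ b)
```

The two probabilities sum to one, and their difference p_plus − p_minus equals a·b, i.e. the average over events reproduces the quantum theoretical expectation value.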

Given this probability, the question is how to generate a sequence of “true” random numbers w_1, w_2, …, w_N, each taking the values ±1, such that their sample average approaches a·b for large N. Probability theory postulates that such a procedure exists, but is silent about what the procedure should look like. In practice one could use a probabilistic processor, a device which responds to and processes input in a probabilistic way, employing a pseudo-random number generator to produce a uniformly distributed number 0 ≤ r_n < 1 and to output w_n = +1 if r_n < (1 + a·b)/2 and w_n = −1 otherwise. Repeating this procedure N times gives a sample average close to a·b. However, the form of the probability P(w | a·b) is postulated from the start, and the procedure is deterministic, thereby only giving the illusion of randomness to anyone who does not know the details of the algorithm and the initial state of the pseudo-random number generator. Hence we have accomplished nothing, and the question is whether we can do better than with this probabilistic processor.
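A minimal Python sketch of such a probabilistic processor (the value of a·b below is an arbitrary illustrative choice):

```python
import random

def probabilistic_processor(ab, n_events, seed=0):
    """Emit events w_n = +1 or -1 using pseudo-random numbers r_n and the
    postulated probability (1 + a.b)/2 for w_n = +1."""
    rng = random.Random(seed)
    return [+1 if rng.random() < (1 + ab) / 2 else -1
            for _ in range(n_events)]

ab = 0.25                          # illustrative value of a.b
events = probabilistic_processor(ab, 100_000)
w_bar = sum(events) / len(events)  # sample average, approaches a.b
```

The sample average indeed approaches a·b, but only because the Born-rule probability was built into the acceptance test.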

Let us consider a deterministic processor, a deterministic learning machine (DLM), [8, 72] that receives input in the form of identical numbers

    x_n = x = a·b,    n = 1, …, N.

The processor has an internal state represented by a variable u_n which adapts to the received input in a manner such that the difference with the input is minimal, namely

    u_n = γ u_{n−1} + (1 − γ) w_n,    w_n = 2Θ(x − γ u_{n−1}) − 1,

where Θ denotes the unit step function taking only the values 0 or 1, and 0 < γ < 1 is a learning parameter controlling both the speed and the accuracy with which the processor learns the input value x. The initial value u_0 of the internal state is chosen at random. The output numbers generated by the processor are the two-valued events

    w_n = ±1.
In general the behavior of the deterministic processor defined by this update rule is difficult to analyze without a computer. However, the operation of the processor is easily translated into a few lines of computer code, and a step-by-step analysis of the first iterations already gives a quick notion of how the internal state adapts to the input: u_n comes closer to x, moves away from it, comes closer again in a next step, and keeps oscillating around x in the stationary regime. A detailed mathematical analysis of the dynamics of the processor defined by this rule is given in Ref. 73. For N → ∞ we find that the average of the output numbers, (w_1 + … + w_N)/N, converges to x = a·b.
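The translation into code can be sketched as follows. The explicit decision rule (output the w whose update lands closest to x) matches the minimal-difference prescription above; the values γ = 0.99 and x = 0.25 are our illustrative choices, not taken from the original work.

```python
import random

def dlm(x, n_events, gamma=0.99, seed=1):
    """Deterministic learning machine: the internal state u chases the
    constant input x.  Each event, the two candidate updates are
    u' = gamma*u + (1 - gamma)*w with w = +1 or w = -1, and the machine
    outputs the w whose u' lies closest to x.  Only u_0 is random."""
    rng = random.Random(seed)
    u = rng.uniform(-1, 1)
    events = []
    for _ in range(n_events):
        w = +1 if x >= gamma * u else -1   # equivalent to 2*Theta(x - gamma*u) - 1
        u = gamma * u + (1 - gamma) * w
        events.append(w)
    return events, u

x = 0.25                           # illustrative value of x = a.b
events, u_final = dlm(x, 100_000)
w_bar = sum(events) / len(events)  # converges to x without random numbers
```

After a short transient, u oscillates in a narrow band around x, and the average of the deterministic output sequence reproduces ⟨w⟩ = a·b.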

In conclusion, we designed an event-by-event process which can reproduce the results of the quantum theoretical description of the idealized Stern-Gerlach experiment without making use of any quantum theoretical concepts. The strategy employed by the processor is to minimize the distance between two numbers thereby “learning” the input number. Hence, at least one of the results of quantum theory seems to emerge from an event-based process, a dramatic change in the paradigm of the quantum science community.

4.2 Malus’ law

The important question is whether this event-based approach can also be applied to other experiments which up to now have been described exclusively in terms of wave or quantum theory. To scrutinize this question we consider a basic optics experiment with a linearly birefringent crystal, such as calcite, acting as a polarizer. A beam of linearly polarized monochromatic light impinging on a calcite crystal along a direction not parallel to the optical axis of the crystal is split into two beams travelling in different directions and having orthogonal polarizations. The two beams are referred to as the ordinary and extraordinary beam, respectively. [42] The intensity of the beams is given by Malus’ law, established experimentally in 1810,

    I_o = I sin²(ψ − θ),    I_e = I cos²(ψ − θ),

where I, I_o and I_e are the intensities of the incident, ordinary and extraordinary beam, respectively, ψ denotes the polarization of the incident light and θ specifies the orientation of the crystal. [42] Observations in single-photon experiments show that Malus’ law is also obeyed at the single-photon level.

In the quantum theoretical description of these single-photon experiments, in which the photons are detected one-by-one either in the ordinary beam (represented by a detection event x_n = 0) or in the extraordinary beam (represented by a detection event x_n = 1), it is postulated that the polarizer sends a photon to the extraordinary direction with probability cos²(ψ − θ) and to the ordinary direction with probability sin²(ψ − θ). Hence, quantum theory postulates that the average of the detection events, (x_1 + … + x_N)/N, approaches cos²(ψ − θ) for large N.

Following a procedure similar to that for the Stern-Gerlach experiment, it is obvious that we can construct a simple probabilistic processor employing pseudo-random numbers: generate a uniform random number 0 ≤ r_n < 1 and send out a 1 (0) event if r_n < cos²(ψ − θ) (otherwise), so that after repeating this procedure N times we indeed have (x_1 + … + x_N)/N ≈ cos²(ψ − θ). However, again, by doing this we have accomplished nothing, because Malus’ law has been postulated from the start in the form of the probability cos²(ψ − θ). Moreover, this probabilistic processor has a relatively poor performance [73] and therefore in what follows we design and analyze a much more efficient DLM that generates events according to Malus’ law.

The DLM mimicking the operation of a polarizer has one input channel, two output channels and one internal vector with two real entries. The DLM receives as input a sequence of angles ψ_n for n = 1, …, N and knows about the orientation of the polarizer through the angle θ. Using rotational invariance, we represent these input messages by unit vectors

    v_n = (cos(ψ_n − θ), sin(ψ_n − θ)).
Instead of the random number generator that is part of the probabilistic processor, the DLM has an internal degree of freedom represented by the unit vector . The direction of the initial internal vector is chosen at random. As the DLM receives input data, it updates its internal state. The update rules are defined by


corresponding to the output event and


corresponding to the output event $1$. The parameter $0<\gamma<1$ controls the learning process of the DLM. The $\pm$-sign takes care of the fact that the DLM has to decide between two quadrants. The DLM selects one of the four possible outcomes for $\mathbf{x}_n$ by minimizing the cost function defined by

$C=-\mathbf{x}_n\cdot\mathbf{y}_n$.

Obviously, the cost is small (close to $-1$) if the vectors $\mathbf{x}_n$ and $\mathbf{y}_n$ are close to each other. In conclusion, the DLM generates output events by minimizing the distance between the input vector and its internal vector by means of a simple, deterministic decision process.
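As an illustration, a minimal Python sketch of this DLM (our reconstruction of the update and cost rules described above, not the authors' code) reads:

```python
import math
import random

def dlm_polarizer(angles, psi, gamma=0.99, seed=0):
    """Deterministic learning machine for Malus' law (a sketch).
    Internal state: a unit vector x; each input angle theta is encoded as
    y = (cos(theta - psi), sin(theta - psi)). Of the four candidate updates
    (two output events, two signs) the machine picks the one that minimizes
    the cost C = -x.y, and emits the corresponding 0 or 1 event."""
    rng = random.Random(seed)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    x = (math.cos(phi), math.sin(phi))      # random initial internal vector
    events = []
    for theta in angles:
        y = (math.cos(theta - psi), math.sin(theta - psi))
        candidates = []
        for s in (1.0, -1.0):
            x2 = gamma * x[1]               # event 0: shrink second component
            candidates.append(((s * math.sqrt(1.0 - x2 * x2), x2), 0))
            x1 = gamma * x[0]               # event 1: shrink first component
            candidates.append(((x1, s * math.sqrt(1.0 - x1 * x1)), 1))
        # pick the candidate that minimizes C = -x.y (maximizes the dot product)
        x, out = min(candidates, key=lambda c: -(c[0][0] * y[0] + c[0][1] * y[1]))
        events.append(out)
    return events

# Fixed input with theta - psi = 60 degrees: in the stationary state the
# fraction of 1 events approaches sin^2(60 degrees) = 3/4
events = dlm_polarizer([math.pi / 3] * 100000, 0.0)
freq1 = sum(events) / len(events)
```

Feeding the machine a long sequence of identical messages thus yields event frequencies consistent with Malus' law, without any random sampling of a postulated distribution.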

Fig. 1.: The angle representing the internal vector $\mathbf{x}_n$ of the DLM defined by the update rules above as a function of the number of events $n$. The input events are vectors $\mathbf{y}_n=(\cos(\theta-\psi),\sin(\theta-\psi))$ with fixed $\theta-\psi$. The direction of the initial internal vector $\mathbf{x}_0$ is chosen at random. In this simulation the ratio of the number of 0 events to the number of 1 events is 1/3, which equals $\cos^2(\theta-\psi)/\sin^2(\theta-\psi)$. Data for larger $n$ lie on the decaying line but have been omitted to show the oscillating behavior more clearly. Lines are guides to the eye.

In general, the behavior of the DLM defined by the rules above is difficult to analyze without using a computer. However, for a fixed input vector $\mathbf{y}_n=\mathbf{y}$ for all $n$, the DLM will minimize the cost by rotating its internal vector $\mathbf{x}_n$ towards $\mathbf{y}$, but $\mathbf{x}_n$ will not converge to the input vector and will keep oscillating about $\mathbf{y}$. This is the stationary state of the machine. An example of a simulation is given in Fig. 1. Once the DLM has reached the stationary state, the number of $1$ output events divided by the total number of output events is $\sin^2(\theta-\psi)$, and thus in agreement with Malus' law if we interpret the $1$ output events as corresponding to the extraordinary beam. Note that the details of the approach to the stationary state depend on the initial value of the internal vector $\mathbf{x}_0$, but the properties of the stationary state do not. A detailed stationary-state analysis is given in Ref. 72.

4.3 Single particle detection

In the event-based simulation of the Stern-Gerlach experiment and of the experiment demonstrating Malus' law, the two-valued output events ($0$ or $1$) can be processed by two detectors placed behind the DLM modeling the Stern-Gerlach magnet and the calcite crystal, respectively. It is easily seen that in these two experiments the only operation the detectors have to perform is to count every incoming event. However, real single-particle detectors are often more complex devices with diverse properties. In our event-based simulation approach we model the main characteristics of these devices by rules as simple as possible, with the aim of obtaining results similar to those observed in a laboratory experiment. So far, we have designed two types of detectors: simple particle counters and adaptive threshold devices. [3] The adaptive threshold detector can be employed in the simulation of all single-photon experiments we have considered so far [3] but is absolutely essential in the simulation of, for example, the two-beam single-photon experiment (see Sect. 5.1).

The efficiency, defined as the ratio of detected to emitted particles, of our model detectors is measured in an experiment with one single-particle point source placed far away from the detector. If the detector is a simple particle counter then the efficiency is 100%, if it is an adaptive threshold detector then the efficiency is nearly 100%. Since no absorption effects, dead times, dark counts, timing jitter or other effects causing particle miscounts are simulated, these model detectors are highly idealized versions of real single-photon detectors.

Evidently, the efficiency of a detector plays an important role in the overall detection efficiency of an experiment, but it is not the only determining factor. The experimental configuration in which the detector is used, both in the laboratory experiment and in the event-based simulation approach, also plays an important role. Although the adaptive threshold detectors are ideal and have a detection efficiency of nearly 100%, the overall detection efficiency can be much less than 100%, depending on the experimental configuration. For example, using adaptive threshold detectors in a Mach-Zehnder interferometry experiment leads to an overall detection efficiency of nearly 100% (see Sect. 5.2.1), while using the same detectors in a single-photon two-beam experiment (see Sect. 5.1.1) leads to an overall detection efficiency of about 15%. [3, 16] For the simple particle counters the configuration has no influence on the overall detection efficiency. Apart from the configuration, the data processing procedure applied after the data has been collected may also influence the final detection efficiency. An example is the post-selection procedure with a time-coincidence window which is employed to group photons, detected at two different stations, into pairs. [25] Even if in the event-based simulation approach simple particle counters with a 100% detection efficiency are used, so that all emitted photons are accounted for during the data collection process, the final detection efficiency is less than 100% because some detection events are omitted in the post-selection procedure using a time-coincidence window.

In conclusion, even if ideal detectors with a detection efficiency of 100% were commercially available, the overall detection efficiency in a single-particle experiment could still be much less than 100%, depending on (i) the experimental configuration in which the detectors are employed and (ii) the data analysis procedure that is used after all data has been collected.
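The effect of the post-selection procedure on the overall efficiency can be illustrated with a toy calculation (a hypothetical example, not the model of Ref. 25): even when every emitted particle is detected, pairing detection events at two stations by a time-coincidence window discards a fraction of the data.

```python
import random

def coincidence_fraction(n_pairs, jitter, window, seed=0):
    """Toy post-selection model: every pair is detected, but each detection
    time acquires an independent uniform jitter in [0, jitter); only pairs
    with a time difference smaller than the coincidence window are kept."""
    rng = random.Random(seed)
    kept = 0
    for _ in range(n_pairs):
        t1 = rng.uniform(0.0, jitter)
        t2 = rng.uniform(0.0, jitter)
        if abs(t1 - t2) < window:
            kept += 1
    return kept / n_pairs

# window = jitter/4: the expected kept fraction is 2u - u^2 with u = 1/4, i.e. 0.4375
fraction = coincidence_fraction(200000, 1.0, 0.25)
```

In this toy model, all particles are detected, yet only about 44% of the pairs survive the time-coincidence post-selection, illustrating point (ii) above.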

5 Single particle interference

The particle-like behavior of photons has been shown in an experiment composed of a single 50/50 beam splitter (BS), of which only one input port is used, and a source emitting single photons and pairs of photons. [6] The wave mechanical character of the collection of photons has been demonstrated in single-particle interference experiments such as the single-photon two-beam experiment [15] (see Sect. 5.1), an experiment which shows, with minimal equipment, interference in its purest form (without diffraction), and the single-photon Mach-Zehnder interferometer (MZI) experiment [6] (see Sect. 5.2).

The three experiments have in common that, if one analyzes the data after collecting detection events, long after the experiment has finished, the averages of the detection events agree with the results obtained from wave theory, that is with the classical theory of electrodynamics (Maxwell theory). In the first experiment one obtains a constant intensity of 0.5 at both detectors placed at the output ports of the BS, in the other two experiments one obtains an interference pattern. However, since the source is not emitting waves but so-called single photons [6, 15] the question arises how to interpret the output which seems to show particle or wave character depending on the circumstances of the experiment. This question is not limited to photons. Already in 1924, de Broglie introduced the idea that also matter can exhibit wave-like properties. [74]

To resolve the apparent behavioral contradiction, quantum theory introduces the concept of particle-wave duality [1]. As a result, these single-particle experiments are often considered to be quantum experiments. However, the pictorial description using concepts from quantum theory, when applied to individual detection events (not to the averages), leads to conclusions that defy common sense: the photon (electron, neutron, atom, molecule, ...) seems to change its representation from a particle to a wave while traveling from the source to the detector in the single-photon interference experiments.

In 1978, Wheeler proposed a gedanken experiment, [75] a variation on Young's double-slit experiment, in which the decision to observe wave or particle behavior is postponed until the photon has passed the slits. An experimental realization of Wheeler's delayed-choice experiment with single photons traveling in an open or closed configuration of an MZI has been reported in Refs. 9, 76. The outcome, that is the average result of many detection events, is in agreement with wave theory (Maxwell or quantum theory). However, the pictorial description using concepts of quantum theory to explain the experimental facts [9] is even stranger than in the above-mentioned experiments: the decision to observe particle or wave behavior influences the behavior of the photon in the past and changes the representation of the photon from a particle to a wave.

A more sensible description of the observation of individual detection events, and of the interference pattern that emerges after many single detection events have been collected in single-particle interference experiments, can be given in terms of the event-based simulation approach. This finding is not in contradiction with Feynman's statement that electron (single particle) diffraction by a double-slit structure is “impossible, absolutely impossible, to explain in any classical way, and has in it the heart of quantum mechanics” [33]. Reading “any classical way” as “any classical Hamiltonian mechanics way”, Feynman's statement is difficult to dispute. However, taking a broader view by allowing for dynamical systems that are outside the realm of classical Hamiltonian dynamics, it becomes possible to model the gradual appearance of interference patterns through the event-by-event simulation method.

5.1 Two-beam experiment

Fig. 2.: Schematic diagram of a two-beam experiment with single-particle sources $S_1$ and $S_2$, separated by a center-to-center distance $d$. In a first experiment, which can be seen as a variant of Young's double-slit experiment, single particles leave the sources $S_1$ and $S_2$ one-by-one, at positions drawn randomly from a uniform distribution over the width of the source, and travel in the direction given by a uniform pseudo-random angle $\beta$. In a second experiment, a movable mask is placed behind the sources which can block either $S_1$ or $S_2$. The sources $S_1$ and $S_2$ alternately emit particles one-by-one, until a total of $N$ particles has been emitted. In both experiments, particles are emitted one-by-one either from $S_1$ or from $S_2$, and at any time there is only one particle traveling from source to detector. The particles are recorded by detectors positioned on a semi-circle with radius $R$. The angular position of a detector is denoted by $\theta$.

We consider the experiment sketched in Fig. 2. Single particles coming from two coherent beams gradually build up an interference pattern when the particles arrive one-by-one at a detector screen. This two-beam experiment can be viewed as a simplification of Young's double-slit experiment in which the slits are regarded as the virtual sources $S_1$ and $S_2$ (see Ref. 42) and can be used to perform Feynman's thought experiment in which both slits are open, or one is open and the other one closed. In the event-based model of this experiment, particles are created one at a time at one of the sources and are detected by one of the detectors forming the screen. We assume that all these detectors are identical and cannot communicate with each other. We also do not allow for direct communication between the particles. This implies that this event-by-event model is locally causal by construction.

Then, if it is indeed true that individual particles build up the interference pattern one-by-one, just looking at Fig. 2 leads to the logically inescapable conclusion that the interference pattern can only be due to the internal operation of the detector [77]. Detectors which simply count the incoming particles are not sufficient to explain the appearance of an interference pattern, and apart from the detectors there is nothing else that can cause the interference pattern to appear. Making use of the statistical character of quantum theory, one could assume that if each detector were replaced by a fresh one as soon as it has detected a particle, combining the detection events of all these different detectors would yield interference patterns similar to those recorded with a single detector detecting all the particles. However, since there is no experimental evidence confirming this assumption, and since our event-based approach is based on laboratory experimental setups and observations, we do not consider this to be a realistic option. Thus, logic dictates that a minimal event-based model for the two-beam experiment requires an algorithm for the detector that does a little more than just counting particles.

5.1.1 Event-based model

In what follows we specify the event-by-event model for the single-photon two-beam experiment (see Fig. 2) in sufficient detail such that the reader who is interested can reproduce the simulation results (a Mathematica implementation of a slightly more sophisticated algorithm [16] can be downloaded from the Wolfram Demonstration Project web site [78]).

  • Source and particles: In the first experiment described in Fig. 2, photons leave the sources one-by-one, at positions drawn randomly from a uniform distribution over the width of the source. In the second experiment the sources alternately emit photons one-by-one until a total of $N$ photons has been emitted, each source emitting the same number of photons in total. The photons are regarded as messengers, traveling in the direction specified by a uniform pseudo-random angle $\beta$. Each messenger carries a message

$\mathbf{u}_k=(\cos 2\pi f t_k,\ \sin 2\pi f t_k)$,

    represented by a harmonic oscillator which vibrates with frequency $f$ (representing the “color” of the light). The internal oscillator operates as a clock to encode the time of flight $t_k$, which is set to zero when a messenger is created, thereby modeling the coherence of the two single-particle beams.

    This pictorial model of a “photon” was used by Feynman to explain quantum electrodynamics. [79] The event-based approach goes one step further in that it specifies in detail, in terms of a mechanical procedure, how the “amplitudes” which appear in the quantum formalism get added together. In Feynman’s path integral formulation of light propagation, which is essentially quantum mechanical, the amplitude was obtained by summing over all possible paths. [79]

    The time of flight of the particles depends on the source-detector distance. Here we discuss, as an example, the experimental setup with a semi-circular detection screen (see Fig. 2), but in principle any other geometry for the detection screen can be considered. A messenger leaving its source under an angle $\beta$ will hit the detector screen of radius $R$ at a position determined by an angle $\theta$ that follows from elementary geometry. The time of flight is then given by $t=s/v$, where $s$ is the distance traveled and $v$ the velocity of the messenger. The messages together with the explicit expression for the time of flight are the only input to the event-based algorithm.

  • Detector: Here we describe the model for one of the many identical detectors building up the detection screen. Microscopically, the detection of a particle involves very intricate dynamical processes [66]. In its simplest form, a light detector consists of a material that can be ionized by light. This signal is then amplified, usually electronically, or in the case of a photographic plate by chemical processes. In Maxwell's theory, the interaction between the incident electric field $\mathbf{E}$ and the material takes the form $\mathbf{P}(\omega)=\chi(\omega)\mathbf{E}(\omega)$, where $\mathbf{P}$ is the polarization vector of the material and $\chi(\omega)$ its susceptibility [42]. Assuming a linear response, for a monochromatic wave with frequency $\omega$, it is clear that in the time domain this relation expresses the fact that the material retains some memory about the incident field, the memory kernel $\chi(t)$ being characteristic for the material used.

    In line with the idea that an event-based approach should use the simplest rules possible, we reason as follows. In the event-based model, the $k$th message $\mathbf{u}_k$ is taken to represent the elementary unit of electric field. Likewise, the electric polarization of the material is represented by the vector $\mathbf{p}_k$. Upon receipt of the $k$th message this vector is updated according to the rule

$\mathbf{p}_k=\gamma\mathbf{p}_{k-1}+(1-\gamma)\mathbf{u}_k$,

    where $0<\gamma<1$. Obviously, if $\gamma>0$, a message processor that operates according to this update rule has memory, as required by Maxwell's theory. It is not difficult to prove that as $\gamma\rightarrow 1^-$, the internal vector $\mathbf{p}_k$ converges to the average of the time series $\mathbf{u}_1,\mathbf{u}_2,\ldots$ [16, 3]. By reducing $\gamma$, the number of messages needed to adapt decreases, but the accuracy of the DLM also decreases. In the limit $\gamma=0$, the DLM learns nothing; it simply echoes the last message that it received. [8, 7] The parameter $\gamma$ controls the precision with which the DLM learns the average of the sequence of messages and also controls the pace at which new messages affect the internal state of the machine [7]. Moreover, in the continuum limit (meaning many events per unit of time), this update rule translates into the constitutive equation of the Debye model of a dielectric [16, 80], a model used in many applications of Maxwell's theory [81].

    After updating the vector $\mathbf{p}_k$, the DLM uses the information stored in $\mathbf{p}_k$ to decide whether or not to generate a click. As a highly simplified model for the bistable character of the real photodetector or photographic plate, we let the machine generate a binary output signal $S_k$ according to

$S_k=\Theta(\mathbf{p}_k^2-r_k)$,

    where $\Theta(\cdot)$ is the unit step function and $0\le r_k<1$ is a uniform pseudo-random number. Note that the use of pseudo-random numbers is convenient but not essential [3]. Since in experiment it cannot be known whether a photon has gone undetected, we discard the information about the $S_k=0$ detection events and define the total detector count as $N_1=\sum_{k=1}^{N}S_k$, where $N$ is the number of messages received and $N_1$ is the number of clicks (ones) generated by the processor.

    The efficiency of the detector model is determined by simulating an experiment that measures the detector efficiency, which for a single-photon detector is defined as the overall probability of registering a count if a photon arrives at the detector [82]. In such an experiment a point source emitting single particles is placed far away from a single detector. As all particles that reach the detector have the same time of flight (to a good approximation), all the particles that arrive at the detector will carry the same message, which encodes the time of flight. As a result (see the update rule above), $\mathbf{p}_k$ rapidly converges to the vector corresponding to this message, so that the detector clicks every time a photon arrives. Thus, the detection efficiency, as defined for real detectors [82], is very close to 100% for our detector model. Hence, the model is a highly simplified and idealized version of a single-photon detector. However, although the detection efficiency of the detector itself may be very close to 100%, the overall detection efficiency, which is the ratio of detected to emitted photons in the simulation of an experiment, can be much less than one. This ratio depends on the experimental setup.

  • Simulation procedure: Each of the detectors of the circular screen has a predefined spatial window within which it accepts messages. As a messenger hits a detector, this detector updates its internal state $\mathbf{p}_k$ using the message $\mathbf{u}_k$ (the internal states of all other detectors do not change) and then generates the event $S_k$. In the case $S_k=1$ ($S_k=0$), the total count of the particular detector that was hit by the $k$th messenger is (is not) incremented by one, and the messenger itself is destroyed. Only after the messenger has been destroyed is the source allowed to send a new messenger. This rule ensures that the whole simulation complies with Einstein's criterion of local causality. This process of creating and destroying messengers is repeated many times, building up the interference pattern event by event. Note that the number of emitted photons is larger than the sum of the number of clicks generated by all the detectors forming the detection screen, although no photons are lost during their travel from source to detector.
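The complete event-by-event loop can be condensed into a short Python sketch (our illustration, with assumed parameter values such as source separation $d=5\lambda$ and screen radius $R=1000\lambda$, which are not those of the actual simulations):

```python
import math
import random

def two_beam(n_particles=200000, gamma=0.99, wavelength=1.0, d=5.0,
             radius=1000.0, n_detectors=181, seed=0):
    """Event-by-event two-beam simulation (assumed parameter values).
    Each detector keeps an internal vector p, updated as
    p <- gamma*p + (1-gamma)*u, and clicks when |p|^2 exceeds a fresh
    uniform random number."""
    rng = random.Random(seed)
    freq = 1.0 / wavelength                      # velocity v = 1, so f = v/lambda
    p = [[0.0, 0.0] for _ in range(n_detectors)]
    counts = [0] * n_detectors
    for _ in range(n_particles):
        y0 = 0.5 * d if rng.random() < 0.5 else -0.5 * d   # source S1 or S2
        beta = rng.uniform(-0.5 * math.pi, 0.5 * math.pi)  # emission direction
        # distance from (0, y0) along direction beta to the circular screen
        b = y0 * math.sin(beta)
        s = -b + math.sqrt(b * b + radius * radius - y0 * y0)
        theta = math.atan2(y0 + s * math.sin(beta), s * math.cos(beta))
        i = int(round((theta + 0.5 * math.pi) / (math.pi / 180.0)))
        phase = 2.0 * math.pi * freq * s         # message encodes time of flight
        u = (math.cos(phase), math.sin(phase))
        p[i][0] = gamma * p[i][0] + (1.0 - gamma) * u[0]   # detector learns
        p[i][1] = gamma * p[i][1] + (1.0 - gamma) * u[1]
        if p[i][0] ** 2 + p[i][1] ** 2 > rng.random():     # threshold click rule
            counts[i] += 1
    return counts

counts = two_beam()
```

With these assumed parameters, `counts` shows a two-beam interference pattern: a large maximum at $\theta=0$ (detector 90) and a deep minimum near $\sin\theta=\lambda/2d$ (around detector 96), and the total number of clicks is smaller than the number of emitted messengers, as stated above.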

5.1.2 Simulation results

In Fig. 3(a), we present simulation results for the first experiment for a representative case for which the analytical solution from wave theory is known. Namely, in the Fraunhofer regime, the analytical expression for the light intensity at the detector on a circular screen is given by [42]

$I(\theta)=A\cos^2\left(\frac{qd\sin\theta}{2}\right)\mathrm{sinc}^2\left(\frac{qa\sin\theta}{2}\right)$,

where $A$ is a constant, $a$ and $d$ denote the width of and the distance between the sources, $q=2\pi f/c$ denotes the wavenumber with $f$ and $c$ being the frequency and velocity of the light, respectively, and $\theta$ denotes the angular position of the detector on the circular screen, see Fig. 2. Note that this expression is only used for comparison with the simulation data and is by no means input to the model. From Fig. 3(a) it is clear that the event-based model reproduces the results of wave theory, and this without recourse to the solution of a wave equation.

As the detection efficiency of the event-based detector model is very close to 100%, the interference patterns generated by the event-based model cannot be attributed to inefficient detectors. It is therefore of interest to take a look at the ratio of detected to emitted photons, the overall detection efficiency, and to compare the detection counts observed in the event-by-event simulation of the two-beam interference experiment with those observed in a real experiment with single photons [15]. In the simulation that yields the results of Fig. 3(a), each of the 181 detectors making up the detection area is hit by a large number of photons, while the total number of clicks generated by the detectors is considerably smaller. The ratio of the total number of detected to emitted photons is of the order of 0.16, two orders of magnitude larger than the ratio observed in single-photon interference experiments [15].

In Fig. 3(b), we show simulation results for the experiment in which first only source $S_1$ emits photons (downward triangles) while $S_2$ is blocked by the mask. Then, in a new experiment (all detectors are reset), $S_2$ emits photons while $S_1$ is blocked (upward triangles). The sum of the two resulting detection curves is given by the curve with open squares. It is clear that this curve is completely different from the curve depicted in Fig. 3(a), as is also described in Feynman's thought experiment (see Sect. 2.1). Also shown in Fig. 3(b) are the simulation results for the experiment in which first the source $S_1$ emits a group of particles one-by-one and then the source $S_2$ emits its particles one-by-one (no resetting of the detectors). The resulting detection curve is drawn with closed circles. For some values of $\theta$ there is a difference between the curves with open squares and closed circles. This difference is due to the memory effect which is present in the detector model. Obviously this difference depends on $\gamma$ and on the detector model that is used. For more complicated detector models than the simple adaptive threshold detector, this small difference disappears (results not shown).

Fig. 3.: Detector counts (markers) as a function of the detector position $\theta$ as obtained from the event-based simulation of the two-beam interference experiments described in Fig. 2. On average, each of the 181 detectors, positioned on the semi-circular screen with an angular spacing of one degree in the interval $[-90^\circ,90^\circ]$, receives the same number of particles; $v$ and $f$ denote the velocity and the frequency of the particles. (a): first experiment, in which sources $S_1$ and $S_2$ in random order emit in total $N$ particles one-by-one. This experiment resembles Young's (and Feynman's) two-slit experiment. (b): first experiment in which only source $S_1$ or $S_2$ emits particles one-by-one (downward and upward triangles, respectively). The open squares are the sum of the detector counts of the two experiments with one source emitting and the other one blocked. This experiment resembles Feynman's two-slit experiment with first one slit blocked and then the other slit blocked. The closed circles are the result of the second experiment, in which first $S_1$ and then $S_2$ emits a group of particles one-by-one. (c) and (d): second experiment for two different group sizes $N_p$. The solid line in (a), (c) and (d) is a least-square fit of the simulation data of (a) to the prediction of wave theory, with only one fitting parameter.

Figs. 3(c),(d) depict simulation results for the experiment in which sources $S_1$ and $S_2$ alternately emit groups of $N_p$ particles one-by-one, for two different values of $N_p$. It is seen that, except for very large values of $N_p$, the interference pattern is the same as the one shown in Fig. 3(a). Nevertheless, even for these large values of $N_p$ interference can still be observed. This is a result of the memory effects built into the detector model. However, for any value of $N_p$, a simple quantum theoretical calculation would predict no interference pattern but an intensity pattern which is the sum of two single-slit patterns, as the particles pass through one or the other slit, and never through both. Hence, for this type of experiment the predictions of quantum theory and of the event-based model differ.

Although we are not aware of any experiment that precisely tests the scenario described above, one experimental study in which only one slit was available to each photon [83] produced intriguing results. In that study, an opaque barrier, extending all the way from the laser source to the obstacle between the two slits, was used to make sure that each photon had only one or the other slit available to it. The interference pattern observed was nevertheless essentially unchanged despite the presence of the barrier. We are, however, not aware of any follow-up work on that study.

5.1.3 Why is interference produced without solving a wave problem?

As mentioned earlier, using simple particle counters as detectors would not result in an interference pattern. Essential for producing an interference pattern is accounting for the information about the differences in the times of flight (or phase differences) of the particles, which encode the distance the particles travelled from one of the two sources to one of the detectors constituting the circular detection screen. Simple particle counters do nothing with the information encoded in the messages carried by the particles and produce a click for each incoming particle. Since in the single-photon two-beam experiment the detectors are the only apparatuses available that can process these phase differences (there are no other apparatuses present except for the source), we necessarily need to employ an algorithm for the detector that exploits this information in order to produce the clicks that gradually build up the interference pattern. A collection of about two hundred independent adaptive threshold detectors, defined by the update rule and threshold rule given above and each with a detection efficiency of nearly 100%, is capable of doing this. As pointed out earlier, the reason why this is possible in this particular experiment is that not every particle that impinges on the detector yields a click.

5.2 Mach-Zehnder interferometer experiment

5.2.1 Event-based model

The DLM network that simulates a single-photon MZI experiment (see Fig. 4 (left)) consists of a source, two identical BSs, two phase shifters and two detectors. The network of processing units is a one-to-one image of the experimental setup. [6] Note that the two mirrors in the MZI simply bend the paths of the photons without introducing a phase change or loss of particles, and therefore they do not need to be considered in the event-based simulation network. In what follows we specify the processing units in sufficient detail such that the reader who is interested can reproduce the simulation results. We require that the processing units for identical optical components should be reusable within the same and within different experiments. Demonstration programs, including source codes, are available for download [84, 85].

  • Source and particles: In a pictorial description of the experiment depicted in Fig. 4 (left), the photons, leaving the source one-by-one, can be regarded as particles playing the role of messengers. Each messenger carries a message

$\mathbf{y}_{k,\ell}=(\cos 2\pi f t_k,\ \sin 2\pi f t_k)$,

    where $f$ denotes the frequency of the light source and $t_k$ the time that the particle needs to travel a given path. The subscript $k$ numbers the consecutive messengers and $\ell$ labels the channel of the BS at which the messenger arrives (see below). Note that in this experiment no explicit information about distances and frequencies is required, since we can always work with relative phases.

    When a messenger is created, its internal clock time is set to zero ($t_k=0$) and, since the source is connected to input channel 0 of the first BS, the messenger gets the label $\ell=0$ (see Fig. 4 (left)).

    Fig. 4.: Left: Schematic diagram of a Mach-Zehnder interferometer (MZI) with a single-photon source $S$. The MZI consists of two beam splitters, BS1 and BS2, two phase shifters $\phi_1$ and $\phi_2$, and two mirrors. $N_0$ ($N_1$) counts the number of events in output channel 0 (1) of BS1, and similarly for BS2. Dividing these counts by the total count yields the relative frequency of finding a photon in the corresponding arm of the interferometer. Since photon detectors operate by absorbing photons, in a real laboratory experiment only the counts behind BS2 can be measured, by detectors $D_0$ and $D_1$, respectively. Right: Simulation results for the normalized detector counts (markers) as a function of the phase difference $\phi=\phi_1-\phi_2$. Input channel 0 receives messages with probability one; input channel 1 receives no events. Each data point represents 10000 events; after each set of 10000 events, the phase difference is increased. The different markers distinguish the normalized counts in the output channels of BS1 and BS2. Lines represent the results of quantum theory [86].
  • Beam splitter (BS): A BS is an optical component that partially transmits and partially reflects an incident light beam. Dielectric plate BSs are often used as 50/50 BSs. From classical electrodynamics we know that if an electric field is applied to a dielectric material, the material becomes polarized. [42] Assuming a linear response, the polarization vector of the material is given by $\mathbf{P}(\omega)=\chi(\omega)\mathbf{E}(\omega)$ for a monochromatic wave with frequency $\omega$. In the time domain, this relation expresses the fact that the material retains some memory about the incident field, the memory kernel $\chi(t)$ being characteristic for the material used. We use this kind of memory effect in our algorithm to model the BS.

    A BS has two input and two output channels labeled by 0 and 1 (see Fig. 4 (left)). Note that in the case of the MZI experiment, only entrance port 0 of beam splitter BS1 is used. In the event-based model, the BS has two internal registers $\mathbf{Y}_{0,k}$ and $\mathbf{Y}_{1,k}$ (one for each input channel) and an internal vector $\mathbf{x}_k=(x_{0,k},x_{1,k})$ with the additional constraints that $x_{\ell,k}\ge 0$ for $\ell=0,1$ and that $x_{0,k}+x_{1,k}=1$. As we only have two input channels, the latter constraint can be used to recover $x_{1,k}$ from the value of $x_{0,k}$. We prefer to work with internal vectors that have as many elements as there are input channels. These three two-dimensional vectors $\mathbf{Y}_{0,k}$, $\mathbf{Y}_{1,k}$ and $\mathbf{x}_k$ are labeled by the message number $k$ because their content is updated every time the BS receives a message. Before the simulation starts we set $\mathbf{x}_0=(r,1-r)$, where $r$ is a uniform pseudo-random number. In a similar way we use pseudo-random numbers to initialize $\mathbf{Y}_{0,0}$ and $\mathbf{Y}_{1,0}$.

    When the $k$th messenger carrying the message $\mathbf{y}_{k,\ell}$ arrives at entrance port $\ell=0$ or $\ell=1$ of the BS, the BS first stores the message in the corresponding register $\mathbf{Y}_{\ell,k}$ and updates its internal vector according to the rule

$\mathbf{x}_k=\gamma\mathbf{x}_{k-1}+(1-\gamma)\mathbf{e}_k$,

    where $0<\gamma<1$ is a parameter that controls the learning process and $\mathbf{e}_k=(1,0)$ ($\mathbf{e}_k=(0,1)$) if the $k$th event occurred on channel 0 (1). By construction $x_{\ell,k}\ge 0$ for $\ell=0,1$ and $x_{0,k}+x_{1,k}=1$. Hence the update rule preserves the constraints on the internal vector. Obviously, these constraints are necessary if we want to interpret $x_{\ell,k}$ as (an estimate of) the frequency for the occurrence of an event of type $\ell$. Note that the BS stores information about the last message only. The information carried by earlier messages is overwritten by updating the internal registers. From this update rule, one could say that the internal vector (corresponding to the material polarization $\mathbf{P}$) is the response of the BS to the incoming messages (photons), represented by the vectors $\mathbf{y}_{k,\ell}$ (corresponding to the elementary unit of electric field $\mathbf{E}$). Therefore the BS “learns”, so to speak, from the information carried by the photons. The characteristics of the learning process depend on the parameter $\gamma$ (corresponding to the response function $\chi$).

    Next, in case of a 50/50 BS, the BS uses the six numbers stored in $\mathbf{Y}_{0,n}$, $\mathbf{Y}_{1,n}$ and $\mathbf{x}_n$ to calculate four numbers $h_{0,n}$, $h_{1,n}$, $h_{2,n}$, and $h_{3,n}$. These four real-valued numbers can be considered to represent the real and imaginary parts of two complex numbers $h_{0,n} + i h_{1,n}$ and $h_{2,n} + i h_{3,n}$ which are obtained by the following matrix-vector multiplication

$\begin{pmatrix} h_{0,n} + i h_{1,n} \\ h_{2,n} + i h_{3,n} \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix} \begin{pmatrix} \sqrt{x_{0,n}}\, Y_{0,n} \\ \sqrt{x_{1,n}}\, Y_{1,n} \end{pmatrix},$

    where $Y_{j,n}$ denotes the content of register $j$ viewed as a complex number. Identifying $\sqrt{x_{0,n}}\, Y_{0,n}$ with $a_0$ and $\sqrt{x_{1,n}}\, Y_{1,n}$ with $a_1$, it is clear that the computation of the four numbers $h_{i,n}$ for $i = 0, \dots, 3$ plays the role of the matrix-vector multiplication in the quantum theoretical description of a beam splitter

$\begin{pmatrix} b_0 \\ b_1 \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \end{pmatrix},$

    where $(a_0, a_1)$ and $(b_0, b_1)$ denote the input and output amplitudes, respectively. Note however that the DLM for the BS computes the four numbers $h_{i,n}$ for $i = 0, \dots, 3$ for each incoming event, thereby always updating $\mathbf{x}_n$ and $\mathbf{Y}_{0,n}$ or $\mathbf{Y}_{1,n}$. Hence $\mathbf{x}_n$ and the $\mathbf{Y}_{j,n}$, and thus also $h_{0,n} + i h_{1,n}$ and $h_{2,n} + i h_{3,n}$, are constructed event-by-event, and only under certain conditions ($\gamma \rightarrow 1^-$, sufficiently large number of input events $N$, stationary sequence of input events) do they correspond to their quantum theoretical counterparts $b_0$ and $b_1$.

    In a final step the BS uses $h_{i,n}$ for $i = 0, \dots, 3$ to create an output event. To this end it generates a uniform random number $0 \le r_n < 1$. If $h_{0,n}^2 + h_{1,n}^2 > r_n$, the BS sends the message

$\mathbf{w}_n = \left( h_{0,n}, h_{1,n} \right) / \sqrt{h_{0,n}^2 + h_{1,n}^2}$

    through output channel 0. Otherwise it sends the message

$\mathbf{w}_n = \left( h_{2,n}, h_{3,n} \right) / \sqrt{h_{2,n}^2 + h_{3,n}^2}$

    through output channel 1.

  • Phase shifters: These devices perform a plane rotation on the vectors (messages) carried by the particles. As a result the phase of the particles is changed by $\phi_0$ or $\phi_1$, depending on the route followed.

  • Detector: Detector $D_0$ ($D_1$) registers the output events at channel 0 (1). The detectors are ideal particle counters, meaning that they produce a click for each incoming particle. Hence, we assume that the detectors have 100% detection efficiency. Note that adaptive threshold detectors (see Sect. 5.1.1) can be used equally well. [3]

  • Simulation procedure: When a messenger is created we wait until its message has been processed by one of the detectors before creating the next messenger. This ensures that there can be no direct communication between the messengers and that our simulation model (trivially) satisfies Einstein’s criterion of local causality. We assume that no messengers are lost. Since the detectors are ideal particle counters, the number of clicks generated by the detectors is equal to the number of messengers created by the source. For a fixed phase difference $\phi = \phi_0 - \phi_1$, a simulation run of $N$ events generates the data set $\{ w_n \mid n = 1, \dots, N \}$. Here $w_n = 0, 1$ indicates which detector fired ($D_0$ or $D_1$). Given the data set, we can easily compute the number of 0 (1) output events $N_0$ ($N_1$).
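The simulation procedure just described can be sketched end-to-end in a few dozen lines of Python. The sketch below is a minimal illustration, not the authors' original code: the class layout, the value $\gamma = 0.99$, the random seed, and the use of NumPy are our assumptions, and the 50/50 beam-splitter transformation follows the standard quantum-optics convention.

```python
import numpy as np

GAMMA = 0.99                     # DLM learning parameter (assumed value)
N_EVENTS = 100_000               # messengers created by the source
PHI0, PHI1 = 2 * np.pi / 3, 0.0  # phase shifts in arms 0 and 1

rng = np.random.default_rng(7)

class BeamSplitterDLM:
    """Minimal DLM model of a 50/50 beam splitter (illustrative sketch)."""
    def __init__(self):
        x0 = rng.random()
        self.x = np.array([x0, 1.0 - x0])      # input-frequency estimate
        self.psi = [2 * np.pi * rng.random(), 2 * np.pi * rng.random()]

    def process(self, ch, psi):
        self.psi[ch] = psi                     # store the last message
        v = np.array([1.0, 0.0]) if ch == 0 else np.array([0.0, 1.0])
        self.x = GAMMA * self.x + (1.0 - GAMMA) * v
        a = np.sqrt(self.x) * np.exp(1j * np.array(self.psi))
        b0 = (a[0] + 1j * a[1]) / np.sqrt(2.0)  # 50/50 BS transformation
        b1 = (1j * a[0] + a[1]) / np.sqrt(2.0)
        p0 = abs(b0) ** 2 / (abs(b0) ** 2 + abs(b1) ** 2)
        if rng.random() < p0:
            return 0, np.angle(b0)
        return 1, np.angle(b1)

bs1, bs2 = BeamSplitterDLM(), BeamSplitterDLM()
counts = [0, 0]                                # clicks at D0 and D1
psi_src = 2 * np.pi * rng.random()             # same message for every event
for _ in range(N_EVENTS):
    ch, psi = bs1.process(0, psi_src)          # every event enters port 0
    psi += PHI0 if ch == 0 else PHI1           # phase shifter in arm ch
    ch, _ = bs2.process(ch, psi)
    counts[ch] += 1                            # ideal detectors

frac0 = counts[0] / N_EVENTS
# with this convention the quantum prediction for output 0 is
# sin^2((PHI0 - PHI1)/2) = 0.75
```

Each messenger is processed to completion before the next one is created, so the model involves no communication between messengers; nevertheless the detector counts build up the interference pattern event by event.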

5.2.2 Simulation results

In Fig. 4 (right), we present a few simulation results for the MZI and compare them to the quantum theoretical result. According to quantum theory, the amplitudes $(b_0, b_1)$ in the output modes 0 and 1 of the MZI are given by [87]

$\begin{pmatrix} b_0 \\ b_1 \end{pmatrix} = \frac{1}{2} \begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix} \begin{pmatrix} e^{i\phi_0} & 0 \\ 0 & e^{i\phi_1} \end{pmatrix} \begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \end{pmatrix},$

where $(a_0, a_1)$ denote the input amplitudes. For the particular choice $a_0 = 1$ and $a_1 = 0$, in which case there are no particles entering BS1 via channel 1, it follows that

$|b_0|^2 = \sin^2 \frac{\phi_0 - \phi_1}{2}, \qquad |b_1|^2 = \cos^2 \frac{\phi_0 - \phi_1}{2}.$

For the results presented in Fig. 4 (right) we assume that input channel 0 receives all events and that input channel 1 receives no events. This corresponds to $a_0 = 1$ and $a_1 = 0$. We use a uniform random number to determine the message carried by the input events; this same random number is used to generate all input events. The data points are the simulation results for the normalized intensities $N_0/N$ and $N_1/N$ as a function of the phase difference $\phi = \phi_0 - \phi_1$. Note that in an experimental setting it is impossible to measure both output intensities with one and the same photon because photon detectors operate by absorbing photons. In the event-based simulation there is no such problem. From Fig. 4 (right) it is clear that the event-based processing by the DLM network reproduces the probability distribution of quantum theory, with $N_0/N$ ($N_1/N$) corresponding to $|b_0|^2$ ($|b_1|^2$).

5.2.3 Why is interference produced without solving a wave problem?

We consider BS2 of the MZI depicted in Fig. 4 (left), the beam splitter at which, in a wave picture, the two beams join to produce interference. The DLM simulating a BS requires two pieces of information to send out particles such that their distribution matches the wave-mechanical description of the BS. First, it needs an estimate of the ratio of particle currents in the input channels 0 and 1 (paths 0 and 1 of the MZI). Second, it needs information about the time of flight (phase difference) along the two different paths of the MZI. The first piece of information is provided by the internal vector $\mathbf{x}_n$. Through the update rule, for a stationary sequence of input events, $x_{0,n}$ and $x_{1,n}$ converge to the average number of events on input channels 0 and 1, respectively. Thus, the ratio of the particles (corresponding to the intensities of the waves) in the two input beams is encoded in the vector $\mathbf{x}_n$. Note that this information is accurate only if the sequence of input events is stationary. After one particle has arrived at port 0 and another one has arrived at port 1, the second piece of information is available in the registers $\mathbf{Y}_{0,n}$ and $\mathbf{Y}_{1,n}$. This information plays the role of the phase of the waves in the two input beams. Hence, all the information (intensity and phase) is available to compute the probability for sending out particles. This is done by calculating the numbers $h_{i,n}$ for $i = 0, \dots, 3$ which, in the stationary state, are identical to the wave amplitudes obtained from the wave theory of a beam splitter. [42]
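This can be made explicit for BS2 in a short calculation (an illustrative sketch, using the standard 50/50 beam-splitter convention and all input entering port 0 of the MZI). In the stationary state the internal vector converges to $(1/2, 1/2)$ and the register phases differ by the interferometer phase difference plus the $\pi/2$ picked up at BS1, so

```latex
% Stationary limit of the DLM at BS2 (illustrative sketch):
%   x_n -> (1/2, 1/2),  psi_{0,n} -> phi_0,  psi_{1,n} -> phi_1 + pi/2
\begin{aligned}
h_{0,n} + i h_{1,n}
  &= \frac{1}{\sqrt{2}}\left(\sqrt{x_{0,n}}\, e^{i\psi_{0,n}}
     + i \sqrt{x_{1,n}}\, e^{i\psi_{1,n}}\right)
  \;\longrightarrow\; \frac{1}{2}\left(e^{i\phi_0} - e^{i\phi_1}\right), \\
h_{0,n}^{2} + h_{1,n}^{2}
  &\;\longrightarrow\; \sin^{2}\frac{\phi_0 - \phi_1}{2},
\end{aligned}
```

so the frequency of output events on channel 0 reproduces the wave-mechanical intensity without any wave equation being solved.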

5.3 Wheeler’s delayed choice experiment

In a recent experimental realization of Wheeler’s delayed-choice experiment by Jacques et al., [76] linearly polarized single photons are sent through a polarizing beam splitter (PBS) that, together with a second, movable, variable output beam splitter with adjustable reflectivity $R$, forms an interferometer (see Fig. 5). In the first realization [9] two 50/50 BSs were used.

Tilting the PBS of the variable output BS induces a time delay in one of the arms of the MZI, symbolically represented by the variable phase shift $\Phi$ in Fig. 5, and thus varies the phase shift between the two arms of the MZI. A voltage applied to an electro-optic modulator (EOM) controls the reflectivity $R$ of the variable beam splitter BS$_{\text{output}}$. If no voltage is applied to the EOM then $R = 0$. Otherwise $R > 0$ (see Eq. (2) in Ref. 76) and the EOM acts as a wave plate which rotates the polarization of the incoming photon by an angle depending on the applied voltage. The voltage applied to the EOM is controlled by a set of pseudo-random numbers generated by the random number generator RNG. The key point in this experiment is that the decision to apply a voltage to the EOM is made after the photon has passed BS$_{\text{input}}$.

Measured values of the interference visibility $V$ [88] and the path distinguishability $D$, [76] a parameter that quantifies the which-path information (WPI), were found to fulfill the complementarity relation $V^2 + D^2 \le 1$. [76] For $D = 1$ and $D = 0$, obtained for $R = 0$ and $R = 0.5$, respectively, full and no WPI was found, associated with particlelike and wavelike behavior, respectively. For intermediate reflectivities partial WPI was obtained while keeping interference with limited visibility. [76]

Although the detection events (detector “clicks”) are the only experimental facts and, logically speaking, one cannot say anything about what happens with the photons traveling through the setup, Jacques et al. [9, 76] gave the following pictorial description: Linearly polarized single photons are sent through a 50/50 PBS (BS$_{\text{input}}$), spatially separating photons with S polarization (path 0) and P polarization (path 1) with equal frequencies. After the photon has passed BS$_{\text{input}}$, but before it enters the variable beam splitter BS$_{\text{output}}$, the decision to apply a voltage to the EOM is made. The PBS of BS$_{\text{output}}$ merges the paths of the orthogonally polarized photons travelling paths 0 and 1 of the MZI, but afterwards the photons can still be unambiguously identified by their polarizations. If no voltage is applied to the EOM then $R = 0$ and the EOM does nothing to the photons. Because the polarization eigenstates of the Wollaston prism correspond to the P and S polarization of the photons travelling paths 0 and 1 of the MZI, each detection event registered by one of the two detectors $D_0$ or $D_1$ is associated with a specific path (path 0 or 1, respectively). Both detectors register an equal amount of detection events, independent of the phase shift $\Phi$ in the MZI. This experimental setting clearly gives full WPI about the photon within the interferometer (particlelike behavior), characterized by $D = 1$. In this case no interference effects are observed and thus $V = 0$. When a voltage is applied to the EOM, then $R > 0$ and the EOM rotates the polarization of the incoming photon by an angle depending on the applied voltage. The Wollaston prism partially recombines the polarizations of the photons that have travelled along the different optical paths with phase difference $\Phi$, and interference appears ($V > 0$), a result expected for a wave. The WPI is partially washed out, and is totally erased when $R = 0.5$. Hence, the decision to apply a voltage to the EOM after the photon left BS$_{\text{input}}$ but before it passes BS$_{\text{output}}$ influences the behavior of the photon in the past and changes the representation of the photon from a particle to a wave. [9]

Fig. 5.: Schematic diagram of the experimental setup of Wheeler’s delayed-choice experiment with single photons. [9, 76] Single-photon source; PBS: polarizing beam splitter; HWP: half-wave plate; EOM: electro-optic modulator; RNG: random number generator; WP: Wollaston prism (= PBS); $D_0$ and $D_1$: detectors; P, S: polarization state of the photons; $\Phi$: phase shift between paths 0 and 1. The diagram is that of a Mach-Zehnder interferometer composed of a 50/50 input beam splitter (BS$_{\text{input}}$) and a variable output beam splitter (BS$_{\text{output}}$) with adjustable reflectivity $R$.
5.3.1 Event-based model

We construct a model for the messengers representing the linearly polarized photons and for the processing units representing the optical components in the experimental setup (see Fig. 5), thereby fulfilling the requirements that the processing units for identical optical components should be reusable within the same and within different experiments and that the network of processing units is a one-to-one image of the experimental setup. In contrast to the experiments we have considered so far, in this experiment it is necessary to include the polarization in the model for the messengers representing the photons. These more general messengers can also be used in a simulation of the experiments discussed previously; in the event-based simulation of those experiments the polarization component of the message is simply not used in the DLMs modeling the optical components of their experimental setup. In what follows we describe the elements of the model in more detail.

  • Source and particles: The polarization can be included in the model for the messengers representing the photons by adding to the message a second harmonic oscillator which also vibrates with frequency $f$. There are many different but equivalent ways to define the message. As in Maxwell’s and quantum theory, it is convenient (though not essential) to work with complex-valued vectors, that is, with messages represented by two-dimensional unit vectors

$\mathbf{y}_k = \begin{pmatrix} e^{i\psi_k^{(0)}} \cos\xi_k \\ e^{i\psi_k^{(1)}} \sin\xi_k \end{pmatrix},$

    where $0 \le \psi_k^{(i)} < 2\pi$ for $i = 0, 1$ and $0 \le \xi_k \le \pi/2$. The angle $\xi_k$ determines the relative magnitude of the two components, and $\psi_k = \psi_k^{(0)} - \psi_k^{(1)}$ denotes the phase difference between the two components. Both $\xi_k$ and $\psi_k$ determine the polarization of the photon. Hence, the photon can be considered to have a polarization vector. The third degree of freedom in the message is used to account for the time of flight of the photon. Within the present model, it is thus postulated that the state of the photon is fully determined by the angles $\xi_k$, $\psi_k^{(0)}$ and $\psi_k^{(1)}$ and by rules (to be specified) by which these angles change as the photon travels through the network.

    A messenger with message $\mathbf{y}_k$ that travels, with velocity $v = c/n$ (where $c$ denotes the velocity of light and $n$ is the index of refraction of the material), along its direction of propagation during a time interval $t$, changes its message according to $\psi_k^{(i)} \rightarrow \psi_k^{(i)} + \omega t$ for $i = 0, 1$, where $\omega = 2\pi f$. This suggests that we may view the two-component vectors $\mathbf{y}_k$ as the coordinates of two local oscillators, carried along by the messengers, and that the messenger encodes its time of flight in these two oscillators.

    It is evident that the representation used here maps one-to-one to the plane-wave description of a classical electromagnetic field, [42] except that we assign these properties to each individual photon, not to a wave. As there is no communication/interaction between the messengers, there can be no wave equation (partial differential equation) that enforces a relation between the messages carried by different messengers.

    When the source creates a messenger, its message needs to be initialized. This means that the three angles $\xi_k$, $\psi_k^{(0)}$ and $\psi_k^{(1)}$ need to be specified. The specification depends on the type of light source that has to be simulated. For a coherent light source, the three angles are chosen at random but are the same for all the messengers being created. Hence, three random numbers suffice to specify the messages of all messengers.

    In this section we will demonstrate explicitly that in the event-based model (in general, not only for this experiment) photons always have full WPI even if interference is observed. To this end we give the messengers one extra label, the path label, which takes the value 0 or 1. The information contained in this label is not accessible in the experiment. [76] We only use it to track the photons in the network of processing units. The path label is set in the input BS and remains unchanged until detection. Therefore we do not consider this label in the description of the processing units but take it into account when we detect the photons.
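The time-of-flight rule for the messengers described above can be sketched in a few lines of Python. The function name, the frequency value, and the use of NumPy are our choices for illustration:

```python
import numpy as np

C = 299_792_458.0  # speed of light in vacuum (m/s)

def propagate(message, distance, refractive_index, frequency):
    """Advance a messenger's internal oscillators over the time of flight
    t = distance * n / c: each message component is multiplied by
    exp(i * 2*pi * frequency * t), i.e. both phases psi^(0) and psi^(1)
    increase by omega * t with omega = 2*pi*frequency."""
    t = distance * refractive_index / C
    return message * np.exp(2j * np.pi * frequency * t)

# Propagating over the distance light covers in exactly one optical period
# returns the message to its initial value (phase advance of 2*pi):
f = 4.74e14                              # illustrative optical frequency (Hz)
msg = np.array([np.cos(0.3), np.sin(0.3)]) * np.exp(1j * 0.7)
out = propagate(msg, C / f, 1.0, f)
```

Because both components acquire the same phase, the polarization (the relative magnitude and phase of the two components) is unaffected by propagation through a homogeneous medium; only the time-of-flight degree of freedom changes.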

  • Polarizing beam splitter (PBS): A PBS is used to redirect photons depending on their polarization. For simplicity, we assume that the coordinate system used to define the incoming messages coincides with the coordinate system defined by two orthogonal directions of polarization of the PBS.

    In general, a PBS has two input and two output channels labeled by 0 and 1, just like an ordinary BS (see Sect. 5.2.1). Note that in case of Wheeler’s delayed-choice experiment, the first PBS has only one input channel in use and therefore the second PBS has only one output channel in use. In the event-based model, the PBS has a similar structure as the BS. Therefore, in what follows we only mention the main ingredients to construct the processing unit for the PBS. For more details we refer to Sect. 5.2.1.

    The PBS has two internal registers $\mathbf{Y}_{0,n}$ and $\mathbf{Y}_{1,n}$, each representing a two-component complex-valued message, and an internal vector $\mathbf{x}_n = (x_{0,n}, x_{1,n})$, where $0 \le x_{i,n} \le 1$ for $i = 0, 1$, $x_{0,n} + x_{1,n} = 1$, and $n$ denotes the message number. Before the simulation starts, uniform pseudo-random numbers are used to set $\mathbf{x}_0$, $\mathbf{Y}_{0,0}$ and $\mathbf{Y}_{1,0}$.

    When the $n$th messenger carrying the message $\mathbf{y}_n$ arrives at entrance port 0 or 1 of the PBS, the PBS first copies the message into the corresponding register and updates its internal vector according to

$\mathbf{x}_n = \gamma\,\mathbf{x}_{n-1} + (1 - \gamma)\,\mathbf{v}_n,$

    where $0 < \gamma < 1$ and $\mathbf{v}_n = (1, 0)$ ($\mathbf{v}_n = (0, 1)$) represents the arrival of the $n$th messenger on channel 0 (1). Note that the DLM has storage for exactly ten real-valued numbers.

    Next the PBS uses the information stored in $\mathbf{Y}_{0,n}$, $\mathbf{Y}_{1,n}$ and $\mathbf{x}_n$ to calculate four complex numbers

$\begin{pmatrix} h_{0,n} \\ h_{1,n} \\ h_{2,n} \\ h_{3,n} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & i \\ 0 & 0 & 1 & 0 \\ 0 & i & 0 & 0 \end{pmatrix} \begin{pmatrix} \sqrt{x_{0,n}}\, Y_{0,n}^{(0)} \\ \sqrt{x_{0,n}}\, Y_{0,n}^{(1)} \\ \sqrt{x_{1,n}}\, Y_{1,n}^{(0)} \\ \sqrt{x_{1,n}}\, Y_{1,n}^{(1)} \end{pmatrix},$

    expressing that the PBS transmits the first polarization component of a message and reflects, with a phase shift, the second,
    and generates a uniform random number $0 \le r_n < 1$. If $|h_{2,n}|^2 + |h_{3,n}|^2 > r_n$, the PBS sends the message

$\mathbf{w}_n = \left( h_{2,n}, h_{3,n} \right) / \sqrt{|h_{2,n}|^2 + |h_{3,n}|^2}$

    through output channel 1. Otherwise it sends the message

$\mathbf{w}_n = \left( h_{0,n}, h_{1,n} \right) / \sqrt{|h_{0,n}|^2 + |h_{1,n}|^2}$

    through output channel 0.
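In the single-input limit relevant for the first PBS in this setup, the stationary behavior of this processing unit reduces to Malus-law routing, which can be sketched very simply. The function name and the port/polarization conventions below are our assumptions for illustration; the adaptive registers and internal vector of the full DLM are deliberately omitted:

```python
import numpy as np

rng = np.random.default_rng(11)

def pbs_single_input(xi, psi0, psi1):
    """Simplified single-input limit of the PBS model: a messenger with
    polarization message (cos(xi) e^{i psi0}, sin(xi) e^{i psi1}) entering
    port 0 leaves through output 0 (transmitted component) with probability
    cos(xi)**2 and through output 1 (reflected component) otherwise.
    This is only the stationary behavior the full adaptive machine
    converges to for a fixed input polarization."""
    if rng.random() < np.cos(xi) ** 2:
        return 0, psi0   # transmitted: carries the first component's phase
    return 1, psi1       # reflected: carries the second component's phase

# Linear polarization at 45 degrees splits 50/50 on average, as required
# for the input PBS in the delayed-choice setup:
outputs = [pbs_single_input(np.pi / 4, 0.0, 0.0)[0] for _ in range(20000)]
frac0 = outputs.count(0) / len(outputs)
```

Attaching a path label to each outgoing messenger at this point is what makes the which-path bookkeeping of the event-based delayed-choice simulation possible.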

  • Half-wave plate (HWP): A HWP not only changes the polarization of the light but also its phase. In optics, a HWP is often used as a retarder. In the event-based model, the retardation of the wave corresponds to a change in the time of flight (and thus the phase) of the messenger. In contrast to the BS and PBS, a HWP may be simulated without a DLM. The device has only one input and one output port (see Fig. 5). A HWP transforms the $n$th input message into an output message