Interfering trajectories in experimental quantum-enhanced stochastic simulation
Simulations of stochastic processes play an important role in the quantitative sciences, enabling the characterisation of complex systems. Recent work has established a quantum advantage in stochastic simulation, leading to quantum devices that execute a simulation using less memory than possible by classical means. To realise this advantage it is essential that the memory register remains coherent, and coherently interacts with the processor, allowing the simulator to operate over many time steps. Here we report a multi-time-step experimental simulation of a stochastic process using less memory than the classical limit. A key feature of the photonic quantum information processor is that it creates a quantum superposition of all possible future trajectories that the system can evolve into. This superposition allows us to introduce, and demonstrate, the idea of comparing statistical futures of two classical processes via quantum interference. We demonstrate interference of two 16-dimensional quantum states, representing statistical futures of our process, with a visibility of .
Many of the most interesting phenomena are complex—whether in urban design, meteorology or financial prediction, the systems involved feature a vast array of interacting components. Predicting and simulating such systems often requires the use of a prohibitive amount of data, evincing a pressing need for more efficient tools in algorithmic modelling and simulation.
Quantum technologies have shown the potential to dramatically reduce the amount of working memory required to simulate stochastic processes (Gu2012; Mahoney2016). By tracking information about past observations directly within quantum states, a quantum device can replicate a system's conditional future behaviour using less memory than the provably optimal classical limit. The key to achieving a quantum memory advantage is maintaining coherence of the quantum memory during the simulation, enabling relevant past information to be encoded into non-orthogonal quantum states. This memory reduction constitutes a new application of quantum processing, complementary to computational speedup (Lloyd1996), cryptography (Bennett1984), sensing (Giovannetti2004; Slussarenko2017) and phase estimation (Xiang2011).
This advantage was first illustrated for simulating a particular stochastic process, where past information was encoded within non-orthogonal polarisation states of a single photon (Palsson2016). The scheme, however, maintained quantum coherence over only a single simulation cycle. This limitation meant that the resulting simulator exhibited a memory advantage only when simulating a single time step. To simulate multiple time steps, such a device required relevant information to be transferred to classical memory between time steps, negating any quantum advantage.
Here we develop a quantum simulator that overcomes this limitation, such that it exhibits a memory advantage when simulating multiple time steps. As an important additional benefit, our device enables us to create a quantum superposition over all potential future outcomes of a process. We illustrate that such an output lets us estimate the distinguishability of the statistical futures of two stochastic systems via quantum interference. Our approach makes use of temporal (time-bin) encoding in an optical system to experimentally realise a quantum simulation over three consecutive steps, generating a coherent superposition over the process's potential future trajectories. We then implement two such quantum simulations in parallel, simultaneously generating superpositions over the trajectories of two independent systems. Experimentally, this corresponds to using our quantum simulators to produce and control high-dimensional quantum states. These are interfered, allowing estimation of how well the corresponding statistical futures coincide.
Framework and tools
In this work, we study a simple stochastic process known as the perturbed coin (Gu2012). It consists of a binary random variable that represents the state of a coin (0 corresponds to heads, and 1 to tails) inside a box. At each time step, the box is perturbed, causing the coin to flip with some probability. Afterwards, the state of the coin is emitted. In general the coin may be biased, so the probability $p$ of remaining in heads can differ from the probability $q$ of remaining in tails, as presented in Fig. 1. Repetition of this procedure generates a string of 0s and 1s, whose statistics define the perturbed coin process.
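As a concrete illustration (not part of the experiment), the classical perturbed coin can be sampled in a few lines; `perturbed_coin` and its arguments are our own illustrative names.

```python
import random

def perturbed_coin(p, q, steps, state=0, rng=random):
    """Sample one trajectory of the perturbed coin.

    state 0 = heads, 1 = tails; p (q) is the probability of
    remaining in heads (tails) at each perturbation.
    """
    outcomes = []
    for _ in range(steps):
        stay = p if state == 0 else q
        if rng.random() >= stay:   # the coin flips
            state = 1 - state
        outcomes.append(state)     # the coin's face is emitted
    return outcomes

# e.g. a fair perturbed coin, 10 steps
print(perturbed_coin(0.5, 0.5, 10))
```

Setting `p = q = 1` freezes the coin, while `p = q = 0` makes it alternate deterministically; intermediate values interpolate between these extremes.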
Any device that seeks to replicate correct future statistics must retain relevant past information in a memory. This involves a prescription for configuring its memory in an appropriate state for each possible observed past, such that systematic actions on this memory recover a sequence of future outputs faithful to the conditional future statistics. The amount of past information stored in memory is quantified by the Shannon entropy $H = -\sum_i p_i \log p_i$, where $p_i$ is the probability that the memory is in state $i$ and the logarithm is in base 2. The minimal possible memory required, $C_\mu$, is known as the statistical complexity, and is an important measure of structure in complexity science (Grassberger1986; Crutchfield1989; Shalizi2001; Crutchfield2009). For the perturbed coin (Fig. 1), the minimal information required about the past is the current state of the coin. This induces a statistical complexity of $C_\mu = -P_0 \log P_0 - (1 - P_0) \log (1 - P_0)$, where $P_0$ represents the probability that the last outcome was heads (see Eq. (1) in Methods).
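For the perturbed coin, the statistical complexity reduces to the entropy of the two-state stationary distribution; a minimal sketch (the function names are ours, and the stationary probability follows from the detailed-balance condition $P_0(1-p) = P_1(1-q)$):

```python
from math import log2

def stationary_heads(p, q):
    """Stationary probability of heads for the two-state chain.

    Balance condition P0 * (1 - p) = (1 - P0) * (1 - q)
    gives P0 = (1 - q) / (2 - p - q)  (valid for p, q < 1).
    """
    return (1 - q) / (2 - p - q)

def classical_complexity(p, q):
    """Shannon entropy (bits) of the stationary causal-state distribution."""
    P0 = stationary_heads(p, q)
    return -sum(x * log2(x) for x in (P0, 1 - P0) if x > 0)
```

An unbiased coin (`p == q`) gives `classical_complexity` of exactly one bit, since both causal states are then equally likely.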
A quantum simulator can further reduce memory requirements by encoding the two possible outcomes of the process into mutually non-orthogonal states. Future statistics are then generated by a series of unitary interactions, ensuring that this entropic advantage is maintained at all times during the simulation (Binder2018). For the case of the perturbed coin, the quantum simulator can be implemented as shown in Fig. 2. The state of the machine encodes relevant information about past outcomes—here, the state of the coin after the last step. It is represented as one of two states, $|\sigma_0\rangle$ or $|\sigma_1\rangle$, of a quantum system that sequentially interacts with ancillary systems. Each interaction corresponds to a time step of the stochastic process. All the ancillary systems start in a fixed state, and therefore carry no information. The sequence of interactions produces an entangled state. Measuring the ancillary systems after the desired number of steps provides a sample of the statistics.
Motivated by recent realisations of quantum walks in linear optical setups with time-bin encoding (Schreiber2010; Schreiber2012; Jeong2013; Boutari2016), we implement the memory system and multiple ancillas—here, corresponding to three time steps—by encoding on a single photon. The ancillas, which can be read out to obtain the classical outcomes of the process, are encoded in the arrival time of the photon, and the memory state of the simulator is encoded in its polarisation. Thus, for a simulation of $N$ time steps, a $2^N$-dimensional system corresponding to the $2^N$ possible photon arrival times replaces $N$ distinct ancillary photons. Instead of measuring the classical outcome at each time step, our quantum information processor keeps the photon and builds up a superposition in a high-dimensional Hilbert space; in our case $N = 3$, and the output of the simulator is 16-dimensional (8 arrival-time modes × 2 polarisation modes). The associated memory cost does not increase during this process, since all operations remain unitary—and thus conserve entropy. Of course, $N$ distinct ancilla qubits could be used instead, but encoding in multiple degrees of freedom provides a convenient, effective and high-fidelity approach for small- to medium-sized photonic systems.
Our experiment demonstrates that high-dimensional (here 16-dimensional) quantum states can be encoded and manipulated in photonic temporal and polarisation modes with high fidelity (Franson1989; Kwiat1993). This complements other related works involving hybrid optical states using spatial (path and optical orbital angular momentum) and polarisation modes (takeuchi2000experimental; ma2009experimental; nagali2010generation; Zhang2016). It also substantiates the oft-repeated claim that combining different photonic encodings (Kwiat1997; Barreiro2005) is a practical tool for various quantum information tasks, for example the remote preparation of entangled states (barreiro2010remote), complementarity (nogueira2010interference), Bell inequalities (valles2014generation; ma2009experimental; dada2011experimental), quantum key distribution implementations (takemoto2015quantum) and complete optical Bell-state analysers (walborn2003hyperentanglement; wei2007hyperentangled).
Our first task consists of performing the quantum simulation of the perturbed coin. In particular, we seek to verify that the simulator samples from the correct statistical distributions, and to demonstrate the memory advantage due to quantum encoding. The experimental setup is shown in Fig. 3. We generate degenerate pairs of single photons through spontaneous parametric down-conversion. One of the photons (depicted as the red, lower beam in the figure) is prepared in the state $|\sigma_0\rangle$ or $|\sigma_1\rangle$, depending on the past of the process. It then passes through three sequential blocks, which represent the three time steps being simulated. In each block, the short and long paths correspond to outcomes 0 and 1, respectively (details in Methods). For this task, only the red-beam photon passes through the apparatus; the other photon (orange beam in the figure) serves purely as a herald and is measured immediately after generation, without traversing the apparatus shown in the figure. We then estimate the polarisation state of the red-beam photon by tomographic reconstruction at the end of the third block, and also measure its arrival time (using the orange-beam photon as a reference). In this way, we obtain the probability distribution of the stochastic process as simulated by our quantum information processor, together with the final memory state of our simulator, which is needed for further simulation steps.
The experimentally determined outcome probabilities are shown in Fig. 4, and are close to the expected theoretical values. The main discrepancies with theory are due to small differences between nominally identical polarisation elements, and the non-identical single-mode-fibre coupling efficiency of photons taking different paths through the simulator. To quantify the agreement, we calculate the (classical) fidelity (Book-Nielsen2010) for each set of parameters and initial conditions simulated in our experiment. All the values obtained for this fidelity are larger than . Typical uncertainties are around .
To compare the use of quantum and classical resources, we use $C_q$, the quantum counterpart of the classical statistical complexity (the entropy of the memory register of the quantum simulator), which quantifies the memory requirement of the quantum simulator. We thus calculate $C_q$ for this process (details in Methods). The experimental results are shown in Fig. 5a. The corresponding classical statistical complexity is also shown for comparison, demonstrating that quantum resources dramatically reduce the amount of memory needed to simulate a multi-step stochastic process.
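As a numerical illustration of this comparison (not the experimental analysis), the quantum memory entropy can be evaluated from the stationary mixture of causal states. The sketch below assumes the Gu2012 encoding $|\sigma_0\rangle = \sqrt{p}\,|0\rangle + \sqrt{1-p}\,|1\rangle$, $|\sigma_1\rangle = \sqrt{1-q}\,|0\rangle + \sqrt{q}\,|1\rangle$, and diagonalises the 2×2 memory state by hand:

```python
from math import log2, sqrt

def classical_complexity(p, q):
    """Entropy of the stationary causal-state distribution (bits)."""
    P0 = (1 - q) / (2 - p - q)   # stationary probability of heads
    return -sum(x * log2(x) for x in (P0, 1 - P0) if x > 0)

def quantum_complexity(p, q):
    """Von Neumann entropy of rho = P0|s0><s0| + P1|s1><s1|, with
    |s0> = sqrt(p)|0> + sqrt(1-p)|1>, |s1> = sqrt(1-q)|0> + sqrt(q)|1>
    (an assumed encoding for this sketch)."""
    P0 = (1 - q) / (2 - p - q)
    P1 = 1 - P0
    # real symmetric 2x2 density-matrix entries
    r00 = P0 * p + P1 * (1 - q)
    r11 = P0 * (1 - p) + P1 * q
    r01 = P0 * sqrt(p * (1 - p)) + P1 * sqrt(q * (1 - q))
    # closed-form eigenvalues of a 2x2 matrix
    tr, det = r00 + r11, r00 * r11 - r01 * r01
    disc = sqrt(max(tr * tr / 4 - det, 0.0))
    eigs = (tr / 2 + disc, tr / 2 - disc)
    return -sum(l * log2(l) for l in eigs if l > 1e-12)
```

For an unbiased coin the two causal states coincide, so the quantum memory entropy is zero while the classical one is a full bit, the extreme case of the advantage.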
To guarantee that the quantum memory advantage is maintained at all stages of the simulation process, we require the internal dynamics to be close to (ideally, completely) unitary. We can verify this by demonstrating the coherence of the output state that includes all the ancillary qubits and the memory state of the simulator. We observe this coherence via two-photon quantum interference. We use the complete setup of Fig. 3, where the photon depicted by the orange path is no longer measured after generation (as done previously), but also goes through the apparatus. Both photons pass independently through the three sequential blocks, with each experiencing nominally the same optical elements (although different settings are possible). If the coherence between the different time bins and polarisations exploited in our simulation is maintained, we expect complete interference, meaning that the visibility should ideally be unity. The result in Fig. 5b shows a visibility of for the case where the theoretical output states of the apparatus are uniform superpositions of all time bins and polarisations (the scenario where the highest discrepancy from the ideal visibility would be expected, as it is most susceptible to imperfections). The high value obtained here indicates that our simulator implements an (almost) unitary operator, and that the entropy of our system does not significantly increase throughout the simulation process. This requirement is essential for preserving the quantum memory advantage. Moreover, beyond the specific application of this apparatus to simulating classical stochastic processes, this result is also significant in a more general context, since it demonstrates the interference of two discrete high-dimensional states with extremely high visibility (Zhang2016).
Modifying this experimental setup allows us to compare two different processes, $A$ and $B$. Clearly, one way to perform such a statistical comparison is to consider each process individually, and sample its outcomes to reconstruct the corresponding distribution; the two reconstructed distributions can then be compared. However, in our quantum simulation, all the information about the future statistics is already encoded in the state that exists in our apparatus. Thus, we do not need to collapse the superposition of possible outcomes by sampling; instead, we can exploit this superposition for the task of comparing the futures of processes. In particular, by simultaneously running quantum simulations of processes $A$ and $B$ in parallel and interfering the resulting output states, we can estimate the overlap of their future statistics.
In our experiment, we realise different processes by applying different operations to the two photons (red beam and orange beam) in the three blocks of the setup in Fig. 3. To set the parameters of each process separately, we use half-wave plates with holes, which allow us to change the polarisation of one beam without affecting the other. We fix one of the processes and change the other gradually. As the parameters defining the processes become increasingly similar, the two output probability distributions overlap more. This is reflected in the experiment by a higher visibility, showing how two sets of future statistics can be compared via interference visibility. Results are shown in Fig. 5c, where the experimental values are close to theoretical predictions. Slight discrepancies remain because of experimental imperfections, such as small spatial and polarisation mode mismatches. These techniques could be adapted to attain a quantum advantage in estimating the distance between two normalised vectors (Kumar2017), which plays an essential role in machine learning tasks such as image recognition (Book-Shalev-Shwartz2014).
Our multi-step photonic implementation of a stochastic simulation has verified the memory advantage available with quantum resources. We have demonstrated that it is possible to maintain this advantage at all stages of the simulation by preserving quantum coherence, in contrast to previous experiments (Palsson2016; Jouneghani2017; Ghafari2018). Further, we have shown that superpositions of process outcomes can be interfered. These techniques have the potential to reduce memory requirements in simulations of stochastic processes and to provide tools for advances in quantum machine learning and communication complexity.
The time-bin-encoding techniques in our experiment can be extended to other small- and medium-scale simulations by expanding the number of time bins. For example, time-bin modes have been realised in the context of communication complexity (Xu2015). However, the number of bins does not scale efficiently with the number of qubits, and thus very-large-scale simulations are not possible with this encoding. This is not a fundamental problem, as the concepts that we demonstrate can be equivalently implemented in other photonic encodings or in other qubit systems. Our current demonstration also uses non-deterministic (post-selected) mode recombination at certain beam splitters within the circuit. This implementation is convenient, but not necessary, and thus not a fundamental limitation: a deterministic multi-step simulator could be realised with a step-dependent delay mechanism, for instance a controlled fast switch connected to fibre paths of different lengths.
The comparison of future statistics is directly related to other protocols, such as quantum fingerprinting and state comparison in communication complexity (Xu2015; Kumar2017). Fingerprinting involves estimating the distance between two vectors, where the resource to be minimised is the amount of communication. For the comparison of two vectors, quantum mechanics can reduce the amount of communication required below classical limits. In the quantum protocol, Alice and Bob perform a SWAP test, a quantum information primitive that compares two arbitrary states. Two-photon interference is known to be equivalent to a SWAP test (Garcia-Escartin2013). Our comparison of futures can be cast as a similar problem: the task would be for Alice and Bob, who each hold the future statistics of potentially different processes, to compare the two statistical futures (Kumar2017). In principle, for very high-dimensional Hilbert spaces, a comparison of statistical futures via two-photon interference can achieve a quantum advantage in communication complexity. The comparison of two vectors is also an important component of many machine learning tasks, and thus a similar advantage could extend to more general settings such as speech recognition (Book-Shalev-Shwartz2014).
A discrete-time stochastic process is generally described by a joint probability distribution $P(\overleftarrow{X}, \overrightarrow{X})$, where $\overleftarrow{X}$ ($\overrightarrow{X}$) denotes the random variables that govern the statistics of past (future) observations. Each past (future) configuration of the random process is denoted by $\overleftarrow{x}$ ($\overrightarrow{x}$). For an observed past configuration $\overleftarrow{x}$, the future statistics are dictated by the conditional probability $P(\overrightarrow{X} \mid \overleftarrow{X} = \overleftarrow{x})$, which we abbreviate as $P(\overrightarrow{X} \mid \overleftarrow{x})$.
By categorising all sets of past events with the same future statistics into equivalence classes (called causal states, which are encoded as memory states of the simulator), the optimal classical model (called the $\varepsilon$-machine (Crutchfield1989; Crutchfield1994)) only needs to store the class that $\overleftarrow{x}$ belongs to. That is, given only this class, the $\varepsilon$-machine is able to make a statistically accurate inference of the process's conditional future. By observing the outcome of the stochastic process over a long time, one can infer the probability of each causal state and the transition probabilities between them. For a stochastic process, the causal states and their transition probabilities suffice to realise the $\varepsilon$-machine model. The resulting $\varepsilon$-machine requires (Book-Nielsen2010)

$$C_\mu = -\sum_i \pi_i \log_2 \pi_i \qquad (1)$$

bits of information about the past, where $\pi_i$ is the probability that the past is in causal state $S_i$. No other predictive model can simulate the future while storing less information about the past. Thus $C_\mu$ has been termed the statistical complexity (Zambella1988; Crutchfield1989; Crutchfield2012), and is considered a fundamental measure of complexity that captures how resource-intensive it is to predict the future of a given process.
It has been theoretically proven that for many processes, including the one studied here, there exists a quantum $\varepsilon$-machine with entropy $C_q$ such that $C_q < C_\mu$ (Gu2012). Similar to its classical counterpart, this quantum model is defined by its causal states and the corresponding transition probabilities. On average, the entropy of such a quantum $\varepsilon$-machine is given by

$$C_q = -\mathrm{Tr}(\rho \log_2 \rho), \quad \text{with} \quad \rho = \sum_i \pi_i |\sigma_i\rangle\langle\sigma_i|, \qquad (2)$$

where $|\sigma_i\rangle$ is the quantum causal state corresponding to the classical causal state $S_i$.
Three-step simulation of a perturbed coin
For the perturbed coin process, the optimal quantum causal states can be written as (Gu2012)

$$|\sigma_0\rangle = \sqrt{p}\,|0\rangle + \sqrt{1-p}\,|1\rangle, \qquad (3)$$
$$|\sigma_1\rangle = \sqrt{1-q}\,|0\rangle + \sqrt{q}\,|1\rangle. \qquad (4)$$
To give an example of the output state of our simulator, let us consider a perturbed coin defined by its parameters $p$ and $q$, which we denote as process $A$. The output of the corresponding quantum $\varepsilon$-machine after three time steps is given by the superposition

$$|\Psi(\sigma_i)\rangle = \sum_{x_1, x_2, x_3} \sqrt{P(x_1, x_2, x_3 \mid \sigma_i)}\; |x_1\rangle |x_2\rangle |x_3\rangle |\sigma_{x_3}\rangle, \qquad (5)$$
where $\sigma_{x_3}$ denotes the causal state corresponding to the last outcome $x_3$, and $P(x_1, x_2, x_3 \mid \sigma_i)$ is the probability to obtain $x_1$, $x_2$ and $x_3$ as the outcomes of three time steps of the process when the input causal state is $|\sigma_i\rangle$. The value of $P(x_1, x_2, x_3 \mid \sigma_i)$ can be evaluated theoretically from the transition probabilities between causal states (Fig. 1). The variables $x_1$, $x_2$ and $x_3$ are the configurations of the random variables $X_1$, $X_2$ and $X_3$, respectively. To sample from the future statistics of the perturbed coin process, we perform a simultaneous measurement of all the ancillary qubits after the three time steps. By also characterising the polarisation state of the photon in each case, we can tomographically reconstruct the output state associated with each time bin, and thus experimentally determine the statistical complexity of the simulation. To calculate the quantum statistical complexity, $C_q$, for this process, we need to find the state $\rho$:
$$\rho = \sum_{x_1, x_2, x_3} P(x_1, x_2, x_3)\,\rho_{x_1 x_2 x_3}, \qquad (6)$$

where $P(x_1, x_2, x_3)$ is the measured probability of the arrival-time outcome $(x_1, x_2, x_3)$, $C_q = -\mathrm{Tr}(\rho \log_2 \rho)$, and $\rho_{x_1 x_2 x_3}$ is the tomographically reconstructed polarisation state at each arrival time, conditioned on the input memory state being encoded in $|\sigma_i\rangle$.
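The three-step probabilities entering the output superposition follow directly from the transition probabilities; a minimal sketch (illustrative names, exploiting the fact that for the perturbed coin each outcome equals the coin's new state):

```python
from itertools import product
from math import sqrt

def three_step_distribution(p, q, state):
    """P(x1, x2, x3 | initial causal state) for the perturbed coin.

    p (q) is the probability of remaining in heads (tails); at each
    step the emitted outcome equals the coin's new state.
    """
    dist = {}
    for xs in product((0, 1), repeat=3):
        prob, s = 1.0, state
        for x in xs:
            stay = p if s == 0 else q
            prob *= stay if x == s else 1 - stay
            s = x                     # Markov order one
        dist[xs] = prob
    return dist

def output_amplitudes(p, q, state):
    """Amplitudes sqrt(P(x1, x2, x3)) carried by the 8 time bins."""
    return {xs: sqrt(P)
            for xs, P in three_step_distribution(p, q, state).items()}
```

The squared amplitudes sum to one, reflecting the normalisation of the three-step output superposition.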
Verifying the unitarity of the processor via two-photon quantum interference
To verify that the operation is unitary, which guarantees the conservation of entropy, we need to show that the superposition of different modes, both in time and polarisation, is coherent, and that this coherence is maintained throughout the whole process. Using a pure state as the input and viewing the entire simulation as a black box, the output of the unitary operations inside the box should ideally be a pure state. To demonstrate this experimentally, we consider the case of simultaneously implementing two setups that model two identical processes, $A = B$. It is possible to verify that two uncorrelated single photons are in identical pure states via two-photon interference—the Hong-Ou-Mandel (HOM) effect. The visibility of the interference, $V = (C_{\max} - C_{\min})/C_{\max}$, where $C_{\max}$ ($C_{\min}$) is the maximum (minimum) of the two-photon coincidence detections measured when varying the delay between the two beams, can only be unity if the photons are in pure and identical states.
Comparison of future statistics
The case of unequal processes also provides useful information. If $A \neq B$ and the output states are pure, the overlap of the different future output statistics can be deduced by interfering the output photons. For two photons in states $|\psi_A\rangle$ and $|\psi_B\rangle$ entering the two input ports of a beam splitter, the probability of finding a coincidence is $P_{\mathrm{coin}} = (1 - |\langle\psi_A|\psi_B\rangle|^2)/2$, where $|\langle\psi_A|\psi_B\rangle|^2$ is the overlap of the two states. Therefore, one can use the HOM interference visibility to estimate overlaps, by noting that $V = |\langle\psi_A|\psi_B\rangle|^2$. For our stochastic processes, the overlaps of the photonic output states are directly related to the overlaps of the future statistics produced by the two processes. For two different processes $A$ and $B$, let $\sigma^A$ be a causal state of $A$, and $\sigma^B$ be a causal state of $B$. Using Eq. (5), in general the overlap between the respective outputs of the quantum simulators for $A$ and $B$ will be

$$\langle\Psi_A(\sigma^A)|\Psi_B(\sigma^B)\rangle = \sum_{x_1, x_2, x_3} \sqrt{P_A(x_1, x_2, x_3 \mid \sigma^A)\,P_B(x_1, x_2, x_3 \mid \sigma^B)}\;\langle\sigma^A_{x_3}|\sigma^B_{x_3}\rangle. \qquad (7)$$
Since the perturbed coin process has Markov order one, and there is a one-to-one correspondence between the classical outcome and the causal state the machine transitions to, interfering the output states from a pair of quantum simulators for $A$ and $B$ as in Eq. (7) actually results in an overlap

$$\langle\Psi_A(\sigma^A)|\Psi_B(\sigma^B)\rangle = \sum_{x_1, \ldots, x_4} \sqrt{P_A(x_1, \ldots, x_4 \mid \sigma^A)\,P_B(x_1, \ldots, x_4 \mid \sigma^B)}. \qquad (8)$$
That is, in this special case we are able to compare the conditional futures up to one additional time step. Therefore, we can use our photonic quantum information processor for two tasks: (1) to simulate the future outcomes of the classical stochastic process over three time steps, and (2) to estimate the overlap of the future output statistics over four time steps.
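The chain of relations above (three-step state overlap weighted by the memory overlap, four-step Bhattacharyya coefficient, and HOM visibility) can be checked numerically. This sketch assumes the causal-state amplitudes $|\sigma_0\rangle = \sqrt{p}|0\rangle + \sqrt{1-p}|1\rangle$ and $|\sigma_1\rangle = \sqrt{1-q}|0\rangle + \sqrt{q}|1\rangle$, a shared input causal state, and illustrative function names:

```python
from itertools import product
from math import sqrt

def sigma(i, p, q):
    # quantum causal states (assumed encoding)
    return (sqrt(p), sqrt(1 - p)) if i == 0 else (sqrt(1 - q), sqrt(q))

def future_dist(p, q, s0, steps):
    """P(x1..xn | initial causal state s0) for the perturbed coin."""
    dist = {}
    for xs in product((0, 1), repeat=steps):
        prob, s = 1.0, s0
        for x in xs:
            stay = p if s == 0 else q
            prob *= stay if x == s else 1 - stay
            s = x
        dist[xs] = prob
    return dist

def simulator_overlap(pA, qA, pB, qB, s0):
    """<Psi_A|Psi_B>: sqrt(P_A P_B) over three steps, weighted by the
    overlap of the final memory (causal) states."""
    dA, dB = future_dist(pA, qA, s0, 3), future_dist(pB, qB, s0, 3)
    total = 0.0
    for xs in dA:
        a, b = sigma(xs[-1], pA, qA), sigma(xs[-1], pB, qB)
        total += sqrt(dA[xs] * dB[xs]) * (a[0] * b[0] + a[1] * b[1])
    return total

def bhattacharyya4(pA, qA, pB, qB, s0):
    """Bhattacharyya coefficient of the two four-step future distributions."""
    dA, dB = future_dist(pA, qA, s0, 4), future_dist(pB, qB, s0, 4)
    return sum(sqrt(dA[x] * dB[x]) for x in dA)

def hom_visibility(overlap_sq):
    """V = (C_max - C_min)/C_max for pure states at a balanced splitter."""
    c_max = 0.5                    # distinguishable photons (large delay)
    c_min = (1 - overlap_sq) / 2   # coincidence probability at zero delay
    return (c_max - c_min) / c_max
```

For identical processes the overlap is unity and the predicted dip visibility is 1; as the parameters of one process are detuned, the visibility tracks the squared overlap of the four-step futures.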
Details of the experimental design
The schematic in Fig. 3 shows how we implement the multi-step quantum-enhanced stochastic processor. Consider, for instance, the scenario where we want to sample the statistics. For one process (i.e. for one beam), a single photon is injected into the left-hand side of the circuit, from the source, with the state $|0\rangle$ ($|1\rangle$) encoded as horizontal (vertical) polarisation. The first wave plate creates the desired initial causal state of our perturbed coin, either $|\sigma_0\rangle$ or $|\sigma_1\rangle$. The purpose of the first block is to transform a photon with a causal state encoded in polarisation into an appropriately weighted superposition of the classical outcomes of the first step encoded in the arrival time (denoted here as the delay degree of freedom, del), with the corresponding next causal state encoded in the polarisation:

$$|\sigma_i\rangle|\mathrm{del}_0\rangle \;\to\; \sqrt{P(0 \mid \sigma_i)}\;|\sigma_0\rangle|\mathrm{del}_{\mathrm{short}}\rangle + \sqrt{P(1 \mid \sigma_i)}\;|\sigma_1\rangle|\mathrm{del}_{\mathrm{long}}\rangle. \qquad (9)$$
This is achieved by temporarily using the photon path as an auxiliary degree of freedom: a polarising beam splitter maps the polarisation degree of freedom onto the path, which is then copied onto the arrival time through the use of different path lengths (short and long). By using a wave plate in each of the two paths, a path-dependent (and therefore arrival-time-dependent) transformation of the polarisation into one of the two causal states is achieved: $|\sigma_0\rangle$ in the short path, and $|\sigma_1\rangle$ in the long path.
Next, the information in the path degree of freedom is erased, to avoid an exponential scaling of the number of paths (and of optical elements in the experiment) with the number of time steps. To this end, the paths are recombined at a beam splitter, and we subsequently post-select on the photon exiting the right output arm at the end of the first block (Fig. 3). This means that we lose half of our photons at the beam splitter, but in each post-selected run the evolution is unitary, because the post-selection ensures that no photon is detected in the other output arm. By repeating the described block at each time step, we obtain three blocks realising a three-step machine. Sequences of interferometers have also been used in other experiments to study different topics in quantum information, such as non-Markovian dynamics and sequential state discrimination (chiuri2012linear; nagali2012testing).
To be able to attribute a different arrival time to each sequence of classical outcomes, we require a unique path length for every possible combination of short and long paths within the three blocks. The delays are implemented as ns at the first step, ns at the second, and ns at the third step. The arrival times are discriminated by time-resolving single-photon detectors. The coincidence window for HOM interference is long enough to include the full output state, which is spread over a 14 ns time interval.
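The uniqueness requirement is easy to verify numerically. The delay values below are hypothetical binary-weighted choices (2, 4 and 8 ns, picked only for illustration and to match the quoted 14 ns spread), not the values implemented in the experiment:

```python
from itertools import product

# hypothetical binary-weighted extra delays (ns), one per block
delays = (2.0, 4.0, 8.0)

# arrival time = sum of the extra delays picked up on the 'long' paths
arrival_times = sorted(sum(d for d, took_long in zip(delays, path) if took_long)
                       for path in product((False, True), repeat=3))

# eight distinct time bins, one per three-outcome string
assert len(set(arrival_times)) == 8
print(arrival_times)
```

Any set of delays in which each is larger than the sum of the previous ones would equally guarantee one distinct time bin per outcome sequence.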
After the third step, the measurement stage sits at one output arm of the third BS, while the circuit continues at the other arm, which is exploited for the second task of our work. To run our simulation and estimate the memory efficiency of this scheme relative to the optimal classical one, we measure the final arrival times (encoding the three ancillary qubits of the original scheme) and reconstruct the final polarisation state of the photon. Both can be done simultaneously at the tomography stage, which records the arrival time of each photon alongside the polarisation measurement, allowing a full reconstruction of the polarisation state associated with each time bin.
The same apparatus can be exploited for the interference part of our experiment, the only difference being that two single photons are now injected into the setup. Both pass through the three blocks described above. When we want to verify the unitarity of our simulation, the elements in the blocks are set identically for both photons, so that $A = B$; they differ when we want to compare the future statistics of two different processes ($A \neq B$). After the output of the third block, the two photons interfere at a fibre BS and the number of coincidences is measured.
Details of the $p$ and $q$ parameters used in the experiment
The simulated process, for which the results are depicted in Fig. 5a, is a perturbed coin with parameters $p$ and $q$ ranging from to in increments of . Due to experimental imperfections, the actual implemented values of $p$ and $q$ slightly deviate from the nominal ones ( and ). In Fig. 5c, the turquoise and magenta colours both show the case of two processes. For the turquoise graph, the fixed process is a perturbed coin with input causal state , , and . The varying stochastic processes are the ones with input causal state , , and nominal (the parameter is used to change between different processes). For the magenta graph, the fixed stochastic process is a perturbed coin with input causal state , , and . The varying ones are the stochastic processes with input causal state , , and nominal .
- (1) Gu, M., Wiesner, K., Rieper, E. & Vedral, V. Quantum mechanics can reduce the complexity of classical models. Nat. Commun. 3, 762 (2012).
- (2) Mahoney, J. R., Aghamohammadi, C. & Crutchfield, J. P. Occam’s quantum strop: Synchronizing and compressing classical cryptic processes via a quantum channel. Sci. Rep. 6, 20495 (2016).
- (3) Lloyd, S. Universal quantum simulators. Science 273, 1073–1078 (1996).
- (4) Bennett, C. H. & Brassard, G. Quantum cryptography: Public key distribution and coin tossing. Proc. IEEE International Conference on Computers, Systems, and Signal Processing. 175 (1984).
- (5) Giovannetti, V., Lloyd, S. & Maccone, L. Quantum-enhanced measurements: Beating the standard quantum limit. Science 306, 1330–1336 (2004).
- (6) Slussarenko, S. et al. Unconditional violation of the shot-noise limit in photonic quantum metrology. Nat. Photonics 11, 700–703 (2017).
- (7) Xiang, G. Y., Higgins, B. L., Berry, D. W., Wiseman, H. M. & Pryde, G. J. Entanglement-enhanced measurement of a completely unknown optical phase. Nat. Photonics 5, 43–47 (2011).
- (8) Palsson, M. S., Gu, M., Ho, J., Wiseman, H. M. & Pryde, G. J. Experimentally modeling stochastic processes with less memory by the use of a quantum processor. Sci. Adv. 3, e1601302 (2017).
- (9) Grassberger, P. Toward a quantitative theory of self-generated complexity. Int. J. Theor. Phys. 25, 907–938 (1986).
- (10) Crutchfield, J. P. & Young, K. Inferring statistical complexity. Phys. Rev. Lett. 63, 105 (1989).
- (11) Shalizi, C. R. & Crutchfield, J. P. Computational mechanics: Pattern and prediction, structure and simplicity. J. Stat. Phys. 104, 817–879 (2001).
- (12) Crutchfield, J. P., Ellison, C. J. & Mahoney, J. R. Time’s barbed arrow: Irreversibility, crypticity, and stored information. Phys. Rev. Lett. 103, 094101 (2009).
- (13) Binder, F. C., Thompson, J. & Gu, M. Practical unitary simulator for non-Markovian complex processes. Phys. Rev. Lett. 120, 240502 (2018).
- (14) Schreiber, A. et al. Photons walking the line: A quantum walk with adjustable coin operations. Phys. Rev. Lett. 104, 050502 (2010).
- (15) Schreiber, A. et al. A 2D quantum walk simulation of two-particle dynamics. Science 336, 55–58 (2012).
- (16) Jeong, Y.-C., Di Franco, C., Lim, H.-T., Kim, M. S. & Kim, Y.-H. Experimental realization of a delayed-choice quantum walk. Nat. Commun. 4, 2471 (2013).
- (17) Boutari, J. et al. Large scale quantum walks by means of optical fiber cavities. J. Opt. 18, 094007 (2016).
- (18) Franson, J. D. Bell inequality for position and time. Phys. Rev. Lett. 62, 2205 (1989).
- (19) Kwiat, P. G., Steinberg, A. M. & Chiao, R. Y. High-visibility interference in a Bell-inequality experiment for energy and time. Phys. Rev. A 47, R2472 (1993).
- (20) Takeuchi, S. Experimental demonstration of a three-qubit quantum computation algorithm using a single photon and linear optics. Phys. Rev. A 62, 032301 (2000).
- (21) Ma, X.-s., Qarry, A., Kofler, J., Jennewein, T. & Zeilinger, A. Experimental violation of a Bell inequality with two different degrees of freedom of entangled particle pairs. Phys. Rev. A 79, 042101 (2009).
- (22) Nagali, E. & Sciarrino, F. Generation of hybrid polarization-orbital angular momentum entangled states. Opt. Express 18, 18243–18248 (2010).
- (23) Zhang, Y. et al. Engineering two-photon high-dimensional states through quantum interference. Sci. Adv. 2, e1501165 (2016).
- (24) Kwiat, P. G. Hyper-entangled states. J. Mod. Opt. 44, 2173–2184 (1997).
- (25) Barreiro, J. T., Langford, N. K., Peters, N. A. & Kwiat, P. G. Generation of hyperentangled photon pairs. Phys. Rev. Lett. 95, 260501 (2005).
- (26) Barreiro, J. T., Wei, T.-C. & Kwiat, P. G. Remote preparation of single-photon ‘hybrid’ entangled and vector-polarization states. Phys. Rev. Lett. 105, 030407 (2010).
- (27) Nogueira, W. et al. Interference and complementarity for two-photon hybrid entangled states. Phys. Rev. A 82, 042104 (2010).
- (28) Vallés, A. et al. Generation of tunable entanglement and violation of a Bell-like inequality between different degrees of freedom of a single photon. Phys. Rev. A 90, 052326 (2014).
- (29) Dada, A. C., Leach, J., Buller, G. S., Padgett, M. J. & Andersson, E. Experimental high-dimensional two-photon entanglement and violations of generalized Bell inequalities. Nat. Phys. 7, 677–680 (2011).
- (30) Takemoto, K. et al. Quantum key distribution over 120 km using ultrahigh purity single-photon source and superconducting single-photon detectors. Sci. Rep. 5, 14383 (2015).
- (31) Walborn, S., Pádua, S. & Monken, C. Hyperentanglement-assisted Bell-state analysis. Phys. Rev. A 68, 042313 (2003).
- (32) Wei, T.-C., Barreiro, J. T. & Kwiat, P. G. Hyperentangled Bell-state analysis. Phys. Rev. A 75, 060305 (2007).
- (33) Nielsen, M. A. & Chuang, I. L. Quantum computation and quantum information (Cambridge Univ. Press, New York, 2010).
- (34) Kumar, N., Diamanti, E. & Kerenidis, I. Efficient quantum communications with coherent state fingerprints over multiple channels. Phys. Rev. A 95, 032337 (2017).
- (35) Shalev-Shwartz, S. & Ben-David, S. Understanding machine learning: From theory to algorithms (Cambridge Univ. Press, New York, 2014).
- (36) Ghafari, F. et al. Observing the ambiguity of simplicity via quantum simulations of an Ising spin chain. Preprint at http://arxiv.org/abs/1711.03661 (2017).
- (37) Ghafari, F. et al. Single-shot quantum memory advantage in the simulation of stochastic processes. Preprint at http://arxiv.org/abs/1812.04251 (2018).
- (38) Xu, F. et al. Experimental quantum fingerprinting with weak coherent pulses. Nat. Commun. 6, 8735 (2015).
- (39) Garcia-Escartin, J. C. & Chamorro-Posada, P. SWAP test and Hong-Ou-Mandel effect are equivalent. Phys. Rev. A 87, 052330 (2013).
- (40) Crutchfield, J. P. The calculi of emergence: Computation, dynamics and induction. Physica D 75, 11–54 (1994).
- (41) Zambella, D. & Grassberger, P. Complexity of forecasting in a class of simple models. Complex Syst. 2, 269–303 (1988).
- (42) Crutchfield, J. P. Between order and chaos. Nat. Phys. 8, 17–24 (2012).
- (43) Chiuri, A., Greganti, C., Mazzola, L., Paternostro, M. & Mataloni, P. Linear optics simulation of quantum non-Markovian dynamics. Sci. Rep. 2, 968 (2012).
- (44) Nagali, E. et al. Testing sequential quantum measurements: How can maximal knowledge be extracted? Sci. Rep. 2, 443 (2012).
We thank Raj B. Patel for helpful contributions. This research was funded, in part, by the Australian Research Council (project no. DP160101911), the Lee Kuan Yew Endowment Fund (Postdoctoral Fellowship), Singapore Ministry of Education Tier 1 grant RG190/17, Singapore National Research Foundation Fellowship NRF-NRFF2016-02, and NRF-ANR grant NRF2017-NRF-ANR004 VanQuTe, and the FQXi Large Grant: The role of quantum effects in simplifying adaptive agents. F.G. acknowledges support by the Australian Government Research Training Program (RTP) scholarship. We acknowledge the traditional owners of the land on which this work was undertaken at Griffith University, the Yuggera people.
F.G., N.T. and C.D.F. designed the experimental setup; F.G. and N.T. performed the experiment and analysed the data. C.D.F., M.G. and J.T. conducted the theory of the project, as well as contributing to the data analysis. G.J.P. played a significant role in the project conceptualisation, provided experimental assistance, and oversaw all aspects of the project. All authors contributed to writing the manuscript.