Observation of Majorization Principle for quantum algorithms via 3-D integrated photonic circuits

Abstract

The Majorization Principle is a fundamental statement governing the dynamics of information processing in optimal and efficient quantum algorithms. While quantum computation can be modeled as a reversible process, due to the unitary evolution undergone by the system, these quantum algorithms are conjectured to obey a quantum arrow of time dictated by the Majorization Principle: the probability distribution associated with the outcomes becomes progressively more ordered step by step, until the result of the computation is reached. Here we report on the experimental observation of the effects of the Majorization Principle for two quantum algorithms, namely the quantum fast Fourier transform and a recently introduced validation protocol for the certification of genuine many-boson interference. The demonstration has been performed by employing integrated 3-D photonic circuits fabricated via the femtosecond laser writing technique, which makes it possible to monitor unambiguously the effects of majorization along the execution of the algorithms. The measured observables provide a strong indication that the Majorization Principle holds for this wide class of quantum algorithms, thus paving the way for a general tool to design new optimal algorithms with a quantum speedup.

Introduction — Quantum computation holds the promise to greatly improve the capabilities of computational platforms relying on the laws of classical physics (1). This potential arises from the combination of an exponential storage capability and the dynamical parallel processing of unitary time evolutions. However, the unprecedented massive computational resource offered by parallel processing alone is doomed to failure, due to the non-deterministic nature of any measurement process. Thus, quantum algorithms have been properly tailored to exploit the power hidden in quantum resources while facing this limitation (2). While quantum correlations are considered to be the fundamental physical resource responsible for the higher efficiency of quantum algorithms, we still do not know how to manage them to effectively produce new quantum protocols (3). This situation is in sharp contrast with the theory of classical algorithms, where there exist well-known strategies to devise new algorithms starting from those already available (5). However, while a complete picture of the principles governing the design of new quantum algorithms is still lacking, we can nonetheless monitor the evolution to check whether an alleged new quantum algorithm is truly efficient. Such a criterion may arguably be provided by the Majorization Principle (MP) (6); (7); (8). Indeed, all time evolutions studied so far ultimately belong to one of two categories. Classical evolutions can be described by the Principle of Least Action (9): trajectories must obey local constraints so that the Action remains stationary at each point of the geodesic. By contrast, a global description of quantum evolutions requires summing over all classical trajectories weighted by the exponential of the phase-Action (10). MP-constrained evolutions represent a synthesis of these two classes, lying between classical observables and purely quantum processing.
Specifically, the MP is believed to provide a necessary condition that must hold for an algorithm to be optimal and display a quantum speedup.

Here we report on the experimental observation of the Majorization Principle acting along the execution of both a Quantum Fourier Transform (QFT) routine and a recently introduced quantum validation protocol aimed at certifying genuine many-boson interference (11). The importance of these two operations in the context of quantum algorithms makes them a perfect test-bed for the experimental demonstration of the occurrence of the MP. The observation has been carried out by implementing the protocols on 3-D integrated photonic circuits realized via the femtosecond laser writing technique on alumino-borosilicate substrates (12); (13); (14); (15). This fabrication procedure presents the unique advantage of permitting interferometric architectures with a three-dimensional topology. This feature made it possible to decompose the action of the two protocols into discrete steps, through which the MP could be observed.

Figure 1: Majorization Principle in quantum Fourier transform. (a) Conceptual scheme for the experimental observation of the Majorization Principle in (b) an 8-dimensional quantum Fourier transform: by implementing the QFT with its fast architecture, it is possible to decompose the evolution in three steps, corresponding to the three layers of beamsplitters (purple) and phase shifters (blue) between which a step-by-step majorization is observed.

The Majorization Principle concerns the probability distributions over all possible outcomes of an algorithm, updated while advancing through each step. Given two probability distributions $\vec{p}$ and $\vec{q}$, let $\vec{p}^{\,\downarrow}$ and $\vec{q}^{\,\downarrow}$ be the same vectors with their components sorted in decreasing order. We say that $\vec{p}$ majorizes $\vec{q}$ ($\vec{p} \succ \vec{q}$) if and only if

$$\sum_{i=1}^{k} p_i^{\downarrow} \;\geq\; \sum_{i=1}^{k} q_i^{\downarrow} \qquad \forall\, k = 1, \dots, N. \tag{1}$$

The concept of majorization can be extended in a natural way to quantum algorithms. Let $|\psi^{(s)}\rangle$ represent the state of the register of a quantum computer at step $s$ of the algorithm. We can associate with $|\psi^{(s)}\rangle$ a vector of probabilities $\vec{p}^{(s)}$ by writing the register in the computational basis $\{|i\rangle\}$, in such a way that $p_i^{(s)} = |\langle i|\psi^{(s)}\rangle|^2$. Consequently, a quantum algorithm is said to undergo a direct (reversed) majorization if and only if $\vec{p}^{(s+1)} \succ \vec{p}^{(s)}$ ($\vec{p}^{(s)} \succ \vec{p}^{(s+1)}$) for all steps $s$ (7). An intuitive reason for the physical connection between quantum processing and direct (reversed) majorization is that of a net flux of probability towards (away from) the result of the computation, making the probability distribution steeper (flatter) throughout the whole algorithm.
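Condition (1) can be checked numerically from the sorted cumulative sums. The following sketch is ours (the function name `majorizes` is not from the references), but it implements the textbook definition directly:

```python
import numpy as np

def majorizes(p, q, tol=1e-12):
    """Return True if distribution p majorizes q (Eq. 1):
    every partial sum of the decreasingly sorted p dominates that of q."""
    p_sorted = np.sort(np.asarray(p, dtype=float))[::-1]
    q_sorted = np.sort(np.asarray(q, dtype=float))[::-1]
    return bool(np.all(np.cumsum(p_sorted) >= np.cumsum(q_sorted) - tol))

# A delta distribution majorizes everything; the uniform one is majorized by all.
delta   = [1.0, 0.0, 0.0, 0.0]
uniform = [0.25, 0.25, 0.25, 0.25]
print(majorizes(delta, uniform))   # True
print(majorizes(uniform, delta))   # False
```

Note that majorization is only a partial order: some pairs of distributions are incomparable, in which case neither direct nor reversed majorization holds for that step.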
The principle can now be stated as follows (6):

Majorization Principle: In all optimal and efficient quantum algorithms, the set of sorted probabilities associated to the quantum register must obey either a direct or a reverse step-by-step majorization.

All known quantum algorithms which are both optimal and efficient, i.e. with a quantum speedup over the best classical algorithm, have been proven to satisfy the conjectured MP with a direct or reverse majorization (8). Remarkably, similar majorization constraints have already found applications in highlighting arrows in several other physical processes (16); (17). Similarly, the MP promises to represent the arrow which operates within optimal and efficient quantum algorithms.

The validity of the principle has been proved theoretically for both Grover-like (18) and phase-estimation-like (19) algorithms. Further optimal algorithms studied include a variant of the Bernstein-Vazirani algorithm (20), a set of quantum adiabatic algorithms (21) and a quantum random walk algorithm (22). For all such instances, quantum speedups over the classical state of the art were always found to be associated with a step-by-step majorization, while non-efficient computations did not exhibit one. The case of the Bernstein-Vazirani algorithm is of even greater interest, since no entanglement is created along the computation, while majorization is indeed verified (8). Thus, strong evidence exists that the MP will represent a fundamental tool for the design of future efficient quantum algorithms. The goal of this paper is to provide experimental evidence that this statement holds true for two quantum algorithms: the quantum Fourier transform and a recently proposed validation protocol.

Majorization Principle in a fast QFT — The class of phase-estimation algorithms, which includes Shor's factorization and discrete-logarithm algorithms (19), is of particular importance for its exponential speedup over the best available classical equivalents. Such quantum speedup is ultimately rooted in the efficient processing of the QFT routine, whose $m$-dimensional unitary evolution is given by $U^{\mathrm{QFT}}_{jk} = \frac{1}{\sqrt{m}}\, e^{2\pi i\, jk/m}$, with $j, k = 0, \dots, m-1$.
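As a quick illustration, the QFT matrix above can be built and checked for unitarity in a few lines of NumPy (a sketch using 0-indexed modes; the function name is ours):

```python
import numpy as np

def qft_unitary(m):
    """m-dimensional QFT matrix: U_jk = exp(2*pi*i*j*k/m) / sqrt(m)."""
    j, k = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
    return np.exp(2j * np.pi * j * k / m) / np.sqrt(m)

U = qft_unitary(8)
print(np.allclose(U.conj().T @ U, np.eye(8)))  # True: the evolution is unitary
```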

In this article, we report on the experimental observation of the MP in the case of the QFT, where the routine is encoded in the optical modes. The corresponding transformation has been implemented on a photonic platform by adopting an efficient scheme developed by Barak and Ben-Aryeh (23) (BB) to minimize the number of optical elements required. This scheme represents the quantum analogue of the Fast Fourier Transform (qFFT), the well-known classical algorithm to efficiently calculate the discrete Fourier transform. By adopting this approach, valid for transformations of dimension $m = 2^n$, the necessary number of beamsplitters and phase shifters is significantly reduced to $\frac{m}{2}\log_2 m$ (23), from the $\frac{m(m-1)}{2}$ elements needed for the most general decompositions (24). The qFFT has been realized on photonic integrated interferometers taking advantage of the 3-D capabilities of femtosecond laser writing (12); (13), which allows the waveguides to be arranged in arbitrary and fully scalable three-dimensional structures (14); (15). More specifically, the step-by-step reversed majorization can be directly monitored thanks to the sequential structure that naturally emerges from the BB decomposition, as shown in Fig. 1. The observation has been carried out by injecting single-photon Fock states into three 8-mode integrated interferometers $U^{(s)}$, with $s = 1, 2, 3$, which correspond to partial implementations of the qFFT protocol with different degrees of completion: each $U^{(s)}$ consists of the first $s$ layers of beamsplitters and phase shifters, $\log_2 8 = 3$ being the number of layers in the decomposition of an 8-dimensional QFT. The last interferometer, $U^{(3)}$, performs the complete 8-mode qFFT, where one photon encodes 3 qubits over the optical modes. The effective unitaries implemented by the physical interferometers, which differ from the ideal ones due to unavoidable experimental imperfections, have then been reconstructed.
The reconstruction process has been performed by exploiting a-priori knowledge of the architecture to estimate the transmissivities of the directional couplers and the relative phases of the phase shifters (35). The parameters have been retrieved by minimizing a suitable function of the single-photon and two-photon measurements. The fidelities between the reconstructed transformations and the ideal unitaries obtained with the BB decomposition are , and , thus confirming the quality of the fabrication process. The errors have been estimated with a Monte Carlo simulation, by sampling 1000 sets of new experimental data normally distributed around the measured ones.
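The layered structure exploited by the BB decomposition is the standard radix-2 butterfly factorization of the discrete Fourier transform. The following sketch (an idealized model with textbook twiddle phases, not the fabricated circuit's measured parameters; all names are ours) builds the three beamsplitter/phase layers for $m = 8$ and verifies that their product reproduces the QFT; the mode permutations between layers are routed by the 3-D waveguide crossings and cost no optical elements:

```python
import numpy as np

def butterfly(m):
    """One layer of m/2 balanced beamsplitters with twiddle phases on m modes."""
    half = m // 2
    D = np.diag(np.exp(2j * np.pi * np.arange(half) / m))
    I = np.eye(half)
    return np.block([[I, D], [I, -D]]) / np.sqrt(2)

def even_odd(m):
    """Permutation sending (x0, ..., x_{m-1}) to (evens, odds)."""
    half = m // 2
    P = np.zeros((m, m))
    P[np.arange(half), np.arange(0, m, 2)] = 1
    P[np.arange(half, m), np.arange(1, m, 2)] = 1
    return P

def qft_unitary(m):
    j, k = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
    return np.exp(2j * np.pi * j * k / m) / np.sqrt(m)

# Three beamsplitter layers for m = 8: 3 x 4 = 12 = (m/2) log2(m) beamsplitters,
# instead of m(m-1)/2 = 28 for a general decomposition.
perm = np.kron(np.eye(2), even_odd(4)) @ even_odd(8)  # input mode reordering
layer1 = np.kron(np.eye(4), butterfly(2))
layer2 = np.kron(np.eye(2), butterfly(4))
layer3 = butterfly(8)

print(np.allclose(layer3 @ layer2 @ layer1 @ perm, qft_unitary(8)))  # True
```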

For each input state and each partial transformation $U^{(s)}$, the output probability distributions $\vec{p}^{(s)}$ have been retrieved for the eight output states. The most convenient tool to convey the validity of the Majorization Principle is the Lorenz curve, a continuous piecewise-linear function representing the partial cumulative probability of the $k$ most probable outcomes. For the MP to be satisfied, the curves at each step of the QFT must not cross, as follows from inequality (1). As shown in Fig. 2, a step-by-step reversed majorization is then observed between the output probability distributions of the three interferometers, i.e. by comparing $\vec{p}^{(1)}$, $\vec{p}^{(2)}$ and $\vec{p}^{(3)}$ according to (1).
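The non-crossing behaviour of the Lorenz curves can be reproduced numerically on an idealized three-layer qFFT model (not the measured devices; the `butterfly` and `even_odd` helpers below implement a textbook radix-2 factorization, and all names are ours):

```python
import numpy as np

def butterfly(m):
    half = m // 2
    D = np.diag(np.exp(2j * np.pi * np.arange(half) / m))
    return np.block([[np.eye(half), D], [np.eye(half), -D]]) / np.sqrt(2)

def even_odd(m):
    half = m // 2
    P = np.zeros((m, m))
    P[np.arange(half), np.arange(0, m, 2)] = 1
    P[np.arange(half, m), np.arange(1, m, 2)] = 1
    return P

def lorenz(p):
    """Partial cumulative sums of the decreasingly sorted probabilities."""
    return np.cumsum(np.sort(p)[::-1])

perm = np.kron(np.eye(2), even_odd(4)) @ even_odd(8)
U1 = np.kron(np.eye(4), butterfly(2)) @ perm   # first layer
U2 = np.kron(np.eye(2), butterfly(4)) @ U1     # first two layers
U3 = butterfly(8) @ U2                         # complete 8-mode qFFT

for mode in range(8):                          # every single-photon input state
    curves = [lorenz(np.abs(U[:, mode])**2) for U in (U1, U2, U3)]
    # reversed majorization: each step's Lorenz curve lies below the previous one
    assert np.all(curves[0] >= curves[1] - 1e-12)
    assert np.all(curves[1] >= curves[2] - 1e-12)
print("step-by-step reversed majorization holds for all single-photon inputs")
```

In this ideal model every beamsplitter layer splits the photon evenly, so the distributions flatten from 2 to 4 to 8 uniformly occupied modes and the curves never cross, in line with the reversed majorization observed experimentally.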

Figure 2: Lorenz diagrams for a QFT Majorization experiment, with a single photon encoding three qubits, on the first layer (blue), first and second layers (green) and complete structure (orange) of the 8-dimensional Fourier interferometer. For each intermediate step, eight diagrams are shown, relative to all possible single-photon input states. Each curve represents the partial cumulative probabilities of the $k$ most probable outcomes. Error bars are estimated with a Monte Carlo simulation, to take into account the sorting procedure.
Figure 3: Scheme of the validation protocol. The algorithm certifies full many-boson interference of Fock states (F) in a Fourier interferometer, against the alternative hypotheses of Distinguishable (D) and Mean-Field (MF) states.
Figure 4: Lorenz diagrams for a two-photon Majorization experiment on the first layer (blue), first and second layer (green) and complete structure (orange) of the 8-dimensional Fourier interferometer for the validation algorithm. a) Two photons in the same input mode (5,5). The distribution is the product of two QFT acting on the single photons. b, c) Input modes whose sum is odd (5,6) or even (5,7) respectively. d) Cyclic input (2,6) for the validation algorithm. Each curve is obtained by calculating the partial cumulative probabilities for the most probable outcomes in the case of distinguishable (black triangles) and indistinguishable (circles) photons. Shaded areas are included within the curves corresponding to fully indistinguishable photons (lighter regions) and to fully distinguishable photons (darker regions), as expected from the reconstructed unitary transformations. Error bars, smaller than the markers, have been estimated via a Monte Carlo simulation to take into account the sorting procedure.

Majorization Principle in a validation protocol — Quantum computation aims at developing algorithms able to outperform their classical counterparts on specific tasks. However, in this sought-after regime of quantum supremacy, where standard computers can no longer check the results of a quantum device, the need for a quantum validation protocol becomes urgent and fundamental. This necessity arises prominently in the context of Boson Sampling experiments (25); (26); (27); (28); (29); (30); (32); (31), specialized devices whose task is to provide a first evidence of this future quantum supremacy (33). In this direction, various protocols have been developed (34); (11); (35); (36); (37); (38) and implemented (29); (30); (31); (15) to certify their correct functioning against undesired alternatives (39). One of these protocols, recently developed by Tichy et al. (11), allows one to efficiently certify the source of a Boson Sampling experiment, ruling out alternative hypotheses which would yield output probability distributions similar to that of fully indistinguishable photons. In particular, it was observed that, for symmetric input states of specific interferometers, quantum many-particle interference may determine the suppression of a large number of output combinations in an efficiently predictable way (11), i.e. without having to go through the calculation of a permanent, which is at the core of the computational complexity of the Boson Sampling problem (33). Specifically, denoting with $\vec{r} = (r_1, \dots, r_m)$ a generic input state with $r_j$ indistinguishable photons in the $j$-th mode of the $m$-mode interferometer $U$, the probability of obtaining a certain output configuration $\vec{s}$, given $\vec{r}$ in the input, is given by

$$P(\vec{r} \to \vec{s}) \;=\; \frac{\left| \mathrm{per}\!\left( U_{\vec{r},\vec{s}} \right) \right|^2}{\prod_{j} r_j! \, \prod_{k} s_k!} \tag{2}$$

$U_{\vec{r},\vec{s}}$ being the submatrix obtained by repeating $r_j$ ($s_k$) times the $j$-th column ($k$-th row) of $U$, and $\mathrm{per}(M)$ being the permanent of the matrix $M$ (40). Indeed, let us consider an $m$-dimensional Fourier interferometer described by $U^{\mathrm{F}}_{jk} = \frac{1}{\sqrt{m}}\, e^{2\pi i\, jk/m}$. When injected with cyclic input states, i.e. $n$-photon Fock states distributed over input modes $k_1 < \dots < k_n$ satisfying $k_{j+1} - k_j = m/n$, with $m/n$ integer, they all result in the suppression of the output combinations which do not satisfy the relation $\sum_{j=1}^{n} s_j \bmod n = 0$, $s_j$ being the output mode of the $j$-th photon (11); (31); (15).
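The suppression law can be verified directly from Eq. (2) in the two-photon case. The sketch below uses 0-indexed modes (the pair (1, 5) below corresponds to a cyclic input such as (2, 6) in 1-indexed labeling) and omits the factorial normalization, since only the zero/nonzero pattern matters here:

```python
import numpy as np
from itertools import permutations

def permanent(M):
    """Naive permanent via the permutation sum (fine for the 2x2 case here)."""
    n = M.shape[0]
    return sum(np.prod([M[i, s[i]] for i in range(n)])
               for s in permutations(range(n)))

m = 8
U = np.exp(2j * np.pi * np.outer(np.arange(m), np.arange(m)) / m) / np.sqrt(m)

inp = (1, 5)  # cyclic 2-photon input: modes separated by m/n = 4
for k1 in range(m):
    for k2 in range(k1, m):
        sub = U[np.ix_(inp, (k1, k2))]
        prob = abs(permanent(sub))**2          # unnormalized output probability
        suppressed = (k1 + k2) % 2 != 0        # suppression law for n = 2
        assert (prob < 1e-12) == suppressed
print("all outputs with odd mode sum are suppressed")
```

For $n = 2$ the law reduces to the parity of the output-mode sum: all coincidences on modes of opposite parity vanish, exactly as measured for the cyclic input of Fig. 4d.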

The efficiency and scalability of this algorithm are crucial features for its application in a hard-to-simulate regime. Hence, we expect the MP to be always satisfied while certifying genuine many-photon interference for the cyclic input states. The experiment was carried out by injecting two-photon states into the three 8-mode integrated interferometers implementing partial instances of the qFFT. According to the validation test, a suppression of specific output configurations was expected due to the interference of symmetric states. This effect was indeed observed by measuring all two-photon coincidence events at the output of each interferometer for a given cyclic state, to retrieve the scattering probabilities of having two photons in the output modes $(s_1, s_2)$. For the MP to be observed, the whole set of outcomes has to be recorded: this requirement involved the measurement of the eight bunching events, i.e. those in which two photons exited from the same output mode. This measurement was carried out by adding, at the end of the fiber array coupled to the output of the interferometer, additional fiber-optic splitters to redirect the bunched photons to two separate detectors. A detection system was then able to register all the one-to-one coincidences between any number of firing detectors. For all three interferometers, the probability distributions of 4 two-photon input states have been measured and plugged into (1) to test the validity of the MP. All 36 patterns for the partial cumulative probabilities can in fact be divided into four distinct classes (41), as shown in Fig. 4. Non-crossing curves are expected for Fig. 4a, since the probability distribution is the product of two single-photon QFTs. Furthermore, we observe in Fig. 4b-c that non-crossing curves are present also for non-cyclic input states, which are not employed in the validation protocol (11). This latter observation confirms that the occurrence of the MP does not imply optimality, since the principle does not provide a sufficient condition.
Finally, the non-crossing Lorenz curves relative to the cyclic input state of the validation protocol (Fig. 4d), manifesting that $\vec{p}^{(s)} \succ \vec{p}^{(s+1)}$ at each stage of the evolution, confirm the operation of the principle along the quantum algorithm.

Discussion — We have reported on the experimental demonstration of the Majorization Principle for two efficient quantum algorithms: the quantum Fourier transform and a recently proposed protocol for validating genuine many-boson interference. The observation was carried out on an integrated photonic platform, realized by adopting a novel 3-D architecture fabricated via the femtosecond laser writing technique. Single-photon and two-photon measurements on an 8-dimensional Fourier interferometer have shown the occurrence of the Majorization Principle all along the two quantum protocols, by exploiting a fast decomposition of the evolution into discrete steps. The results obtained provide experimental evidence for the Majorization Principle, making it a promising guide for devising new quantum algorithms with a speedup over their classical counterparts. The good agreement with the expected distributions highlights the quality of the 3-D capabilities of femtosecond laser writing, thus confirming it as an effective tool for broader investigations on photonic platforms.

Acknowledgements. This work was supported by the ERC Starting Grant 3DQUEST (3D-Quantum Integrated Optical Simulation; grant agreement no. 307783): http://www.3dquest.eu, and partially by the Spanish MINECO grants FIS2015-67411 and FIS2012-33152, the CAM research consortium QUITEMAD+ (S2013/ICE-2801), and the U.S. Army Research Office grant W911NF-14-1-0103.


References

  1. M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information. (Cambridge University Press, New York, 2010).
  2. R. Cleve, A. Ekert, C. Macchiavello, and M. Mosca, Quantum algorithms revisited. (Proc. R. Soc. A, 1998)
  3. V. Vedral, Found. Phys. 82, 8 (2010).
  4. S. Lloyd, Phys. Rev. A 61 (1999).
  5. A. Galindo and M. A. Martin-Delgado, Rev. Mod. Phys. 74, 347 (2002).
  6. J. I. Latorre and M. A. Martin-Delgado, Phys. Rev. A 66, 022305 (2002).
  7. R. Orus, J. I. Latorre, and M. A. Martin-Delgado, Quant. Inf. Proc. 1 (4), 283 (2002).
  8. R. Orus, J. I. Latorre, and M. A. Martin-Delgado, EPJ D 29, 119 (2004).
  9. H. Goldstein, C. P. Poole, and J. L. Safko, Classical Mechanics (3rd Edition) (Addison-Wesley, 2001).
  10. R. P. Feynman and A. R. Hibbs, Quantum Mechanics and Path Integrals (McGraw Hill: 1965 Dover Publications).
  11. M. C. Tichy, K. Mayer, A. Buchleitner, and K. Mølmer, Phys. Rev. Lett. 113, 020502 (2014).
  12. R. Osellame, S. Taccheo, M. Marangoni, R. Ramponi, P. Laporta, D. Polli, S. De Silvestri, and G. Cerullo, J. Opt. Soc. Am. B 20, 1559 (2003).
  13. R. R. Gattass and E. Mazur, Nature Photon. 2, 219 (2008).
  14. N. Spagnolo, C. Vitelli, L. Aparo, P. Mataloni, F. Sciarrino, A. Crespi, R. Ramponi, and R. Osellame, Nat. Commun. 4, 1606 (2013).
  15. A. Crespi, R. Osellame, R. Ramponi, M. Bentivegna, F. Flamini, N. Spagnolo, N. Viggianiello, L. Innocenti, P. Mataloni and F. Sciarrino Nat. Commun. 7, 10469 (2016).
  16. A. W. Marshall and I. Olkin, Inequalities: Theory of Majorization and its Applications. (Acad. Press Inc., New York, 1979).
  17. G. Vidal and J. I. Cirac, Phys. Rev. A 66 (2002).
  18. L. K. Grover, Phys. Rev. Lett. 79, 325 (1997).
  19. P. W. Shor, SIAM J. Sci. Stat. Comp. 26, 1484 (1997).
  20. E. Bernstein and U. V. Vazirani, SIAM J. Comp. (1997).
  21. E. Farhi, J. Goldstone, S. Gutmann, and M. Sipser, arXiv:quant-ph/0001106v1.
  22. A. M. Childs, R. Cleve, E. Deotto, E. Farhi, S. Gutmann, and D. A. Spielman, In Proceedings of the 35th annual ACM symposium on Theory of computing (STOC, 2003).
  23. R. Barak and Y. Ben-Aryeh, J. Opt. Soc. Am. B 24, 231 (2007).
  24. W. R. Clements, P. C. Humphreys, B. J. Metcalf, W. S. Kolthammer, and I. A. Walmsley, arXiv:1603.08788v1.
  25. M. A. Broome, A. Fedrizzi, S. Rahimi-Keshari, J. Dove, S. Aaronson, T. C. Ralph, and A. G. White, Science 339, 794 (2013).
  26. J. B. Spring, B. J. Metcalf, P. C. Humphreys, W. S. Kolthammer, X. Jin, M. Barbieri, A. Datta, N. Thomas-Peter, N. K. Langford, D. Kundys et al., Science 339, 798 (2013).
  27. M. Tillmann, B. Dakić, R. Heilmann, S. Nolte, A. Szameit, and P. Walther, Nature Photon. 7, 540 (2013).
  28. A. Crespi, R. Osellame, R. Ramponi, D. J. Brod, E. F. Galvão, N. Spagnolo, C. Vitelli, E. Maiorino, P. Mataloni, and F. Sciarrino, Nature Photon. 7, 545 (2013).
  29. N. Spagnolo, C. Vitelli, M. Bentivegna, D. J. Brod, A. Crespi, F. Flamini, S. Giacomini, G. Milani, R. Ramponi, P. Mataloni, R. Osellame, E. F. Galvão, and F. Sciarrino, Nature Photon. 8, 615 (2014).
  30. J. Carolan, J. D. A. Meinecke, P. J. Shadbolt, N. J. Russell, N. Ismail, K. Wörhoff, T. Rudolph, M. G. Thompson, J. L. O’Brien, J. C. F. Matthews, and A. Laing, Nature Photon. 8, 621 (2014).
  31. J. Carolan, C. Harrold, C. Sparrow, E. Martin-Lopez, N. J. Russell, J. W. Silverstone, P. J. Shadbolt, N. Matsuda, M. Oguma, M. Itoh et al., Science 349, 711 (2015).
  32. M. Bentivegna, N. Spagnolo, C. Vitelli, F. Flamini, N. Viggianiello, L. Latmiral, P. Mataloni, D. J. Brod, E. F. Galvão, A. Crespi, R. Ramponi, R. Osellame and F. Sciarrino, Experimental scattershot boson sampling, Sci. Adv. 1, No. 3 (2015).
  33. S. Aaronson and A. Arkhipov, In Proceedings of the 43rd annual ACM symposium on Theory of computing (ACM Press, 2011).
  34. S. Aaronson and A. Arkhipov, Quantum Inform. Compu. 14, 1383 (2014).
  35. A. Crespi, Phys. Rev. A 91, 013811 (2015).
  36. L. Aolita, C. Gogolin, M. Kliesch and J. Eisert, Reliable quantum certification of photonic state preparations Nat. Commun. 6, 8498, (2015).
  37. M. Walschaers, J. Kuipers, J. D. Urbina, K. Mayer, M. C. Tichy, K. Richter and A. Buchleitner, Statistical benchmark for BosonSampling. New J. Phys. 18, 032001, (2016).
  38. M. Bentivegna, N. Spagnolo and F. Sciarrino, Is my boson sampler working? New J. Phys. 18(4), 041001, (2016).
  39. C. Gogolin, M. Kliesch, L. Aolita, and J. Eisert, arXiv:1306.3995v2.
  40. S. Scheel, Acta Physica Slovaca 58, 675 (2008).
  41. See Supplemental Material for details on the quantum-to-classical transition and on the patterns of 2-photon distributions in Fourier interferometers.