
Quantum-enhanced deliberation of learning agents using trapped ions

Abstract

A scheme that successfully employs quantum mechanics in the design of autonomous learning agents has recently been reported in the context of the projective simulation (PS) model for artificial intelligence. In that approach, the key feature of a PS agent, a specific type of memory which is explored via random walks, was shown to be amenable to quantization, allowing for a speed-up. In this work we propose an implementation of such classical and quantum agents in systems of trapped ions. We employ a generic construction by which the classical agents are ‘upgraded’ to their quantum counterparts by a nested process of adding coherent control, and we outline how this construction can be realized in ion traps. Our results provide a flexible modular architecture for the design of PS agents. Furthermore, we present numerical simulations of simple PS agents which analyze the robustness of our proposal under certain noise models.

pacs:
07.05.Mh, 03.67.Lx, 37.10.Ty, 05.40.Fb

I Introduction

In the past decades, quantum physics has been employed to enhance communication and information processing with significant success, laying the foundation for the now well established fields of quantum computation and quantum information (1); (2); (4); (3); (5). In contrast, the potential of merging the related, but distinct, field of artificial intelligence (AI) with quantum physics is significantly less well-understood. Thus far, advances in this field have been reported mostly for algorithmic approaches to applied AI-related tasks, e.g., (un-)supervised data clustering and process replication, where selected quantum algorithms could be utilized (6); (10); (7); (9); (8).

On the other hand, the first result showing that quantum mechanics can also aid in the complementary task of designing autonomous learning agents, a task more closely related to robotics and embodied cognitive sciences, has only recently been provided in Ref. (11). This work is embedded in the framework of projective simulation (PS) for AI, introduced in Ref. (12). The central component of PS is a specific memory system utilized by the agent. This memory system, called episodic and compositional memory (ECM), provides a platform for simulating future action before real action is taken. The ECM can be described as a stochastic network of so-called clips, which represent prior experiences of the learning agent, whose decision-making process is realized by a random walk in the clip space. In the agent’s design, it is the specific structure of the ECM that is particularly suitable for quantization.

In this work we present a proposal for the experimental implementation of both classical and quantum PS agents in systems of trapped ions. While the classical variants of PS agents can easily be realized in physical systems without requiring quantum control, we show here how certain implementations of classical agents in ion traps can be used to construct quantum PS agents. This is achieved in a generic way through a nested process of adding coherent control.

The outline of this paper is as follows. In Section II we briefly review the PS model and give the basic operational elements which have to be constructed in an implementation of a classical or quantum PS agent. Then, in Section III we give a more formal treatment of the standard, classical PS agent, and show explicitly how such an agent may be implemented in an ion trap set-up. In particular, in Section III.3, we discuss how the technique of adding coherent control provides a generic construction for emulating the standard PS agent in quantum systems, specifically in trapped ions. Finally, in Section IV, we extend our analysis to quantum PS agents by specifying all required operations and describing their implementation in ion traps. In the Appendix we further present a simple example for a quantum PS agent that can be straightforwardly implemented in an ion trap, for which we provide numerical simulations incorporating an appropriate error model.

II Projective Simulation

The central component of a PS agent, illustrated in Fig. 1, is the episodic and compositional memory, which can be formally represented as a stochastic network of clips. Clips are the units of episodic memory and consist of memorized percepts, actions, and ensuing rewards. The process of projective simulation is triggered

Figure 1: Projective simulation agent. The projective simulation (PS) model for active learning agents, introduced in Ref. (12), describes an embodied agent that interacts with its environment via sensory input (percepts) and acts on the environment using a set of actuators. The sensors and actuators are linked to the episodic and compositional memory (ECM), which relates new perceptual input to the agent’s past experience.

by perceptual input that initiates a random walk over the clip space. This walk constitutes the stochastic replay of previously established memories and precedes the initiation of real action. The agent’s capability to learn is represented by two mechanisms, (i) the adaptation of the transition probabilities between the clips, and (ii) the addition of new clips under compositional principles.

More formally, at any instance of time the ECM of an agent can be represented as a directed weighted graph, where the vertices represent the clips, and the weights of the edges represent the transition probabilities, see Fig. 2. We refer to this graph as the clip network. The random walk, or equivalently, the Markov chain, associated to the process of projective simulation is carried out over the clip network. Finally, the learning aspect of the agent is realized by updating the clip network based on the (re)actions and rewards of the environment, with which it interacts.

The criteria under which an action, that is, a clip representing a single memorized action in the ECM, is coupled out as real action can vary, leading to distinct types of PS agents. Here we list a few examples that we will encounter again later in this paper. In the so-called standard PS model, the first action clip that is encountered during the random walk over the clip network is coupled out as the chosen real action. The standard PS model can further be equipped with emotion clips, which are clip tags indicating, for instance, whether a chosen action recently led to a reward. In this extended model, the random walk process can be iterated if the encountered action clip carries a ‘negative’ association, a process we will refer to as reflection.

Figure 2: Clip network. An example for a network with four clips is shown. At any fixed time, the PS agent associates a discrete-time homogeneous Markov chain with transition matrix $P$ with the ECM, which governs the transition probabilities for a random walk in the network. In addition, flags, here indicated on one of the clips, may be introduced, e.g., to relate actions that were recently rewarded to the corresponding percepts.

Elaborating on the notion of reflection, in Ref. (11) some of the authors have recently introduced the reflecting PS (RPS) agent model, in which the Markov chain associated to the clip network is ergodic, and hence has a unique stationary distribution over the clip network. The RPS agent continues the random walk until the stationary distribution is reached, and (iteratively) samples from it until an action clip is observed. Building on the approaches of Refs. (13); (14) for quantizing random walks, this particular model was shown to have a quantum analog, called quantum RPS, which permits a quadratic speed-up in active learning scenarios.

As we have mentioned previously, the PS model can be endowed with additional structures, such as the aforementioned emotion tags, which further improve the agent’s learning capacity, see Ref. ?. These additional structures are, in principle, compatible with the constructions we present, but we shall only utilize the simplest of these extensions, so-called flags, in the examples that are considered in the Appendix. As we will discuss, these flags allow for the demonstration of a quantum speed-up when incorporated into a very simple agent design, which is readily implementable in current laboratories.

In the next section, we present a more formal treatment of the standard PS model, and show how it can be implemented in an ionic set-up.

III Standard PS agent

As noted, in the PS model, the ECM is represented as a clip network, that is, a weighted directed graph over the set of vertices $C = \{c_1, \ldots, c_N\}$, where each $c_i$ represents a clip. The directed weighted edges of the graph represent the transition probabilities from one clip to another¹, given by a transition matrix $P = (p_{ij})$, which is a left-stochastic matrix, that is, $p_{ij} \geq 0$ and $\sum_i p_{ij} = 1$. In the standard PS, we can assume the clip network always contains clips which are representations of individual percepts (from the set of percepts $\mathcal{S}$) as well as clips that represent individual actions (from the set of actions $\mathcal{A}$), where $\mathcal{S} \cup \mathcal{A} \subseteq C$². When presented with a percept $s \in \mathcal{S}$, the standard PS initiates a random walk in the clip network, governed by $P$, and starting from (the clip corresponding to) $s$. The walk is terminated at the first instance an action clip is encountered. This action is then coupled out as a real action.

This process can be viewed in terms of probability vectors as follows. Each clip can be represented as a canonical basis vector of an $N$-dimensional real vector space $\mathbb{R}^N$, that is, $c_i \mapsto \boldsymbol{e}_i = (0, \ldots, 0, 1, 0, \ldots, 0)^T$, with the unity at the $i$-th position. The state after one random walk transition is

$\boldsymbol{p} = P\,\boldsymbol{e}_i = \sum_j p_{ji}\,\boldsymbol{e}_j\,,$  (1)

which is a probability vector, i.e., a vector with real non-negative entries summing to one, representing a probability distribution over the clip space. This distribution is then sampled from, obtaining some clip $c_j$, which, if it represents an action, is coupled out. Otherwise the random walk proceeds from $c_j$.
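To make the deliberation step concrete, the following minimal Python sketch (not taken from the paper) emulates the standard PS random walk over a small clip network; the transition matrix, the clip indices, and the set of action clips are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Illustrative left-stochastic transition matrix over four clips:
# entry P[j, i] is the probability of hopping from clip i to clip j.
P = np.array([[0.2, 0.1, 0.0, 0.0],
              [0.3, 0.4, 0.0, 0.0],
              [0.1, 0.3, 1.0, 0.0],
              [0.4, 0.2, 0.0, 1.0]])
action_clips = {2, 3}            # absorbing action clips (cf. footnote 1)

def deliberate(percept_clip):
    """Random walk from the percept clip until an action clip is hit."""
    clip = percept_clip
    while clip not in action_clips:
        clip = rng.choice(len(P), p=P[:, clip])   # one step governed by column `clip` of P
    return clip

print(deliberate(percept_clip=0))                 # index of the action coupled out
```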

In the spirit of the reinforcement learning paradigm, each round of interaction with the environment is either rewarded or not, and both cases lead to an update of the clip network, by altering the transition probabilities, and/or by altering the clip set itself, which constitutes the learning aspect of the PS agent. For an overview of the standard PS model, including examples of update rules, see Ref. ?.

III.1 Standard PS with Trapped Ions

We shall now discuss how the random walk initiated in a standard PS agent can be emulated in a quantum system, in particular, using laser pulses on a string of trapped ions. Although a quantum implementation is not strictly required for the classical random walk of the standard PS agent, such a construction is the prerequisite for the fully quantized RPS agent that we will discuss in Section IV. For the construction of a quantum mechanical analogue of the transition matrix $P$ we start by promoting the real vector space $\mathbb{R}^N$ to a complex Hilbert space $\mathcal{H} = \mathbb{C}^N$, and representing the clips $c_i$ as orthonormal basis states $|c_i\rangle$. We then construct a unitary $U_i$, such that for a fixed basis state denoted $|0\rangle$ (this may correspond to some clip state, but the particular choice of this fixed state is unimportant) the components of the state $U_i|0\rangle$ with respect to the clip basis encode the transition amplitudes as dictated by the transition matrix $P$, i.e.,

$U_i\,|0\rangle = \sum_j \sqrt{p_{ji}}\;|c_j\rangle\,.$  (2)

We can see that a measurement of the state above in the clip basis recovers the right-hand side of the classical Eq. (1). However, a single unitary cannot encode all the transitions of $P$. This can be seen quite simply, by noting that the columns of the matrix representation of a unitary are required to be orthogonal, while the columns of $P$ may even be identical. In general, one therefore requires $N$ distinct unitaries $U_i$ to represent all transitions of $P$ on an $N$-dimensional Hilbert space. In other words, the first column, corresponding to the basis state $|0\rangle$, of the unitary $U_i$ determines the transition probabilities from the clip $c_i$ to any other clip in the sense of Eq. (2). Eq. (1) could be recovered even if the amplitudes in Eq. (2) had arbitrary relative complex phases. These phases are irrelevant in the context of the classical agent, but for the purpose of the extension to the quantum RPS we restrict the entries of the first column of $U_i$ to be real and positive.

Note that, given the set of unitaries $\{U_i\}_{i=1,\ldots,N}$, each corresponding to a column of an $N$-state transition matrix $P$, one can emulate any classical random walk by iterating the measurement of the quantum register (in the clip basis), resetting the register to the state $|0\rangle$, and applying the $U_i$ corresponding to the prior measurement result. The capacity to generate such unitaries will, in the next section, be used as a primitive to construct coherent quantum walks. Here we first analyze how such unitaries can be realized in an ionic set-up.
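As an illustration of this measure-and-reset emulation, the sketch below completes a unitary whose first column is $\sqrt{p_{ji}}$ by a Householder reflection (one of several possible completions, and not the pulse-level construction developed below) and reproduces one classical step of the walk by a measurement in the clip basis.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

def probability_unitary(p):
    """Real orthogonal U_i whose first column is sqrt(p) (Householder completion)."""
    v = np.sqrt(np.asarray(p, dtype=float))
    e0 = np.zeros_like(v); e0[0] = 1.0
    w = v - e0
    if np.allclose(w, 0.0):
        return np.eye(len(v))
    return np.eye(len(v)) - 2.0 * np.outer(w, w) / (w @ w)

def classical_step(P, i):
    """Emulate one step from clip i: prepare |0>, apply U_i, measure in the clip basis."""
    U_i = probability_unitary(P[:, i])
    probs = np.abs(U_i[:, 0]) ** 2            # Born rule
    return rng.choice(len(probs), p=probs / probs.sum())

P = np.array([[0.5, 0.1],
              [0.5, 0.9]])
samples = [classical_step(P, 0) for _ in range(10000)]
print(np.bincount(samples) / len(samples))    # ~ first column of P, i.e. [0.5, 0.5]
```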

To proceed, we wish to encode the clip basis in the internal states of a chain of trapped ions, and the unitaries $U_i$ in the laser pulses driving the transitions between them. We will consider a setup as described, e.g., in Refs. (16); (17). A string of Ca ions is confined by a quadrupole trap (Paul trap). The ion confinement can be described by harmonic potentials, and the Coulomb repulsion of the ions couples the harmonic oscillators, such that the motion of the ions can be captured in terms of their collective normal modes. For each ion, two Zeeman sub-levels that can be coupled by a quadrupole transition, denoted $|0\rangle$ and $|1\rangle$ in the following, are used to represent the computational basis states of a single qubit. In turn, we employ the state space of $n$ qubits as a representation of the clip network. Hence, the PS implementation we propose requires $n = \lceil \log_2 N \rceil$ ions for a network of $N$ clips.

The required unitaries can be realized with two laser beams (16); (17), one of which is a broad beam that is nearly collinear to the ion chain, such that all ions are illuminated. The second laser beam can be focussed to address each ion individually. When operated resonantly at the frequency of the transition $|0\rangle \leftrightarrow |1\rangle$, the first laser realizes the collective gate

$U_X(\theta) = \exp\Bigl(-i\,\tfrac{\theta}{2}\sum_k \sigma_x^{(k)}\Bigr)\,,$  (3)

where we use the shorthand notation $\sigma_x^{(k)}$ for the Pauli $X$ operator acting on the $k$-th qubit. The second laser, on the other hand, is applied off-resonance to provide the single-qubit gate

$U_Z^{(k)}(\theta) = \exp\bigl(-i\,\tfrac{\theta}{2}\,\sigma_z^{(k)}\bigr)\,.$  (4)

The operations of Eqs. (3) and (4) can further be complemented with an entangling gate, such as the Cirac-Zoller (18) or Mølmer–Sørensen (19) gate, to form a universal set of quantum gates, and hence provide the possibility to construct the unitaries $U_i$ in principle. In general, the aim is to determine a sequence of operations with free parameters $\alpha_1, \alpha_2, \ldots$, such that all entries of the first column of the resulting overall unitary $U_i$ are real and positive, and such that for appropriate choices of the $\alpha_m$ their squares can form any arbitrary probability distribution $\{p_j\}$, with $\sum_j p_j = 1$. The freedom in the choice of parameters allows for all of the operators $U_i$ to be represented by some specific choices of the $\alpha_m$. The agent is considered to operate based on a fixed internal architecture, and the tuning of the angles should therefore have a simple operational meaning. At every step of the learning process, the agent only updates a set of parameters, here the $\alpha_m$, corresponding to the duration of some laser pulses within a fixed sequence. For instance, in the very simple case of a clip network with only two clips, the required unitary can be chosen to be a Pauli-$Y$ rotation of a single qubit, given by

$U_y(\alpha) = \exp\bigl(-i\,\tfrac{\alpha}{2}\,\sigma_y\bigr) = \begin{pmatrix} \cos\tfrac{\alpha}{2} & -\sin\tfrac{\alpha}{2} \\ \sin\tfrac{\alpha}{2} & \cos\tfrac{\alpha}{2} \end{pmatrix}\,,$  (5)

which can be realized by three laser pulses, i.e.,

$U_y^{(k)}(\alpha) = U_Z^{(k)}\bigl(\tfrac{\pi}{2}\bigr)\, U_X(\alpha)\, U_Z^{(k)}\bigl(-\tfrac{\pi}{2}\bigr)\,,$  (6)

and where we have included the qubit label $k$ for later convenience.
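As a quick consistency check of the pulse decomposition assumed in Eq. (6), the snippet below verifies numerically that conjugating an $x$ rotation by $\pm\pi/2$ rotations about $z$ yields the desired $y$ rotation; the sign conventions are the ones assumed in this reconstruction and may differ from those of a specific experiment.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])

def U(axis, theta):
    """exp(-i theta/2 sigma_axis), the rotation convention assumed for Eqs. (3)-(6)."""
    s = {'x': sx, 'y': sy, 'z': sz}[axis]
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * s

alpha = 1.2345
pulses = U('z', np.pi / 2) @ U('x', alpha) @ U('z', -np.pi / 2)
print(np.allclose(pulses, U('y', alpha)))   # True: three pulses realize U_y(alpha)
```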

As we have mentioned earlier, the ‘probability unitaries’ presented above will become the building blocks of the quantum PS agent. The second, and last, crucial ingredient in our construction is the technique of adding coherent control, which we shall briefly present next.

III.2 Coherent Controlization

Adding coherent control entails coherently conditioning (unitary) operations on the state of a control system. More formally, this is represented as a mapping from a set of unitaries $\{U_i\}_{i=1,\ldots,N}$, acting on a Hilbert space $\mathcal{H}$, to a single controlled unitary $U$ of the form

$U = \sum_{i=1}^{N} |i\rangle\!\langle i|_{C} \otimes U_i\,,$  (7)

which acts on $\mathcal{H}_C \otimes \mathcal{H}$, where $\mathcal{H}_C$ is an (at least) $N$-dimensional Hilbert space, and $\{|i\rangle_C\}_{i=1,\ldots,N}$ is an orthonormal basis of $\mathcal{H}_C$. Practically, this mapping may be understood as a physical procedure of adding quantum control to individual elementary operations (20). We refer to such mappings and the associated physical processes, which implicitly feature in many quantum algorithms (3); (21), as coherent controlization. As we will discuss in Section IV, coherent controlization forms an essential part of the construction of the quantum RPS agent.
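At the level of matrices, coherent controlization simply assembles the block-diagonal operator of Eq. (7). The sketch below does this explicitly for a list of target unitaries; the physical scheme of Section III.3 realizes the same operation with laser pulses rather than by matrix assembly.

```python
import numpy as np

def controlled(unitaries):
    """Return U = sum_i |i><i|_C (x) U_i as a block-diagonal matrix, cf. Eq. (7)."""
    d = unitaries[0].shape[0]
    assert all(u.shape == (d, d) for u in unitaries), "targets must act on the same space"
    N = len(unitaries)
    U = np.zeros((N * d, N * d), dtype=complex)
    for i, U_i in enumerate(unitaries):
        U[i * d:(i + 1) * d, i * d:(i + 1) * d] = U_i
    return U

# toy check with two single-qubit y rotations as the elementary operations
ry = lambda a: np.array([[np.cos(a / 2), -np.sin(a / 2)],
                         [np.sin(a / 2),  np.cos(a / 2)]])
U = controlled([ry(0.3), ry(1.1)])
print(np.allclose(U.conj().T @ U, np.eye(4)))   # the controlled operation is unitary
```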

As a first instance of its applicability, coherent controlization provides an elegant method to generically assemble and combine probability unitaries. The latter may also be assembled in other, sometimes more efficient ways, and one alternative construction is provided in the Appendix. Nonetheless, the construction of the probability unitaries using coherent controlization offers the opportunity to illustrate this method on a simple and useful example.

Before we begin, let us recall the task at hand. For a given probability distribution $\{p_j\}_{j=1,\ldots,N}$, corresponding to the $i$-th column of the stochastic matrix $P$, we wish to construct the associated unitary $U_i$, such that the first column of $U_i$ has real and positive entries $\sqrt{p_j}$, with $\sum_j p_j = 1$.

As the elementary operations that depend on these parameters we select single-qubit $\sigma_y$ rotations $U_y(\alpha)$, which, for a trapped ion setup, may be realized as in Eq. (6), and where we drop the qubit label for ease of notation. Any probability unitary on an $N$-clip network can then be assembled by a nested scheme of coherent controlization on $n$ qubits, where $n$ is the smallest integer such that $2^n \geq N$. For simplicity, let us assume here that the size of the clip network is such that $N = 2^n$, which can always be achieved by duplicating some clips.

Figure 3: Coherent controlization. The circuit diagrams show the construction of a three-qubit probability unitary [cf. Eqs. (8) and (9)] using coherent controlization. The filled dots “●” on the controlled operations indicate that the unitaries on the target are conditioned on the control qubit state $|1\rangle$, while the hollow dots “○” represent conditioning on the control qubit state $|0\rangle$.

For a two-clip probability distribution $\{p_1, p_2\}$, the probability unitary is trivially realized by a single-qubit rotation $U_y(\alpha)$, with $\cos^2(\alpha/2) = p_1$ and $\sin^2(\alpha/2) = p_2$. To extend this to a four-clip probability unitary, with probability distribution $\{p_1, p_2, p_3, p_4\}$, one adds a second qubit, hence $n = 2$, and starts again with the operation $U_y(\alpha_1)$ on the first qubit, where $\cos^2(\alpha_1/2) = p_1 + p_2$ and $\sin^2(\alpha_1/2) = p_3 + p_4$. This is followed by two controlled rotations of the second qubit, conditioned on the state of the first, that is, $U_y(\alpha_2)$ is applied if the first qubit is in the state $|0\rangle$, while $U_y(\alpha_3)$ is applied when the first qubit is in the state $|1\rangle$. The corresponding angles are determined from the renormalized probabilities within the respective subspaces, i.e., $\cos^2(\alpha_2/2) = p_1/(p_1+p_2)$ and $\cos^2(\alpha_3/2) = p_3/(p_3+p_4)$.
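The sketch below assembles a four-clip probability unitary along these lines; the ordering of the computational basis ($|00\rangle, |01\rangle, |10\rangle, |11\rangle$) and the resulting angle assignments are assumptions made for illustration.

```python
import numpy as np

def ry(a):
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

def four_clip_probability_unitary(p):
    """Nested construction for p = (p1, p2, p3, p4) with full support:
    the first qubit splits {p1+p2 | p3+p4}, controlled rotations on the
    second qubit split each branch according to renormalized probabilities."""
    p = np.asarray(p, dtype=float)
    a1 = 2 * np.arccos(np.sqrt(p[0] + p[1]))
    a2 = 2 * np.arccos(np.sqrt(p[0] / (p[0] + p[1])))
    a3 = 2 * np.arccos(np.sqrt(p[2] / (p[2] + p[3])))
    P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])   # projectors on the control qubit
    U1 = np.kron(ry(a1), np.eye(2))                     # rotation of the first qubit
    CU = np.kron(P0, ry(a2)) + np.kron(P1, ry(a3))      # controlled rotations of the second
    return CU @ U1

p = np.array([0.1, 0.2, 0.3, 0.4])
U = four_clip_probability_unitary(p)
print(np.round(U[:, 0] ** 2, 3))    # squared first column reproduces p
```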

For larger values of $N$, the controlization becomes nested, see Fig. 3, e.g., for $N = 8$ ($n = 3$), the lowest level of single-qubit and controlled single-qubit operations is followed by controlled operations on a third qubit. Labeling the qubits as $a$, $b$, and $c$, we may write the corresponding probability unitary as

(8)

where the controlled two-qubit operations are given by

(9a)
(9b)

As we have argued above, coherent controlization allows for the construction of general probability unitaries from basic single-qubit probability unitaries. Despite the simple appearance of the circuits in Fig. 3, the practical implementation of coherent controlization requires additional attention. In fact, it is generally impossible to decompose a quantum-controlled operation into individual gates that are independent of the controlled unitary, which implies that the controlled operation cannot be specified if the unitary to be controlled is unknown (22); (23). This seems to suggest that coherent controlization requires computational effort in its implementation. However, for the ionic implementation that we will discuss next, we exploit additional degrees of freedom of the physical setup to perform coherent controlization in a generic way.

Figure 4: Level structure of trapped ions. An illustration of the energy levels of one of the ions in the trap is shown. Two levels, $|0\rangle$ and $|1\rangle$, are chosen to represent the qubit, while the auxiliary levels $|0'\rangle$ and $|1'\rangle$, and the first excited state of the common vibrational mode are used in the process of coherent controlization. The transitions indicated in the figure can be realized by appropriately detuned $\pi$-pulses.

III.3 Coherent Controlization in Trapped Ions

We shall now discuss how quantum control can be practically added to unitaries that are realized by laser pulses in a trapped ion setup, based on the scheme introduced in Ref. (20). As an example we give the explicit pulse decomposition that realizes a controlled two-qubit unitary, which can be viewed as a special case of Eq. (9a), where we use two ions serving as control and target, respectively, before we explain how this method is generalized to the control of multi-qubit unitaries.

To start, we note that part of the operation can be trivially implemented by the pulse sequence of Eq. (6), and we can thus focus our attention on the remaining controlled term. Apart from the laser pulses for the elementary operations $U_y(\alpha)$, our scheme for their coherent controlization also consists of a number of additional rotations in two-dimensional subspaces of the ionic energy levels other than the one spanned by $|0\rangle$ and $|1\rangle$, see Fig. 4. We will use additional labels to distinguish these operations, where superscripts identify different detuning frequencies, and a subscript identifies the ion. Furthermore, we make use of one of the common vibrational modes, which we assume has been cooled to the ground state before the following steps are executed.

  1. Cirac-Zoller (18); (24) method: A sequence of appropriately blue-detuned laser pulses is applied on ion  to realize , which transfers the population of  to . This step encodes the state of qubit  in the vibrational mode, i.e., the initial state of the form is transformed to .

  2. Hiding: Red-detuned laser pulses corresponding to  and  are applied to ion  to transfer the populations from  to  , as well as from  to , as illustrated in Fig. 4. Denoting the state  encoded in the levels  and  as , we may write the overall state after this step as .

  3. : The pulse sequence that realizes  is applied to ion , which leaves the system in the state .

  4. Switching: To exchange the primed and unprimed levels, laser pulses for  and , which are blue- and red-detuned, respectively, are applied to ion , see Fig. 4. The resulting overall state after these operations is .

  5. : The pulse sequence that realizes  is applied to ion , such that the system is now in the state .

  6. Switching: The primed and unprimed levels are exchanged again using the laser pulses for  and  on ion , which leaves the system in the state .

  7. Unhiding: The hiding operations of step (ii) are reversed by the application of  and  to ion , leaving the system in the state .

  8. Return control: Finally, the population transfer of step (i) is reversed on the first ion, which returns the control from the vibrational mode and provides the desired state, that is, one of the two conditioned unitaries acts on the second ion when the first ion is in the state $|0\rangle$, while the other acts upon the subspace in which the first ion is in the state $|1\rangle$.

If required, the scheme laid out in steps (i)-(viii) may be straightforwardly extended to larger clip spaces by increasing the number of control qubits and vibrational modes used. Each rotation in principle requires three individual pulses, see Eq. (6), but the collective rotations for these operations can be subsumed into two single pulses at the start and at the end of the entire pulse sequence, respectively. We hence find that the overall number of elementary laser pulses necessary to assemble an $n$-qubit probability unitary grows exponentially with $n$. Note that such an exponential scaling in terms of the number of qubits used is inevitable, as $n$ qubits encode $2^n$ probabilities, and we must have the freedom to specify each one of these. In terms of the state space of the ECM network (clip number) the scaling is linear.

In such a process, vibrational modes of different frequencies are used to generalize steps (i) and (viii), conditioning multi-qubit operations on the state of the first qubit by transferring the populations (exclusively) between the corresponding internal levels.

Next, we give the basics of the classical and quantum RPS agent models, and show how the two components—coherent controlization and probability unitaries—can be utilized to construct these in systems of trapped ions.

IV Reflecting PS with Trapped Ions

We now turn to the so-called reflecting projective simulation (RPS) agent introduced in Ref. (11). The central aim of the RPS is to output the actions according to a specific distribution, which we shall specify shortly, that is updated, indirectly, as the ECM network is modified throughout the learning process. Here, the clip network is disjoint, and it comprises unconnected percept-specific subnetworks with associated stochastic (ergodic and time-reversible) matrices $P_s$, for each percept $s$.

Depending on which percept is observed, the random walk is executed on the corresponding percept-specific (sub-)network, where it is continued until the Markov chain $P_s$ is (approximately) mixed, that is, until the respective stationary distribution $\pi_s$, which has support over the entire clip space, is (approximately) reached. The agent then samples from the obtained distribution, and iterates the procedure (which requires re-mixing of the Markov chain) until an action is hit. More specifically, the RPS agent is designed to output (a good approximation of) the tailed distribution defined as

$\tilde{\pi}_s(c_i) = \begin{cases} \pi_s(c_i)/\epsilon & \text{if } c_i \in \mathcal{A}\,, \\ 0 & \text{otherwise}\,, \end{cases}$  (10)

where $\epsilon = \sum_{a \in \mathcal{A}} \pi_s(a)$ is a normalization factor such that $\sum_i \tilde{\pi}_s(c_i) = 1$. That is, $\tilde{\pi}_s$ is the re-normalized distribution truncated such that it has support only over the action space.
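For concreteness, a small sketch of the tailed distribution of Eq. (10); the stationary distribution and the action indices are placeholders.

```python
import numpy as np

def tailed_distribution(pi, action_indices):
    """Renormalize the stationary distribution over the action clips only, cf. Eq. (10)."""
    tail = np.zeros_like(pi)
    tail[action_indices] = pi[action_indices]
    eps = tail.sum()                  # total stationary weight carried by the actions
    return tail / eps, eps

pi = np.array([0.55, 0.3, 0.1, 0.05])              # stationary distribution over four clips
tilde_pi, eps = tailed_distribution(pi, [2, 3])    # clips 2 and 3 taken to be actions
print(tilde_pi, eps)                               # [0, 0, 2/3, 1/3] and eps = 0.15
```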

Despite the differences in the walk termination criteria of the standard PS and RPS models, all the operational elements required for an emulation of a classical RPS agent in an ionic set-up have already been presented in the last section, as the previously described construction enables the emulation of any classical random walk.

In the remainder of this section, we aim to show how the quantum RPS agent, which employs a truly coherent quantum walk (in the sense of (13); (14)) to obtain a quadratic speed-up over the classical RPS agent, can be implemented based on the coherent controlization of unitaries as discussed in Section III.3. For notational simplicity, we will from this point on drop the subscript $s$ indicating the percept to which the network in question corresponds, unless it is specifically required.

The central process of the quantum RPS model, the basics of which we present next, is a so-called Szegedy-type quantum random walk, see, e.g., Ref. (14), that is performed on the percept-specific ECM (sub-)network. These Szegedy-type quantum random walks are used in the quantum RPS agent in order to output an action distributed according to the tailed stationary distribution with a quadratically decreased number of elementary diffusion steps, as compared to a classical RPS agent.

As the structure of this decision-making process is rather involved, let us briefly sketch it out here, before proceeding in more detail. The basic building block of a Szegedy-type walk is the elementary diffusion unitary $U_P$, which acts on a two-register system, each register being of sufficient dimensionality to represent the entire clip network. One application of $U_P$ can be considered as the analog of one step of the classical walk governed by the transition matrix $P$. The Szegedy walk operator $W$, on the other hand, is constructed using four applications of $U_P$ (or its inverse), and some quantum operations which are independent of $P$. One of the distinct properties of the operator $W$ is that its unique eigenvalue-$1$ eigenstate $|\pi\rangle$ is a particular coherent encoding of the stationary distribution $\pi$ of the Markov chain. Exploiting this property, and using a modified Kitaev phase estimation algorithm (21), we can construct an approximate reflection operator (ARO), which reflects over the state $|\pi\rangle$. The speed-up achieved in the quantum RPS originates, in part, from the efficiency of the construction of the ARO operator in terms of the number of applications of the diffusion unitary $U_P$, relative to the mixing time of the Markov chain as specified by $P$.

The ARO operator above can then be used in search algorithms (e.g., as in Refs. (13); (14)), as well as in the decision-making process of the RPS agent, which can be seen as a Grover-type (4) reflection process in the following sense. Upon the system, initialized in the state $|\pi\rangle$, one sequentially applies a ‘check’ operator, which adds a relative phase of $-1$ to all basis states corresponding to actions, followed by the ARO operator, which reflects over the coherent encoding of the stationary distribution. This, like in the Grover algorithm, induces a sequence of rotations in a two-dimensional workspace, which, after a certain number of iterations, guarantees that the system state has a constant overlap with the state encoding the aforementioned tailed distribution. The second component of the quantum speed-up lies in the number of these iterations, which inherits the quadratic improvement that is characteristic of Grover’s algorithm. With this in mind, let us now give further details of the building blocks of the quantum RPS.

IV.1 The Szegedy Walk Operator

As we have argued previously, a unitary on an $N$-dimensional Hilbert space is not capable of representing all transitions of an arbitrary Markov chain over a network of $N$ clips. For this reason, the classical random walk for a given transition matrix $P$ that we have described in Section III.1 is realized by, in general, $N$ unitaries $U_i$, where $U_i$ is associated with the $i$-th column of $P$. In the Szegedy-type approach to quantum

(a) (b) (c)

Figure 5: Szegedy walk operator. The circuit representations of the Szegedy walk operator $W$ of Eq. (14), as well as the reflections over the subspaces $A$ and $B$ [see Eq. (13)], are shown in Fig. 5 (a), (b), and (c), respectively. The reflection over $A$ ($B$) is fully determined by the walk operator $U_P$ ($U_{P'}$) and a reflection over $|0\rangle$.

walks, two copies, $\mathcal{H}_1$ and $\mathcal{H}_2$, of an $N$-dimensional Hilbert space, i.e., $\mathcal{H}_1 \otimes \mathcal{H}_2 \simeq \mathbb{C}^N \otimes \mathbb{C}^N$, are used to accommodate all the required degrees of freedom. For a time-reversible Markov chain we define the unitary walk operators $U_P$ and $U_{P'}$ as

(11a)
(11b)

where $\{|c_i\rangle_1\}$ and $\{|c_j\rangle_2\}$ form bases of $\mathcal{H}_1$ and $\mathcal{H}_2$. The unitaries $U_P$ and $U_{P'}$ act on $\mathcal{H}_1 \otimes \mathcal{H}_2$ according to

(12)

In the context of quantum RPS agents, we assume that the underlying ergodic Markov chain is time-reversible, i.e., it satisfies detailed balance. Although the Szegedy-type walk can be defined even if this is not the case, one would additionally require access to the time-reversed transition matrix³ in such a situation. Here, we will present the construction in the most general terms, with the implicit understanding that for the RPS, the unitary $U_{P'}$ can be obtained from $U_P$ by swapping the registers prior to, and after, the application of $U_P$. With the operators $U_P$ and $U_{P'}$ at hand, we can now proceed with the construction of the Szegedy walk operator $W$, which is implemented by reflecting over the subspaces $A$ and $B$, defined as

(13a)
(13b)

The generalized walk operator is then defined as

$W = W(P) = \mathrm{ref}_{B}\,\mathrm{ref}_{A}\,,$  (14)

where, for $X \in \{A, B\}$, we have

$\mathrm{ref}_{X} = 2\,\Pi_{X} - \mathbb{1}\,,$  (15) with $\Pi_X$ the projector onto the subspace $X$.

The two operators $\mathrm{ref}_A$ and $\mathrm{ref}_B$ are constructed from the diffusion operators $U_P$ and $U_{P'}$, along with reflections over the state $|0\rangle$ of the respective register, as shown in Fig. 5. The unique eigenvalue-$1$ eigenstate $|\pi\rangle$ of the Szegedy walk operator $W$, which coherently encodes the stationary distribution $\pi$ on the two registers, is given by

$|\pi\rangle = \sum_{i,j} \sqrt{\pi_i\, p_{ji}}\; |c_i\rangle_1 |c_j\rangle_2\,.$  (16)
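The Szegedy construction can be checked numerically in a few lines. The sketch below builds the reflection over the subspace spanned by the states $|c_i\rangle_1|p_i\rangle_2$ (with $|p_i\rangle = \sum_j \sqrt{p_{ji}}\,|c_j\rangle$) for a small reversible chain, obtains the second reflection by the register swap mentioned above, and verifies that the coherent encoding of the stationary distribution is left invariant by $W$. The Metropolis chain, the register ordering, and the order of the two reflections are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(seed=7)
N = 4

# A reversible (detailed-balance) Markov chain for a chosen stationary distribution pi,
# standing in for a percept-specific ECM network; column i holds transitions from clip i.
pi = rng.random(N); pi /= pi.sum()
P = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        if i != j:
            P[j, i] = min(1.0, pi[j] / pi[i]) / N
    P[i, i] = 1.0 - P[:, i].sum()

# Isometry whose columns are |c_i>_1 |p_i>_2 with |p_i> = sum_j sqrt(P_ji) |c_j>.
A = np.zeros((N * N, N))
for i in range(N):
    A[:, i] = np.kron(np.eye(N)[:, i], np.sqrt(P[:, i]))

ref_A = 2 * A @ A.T - np.eye(N * N)       # reflection over span{|c_i>|p_i>}
S = np.zeros((N * N, N * N))              # swap of the two registers
for i in range(N):
    for j in range(N):
        S[j * N + i, i * N + j] = 1.0
ref_B = S @ ref_A @ S                     # second reflection via the register swap
W = ref_B @ ref_A                         # Szegedy walk operator

pi_state = A @ np.sqrt(pi)                # coherent encoding of pi on both registers
print(np.allclose(W @ pi_state, pi_state))   # |pi> is invariant under W
```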

IV.2 The Approximate Reflection Operator

The next step in the design of a quantum RPS agent is the construction of the approximate reflection operator (ARO) from the walk operator $W$. The ARO is designed to approximate the (ideal) reflection operator

$2\,|\pi\rangle\!\langle\pi| - \mathbb{1}\,.$  (17)

With the generalized walk operator $W$ at hand, an approximate reflection over $|\pi\rangle$ is obtained (14) by implementing a phase detection operator, a modification of Kitaev’s (21) phase estimation algorithm, shown in Fig. 6. For this task, we add $t$ ancilla qubits, where $t$ scales as $O(\log(1/\delta))$, where $\delta$ is the spectral gap of the Markov chain, i.e., $1-\delta$ is the second largest eigenvalue of $P$. We employ the phase detection operator and its inverse, with an intermediate reflection over the ancilla state $|0\rangle^{\otimes t}$. This combination of operations approximates the reflection over $|\pi\rangle$ from Eq. (17). An analysis of the fidelity of the reflection, as a function of $t$, is given in Ref. (14). The crucial feature of this construction is that the ARO operates based on a number of calls to $U_P$ that scales as $O(1/\sqrt{\delta})$⁴, while the number of calls to $U_P$ needed to prepare the stationary distribution for the classical RPS scales as $O(1/\delta)$.

(a) (b)

Figure 6: Phase detection and approximate reflection. The circuit in Fig. 6 (a) shows the phase detection operator, which forms part of Kitaev’s phase estimation scheme (21). Registers 1 and 2 are complemented by $t$ ancilla qubits, here labeled Aux 1, Aux 2, …, Aux $t$, which are all initialized in the state $|0\rangle$, followed by Hadamard gates $H$. The execution of the respective powers of $W$ is then conditioned on the state of the corresponding ancilla qubit, before another Hadamard gate is performed. In Fig. 6 (b) the approximate reflection operator (ARO) is combined from the phase detection circuit and its inverse, with an intermediate reflection over the ancilla state $|0\rangle^{\otimes t}$.

IV.3 Quantum deliberation

To output a distribution of actions that corresponds to the tail of the stationary distribution with support only over the (flagged) actions, the agent performs a quantum deliberation process with elements reminiscent of Grover-like steps (4); (14). In the preparation phase, the agent first initializes the joint system of registers 1 and 2 in the state $|\pi\rangle$ from Eq. (16). While the preparation of this initial state may be involved in general, in certain cases, including the one presented in the Appendix, it becomes straightforward. Subsequently, the agent alternately applies the following two operations:

  1. Reflection over the actions:

    $\mathrm{ref}_{\mathcal{A}} = \mathbb{1} - 2 \sum_{a \in \mathcal{A}} |a\rangle\!\langle a|_1 \otimes \mathbb{1}_2\,,$  (18)

    where $\mathcal{A}$ denotes the set of (flagged) actions.

  2. Approximate reflection over the state $|\pi\rangle$.

The sequence of operations above will, similarly to Grover’s algorithm, increase the amplitude of the actions with respect to non-action components in the state of the system, while maintaining the relative weights of the action elements. This ensures that the actions are output according to the correct distribution, as explained in (11).

After iterating these steps a number of times that is determined by the relative probability of the actions within the stationary distribution, the agent samples, that is, measures in the clip basis of register 1. If a desired action is found, it is coupled out, otherwise the procedure is repeated (11). The average number of iterations of the Grover-like steps (i) and (ii) scales as $O(1/\sqrt{\epsilon})$, while the classical RPS agent requires $O(1/\epsilon)$ iterations on average, where $\epsilon$ is the probability of obtaining an action when sampling from the stationary distribution.
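The effect of the two alternating reflections can be illustrated with the ideal reflection of Eq. (17) in place of the ARO, and on a single register for simplicity (the Grover-type argument is insensitive to the register structure); the stationary distribution, the action clips, and the iteration count below are illustrative.

```python
import numpy as np

pi = np.array([0.02, 0.03, 0.6, 0.35])   # stationary distribution; clips 0 and 1 are actions
actions = [0, 1]
eps = pi[actions].sum()

pi_state = np.sqrt(pi)
ref_pi = 2 * np.outer(pi_state, pi_state) - np.eye(len(pi))   # ideal reflection, Eq. (17)
ref_act = np.eye(len(pi)); ref_act[actions, actions] = -1.0   # phase flip on action clips

k = int(np.floor(np.pi / (4 * np.arcsin(np.sqrt(eps)))))      # ~ 1/sqrt(eps) iterations
state = pi_state.copy()
for _ in range(k):
    state = ref_pi @ (ref_act @ state)

probs = np.abs(state) ** 2
print(probs[actions] / probs[actions].sum())   # relative weights match pi[actions]/eps
print(probs[actions].sum())                    # probability of sampling an action
```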

IV.4 Reflecting PS Implementation for Trapped Ions

Finally, let us examine the possibility to implement the decision-making process of a quantum RPS agent in an ion trap. As we have explained, two operators are required, the reflection over (flagged) actions, and the ARO. The former can be generically achieved, for instance, by applying detuned pulses of the kind used in the switching operations of the coherent controlization step (iv) specifically to those basis states corresponding to (flagged) actions, flipping their sign. The latter, the ARO, is implemented starting from the probability unitaries, by coherent controlization, in conjunction with a few fixed operations that we specify below.

Let us briefly describe the individual steps of this procedure. By coherently conditioning the probability unitaries $U_i$, the operation $U_P$ is obtained, from which the pulse sequence for $U_{P'}$ follows by swapping the registers, which, in practice, corresponds to an exchange of the qubit/ion labels in the pulse sequence for $U_P$. The associated inverse operators follow immediately by reversing the pulse sequence with inverted rotation angles, i.e., by setting $\alpha_m \to -\alpha_m$. The remaining fixed reflections are obtained as special cases of the reflection over the (flagged) actions. The Hadamard gate

$H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$  (19)

can be implemented up to a phase of $-i$, that is, for the $k$-th ion we have the pulse sequence

$-i\, H^{(k)} = U_Z^{(k)}\bigl(\tfrac{\pi}{2}\bigr)\, U_X\bigl(\tfrac{\pi}{2}\bigr)\, U_Z^{(k)}\bigl(\tfrac{\pi}{2}\bigr)\,,$  (20)

with $U_X$ as in Eq. (3), and $U_Z^{(k)}$ given by Eq. (4). The superfluous phase cancels naturally, since the Hadamard gate is used four times for every ancilla in the ARO, twice each for the realization of the phase detection operator and its inverse, see Fig. 6. Finally, we again make use of coherent controlization to construct the phase detection operator and its inverse from the walk operator $W$. The possibility to add control to arbitrary (unknown) unitaries hence provides a modular structure that allows, in principle, for the generic implementation of all operations required for the decision-making of a quantum RPS agent. The modular use of coherent controlization in the design of the agent can thus be summarized by the following sequence:

That is, starting from single-qubit $\sigma_y$ rotations, parameterized according to the stochastic matrix $P$, we construct the probability unitaries $U_i$ using coherent controlization. From the probability unitaries we then construct, again by coherent controlization, $U_P$ and $U_{P'}$, which are used to assemble $W$. Finally, from $W$ we construct the ARO operator that is central to the quantum deliberation steps, once again employing coherent controlization.

As we have argued, all individual operations of the quantum RPS are implementable with current technology. While large network sizes, as well as small values of the spectral gap $\delta$ or the action probability $\epsilon$, impose challenges for state-of-the-art ionic implementations of the generic RPS decision-making process, these technological restrictions may be overcome by the continuing development of scalable ion trap arrays. Nonetheless, special cases of the general scheme we have laid out here are well within reach of experimental testing. In the Appendix, we present such an example for a quantum RPS agent based on an ECM using two qubits, and we give an explicit pulse decomposition of its entire decision-making process, including an error analysis.

V Conclusions

We have presented a modular architecture for the implementation of the deliberation process of PS agents in systems of trapped ions. We have shown first how the probability unitaries, which are required for the emulation of classical random walks, can be generically constructed using coherent controlization, and second how this process allows for the implementation of a quantum RPS agent based on these probability unitaries. A main feature of our construction is its modular architecture, that is, any changes of the probabilities as part of the learning process can be dealt with at the level of the implementation of the probability unitaries, whereas the rest of the construction is unaltered. The generic construction relies only on elementary single-qubit  rotations and coherent controlization, which allows for a straightforward assembly, as well as straightforward updating of the probability unitaries.

This is an important advantage, if not a prerequisite, for the realization of a learning agent that is continuously adjusting the probabilities underlying its deliberation process. Having to re-compute the entire sequence of gates which need to be applied to realize the quantum RPS agent for any change of the underlying Markov chain would impose a large computational overhead on the agent, and significantly diminish the advantage in speed that is provided by quantizing the RPS agent.

In addition to the general modular architecture, we have provided numerical simulations of an implementation of simple RPS agents using trapped ions. As our investigation shows, proof-of-principle realizations of these agents are simple enough to be implementable in current experimental setups, while they are sufficiently involved to demonstrate the quadratic speed-up.

Acknowledgements.
We are grateful to Adi Makmal, Markus Tiersch, Benjamin P. Lanyon, Daniel Nigg and Thomas Monz for valuable discussions and comments. HJB acknowledges discussions with Gavin Brennen at an early stage of this project. This work has been supported in part by the Austrian Science Fund (FWF) through the SFB FoQuS: F4012 and the Templeton World Charity fund grant TWCF0078/AB46.


Appendix A Rank-One Reflecting PS in Ion Traps

Here, we provide an example for a quantum RPS agent sophisticated enough for the demonstration of a quantum speed-up, whilst being sufficiently simple to allow an immediate implementation in readily available ion trap setups, e.g., as described in Refs. (16); (17). The Appendix is structured as follows. In Section A.1 we first discuss the simplified decision-making process for a quantum RPS agent whose underlying ECM network corresponds to a rank-one Markov chain. To provide context, the role of these simple agents is then illustrated for the invasion game in Section A.2. In Section A.3, we propose an ion trap implementation of the rank-one quantum RPS agent, for which we supply the explicit overall pulse sequence. We accompany our proposal with an appropriate error model, and corresponding numerical simulations, which are given in the final Section A.4.

A.1 Rank-One Reflecting PS

A special case of the RPS agents that we have considered in Section IV is obtained by considering the reflective analog of so-called “two-layered” PS agents, where all transitions are one-step transitions from percepts to actions (11). Such agents have a very simple structure, yet were shown to be capable of learning to solve non-trivial environmental tasks ?; (25). In the RPS analog of two-layered PS agents (11), the associated Markov chains of each percept-specific clip network are rank-one throughout the entire learning process of the agent. The columns of $P$ are then all identical, and equal to the stationary distribution $\pi$. The spectral gap is given by $\delta = 1$, and the Markov chain mixes in one step. Let us consider the consequences, which amount to radical simplifications, for the construction of the RPS agent.

(a) (b) (c)

Figure 7: Rank-one reflection operator. For rank-one Markov chains, the reflections $\mathrm{ref}_A$ and $\mathrm{ref}_B$ each act locally on one of the two registers. The Szegedy walk operator $W$ that is shown in Fig. 7 (a) hence factorizes into two independent local reflections. Since the walk operator further becomes Hermitean, $W^2 = \mathbb{1}$, the single remaining ancilla is also redundant, the approximate reflection circuit shown in Fig. 7 (b) reduces to one application of $W$ as shown in Fig. 7 (c), and the reflection becomes exact.

In the rank-one case, the probability unitaries $U_i$ for a fixed percept are all the same, so we may remove the subscript and write only $U$, keeping in mind the distinction between the subnetworks belonging to different percepts. Moreover, coherent controlization is no longer necessary for the construction of the diffusion operator, since $U$ is applied regardless of the state of the control register. As can be easily seen, the reflections $\mathrm{ref}_A$ and $\mathrm{ref}_B$ shown in Fig. 5 then commute, each acting locally on one of the registers, see Fig. 7. Similarly, the coherent encoding of the stationary distribution is now given by the product state $|\pi\rangle \otimes |\pi\rangle$, with $|\pi\rangle = \sum_i \sqrt{\pi_i}\,|c_i\rangle$.

When assembling the phase detection operator and the approximate reflection operator (ARO), see Fig. 6, the spectral gap $\delta = 1$ means that (at most) one ancilla qubit is required. Now, note that the walk operator $W$ for rank-one matrices $P$, as shown in Fig. 7 (a), is Hermitean, and thus the entire circuit shown in Fig. 7 (b) reduces to a single application of the Szegedy walk operator $W$. An exact reflection over the encoded stationary distribution can hence be performed by applying the corresponding local reflection to either of the registers, see Fig. 7 (c). Without loss of generality we select one of the registers, whose subscript we drop from now on, to perform all the Grover-like steps to output actions according to the tailed stationary distribution, which entails the following steps.

In the preparation stage, the state $|\pi\rangle$ is initialized by one application of $U$ to the state $|0\rangle$. Then, the two operators of the Grover-like process, i.e., the reflection over the flagged actions, and the reflection over $|\pi\rangle$, are applied a prescribed number of times determined by $\epsilon$, the relative probability of the flagged actions within the stationary distribution. Subsequently, the agent measures in the clip basis. If the measurement provides a flagged action, it is coupled out, otherwise the agent iterates this procedure.
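A short numerical check of these rank-one simplifications, using the same (assumed) Szegedy conventions as the sketch in Section IV.1: for a rank-one $P$ the first reflection acts locally on a single register, the walk operator factorizes into two copies of the local reflection $2|\pi\rangle\langle\pi| - \mathbb{1}$, and it is Hermitean, so the reflection becomes exact.

```python
import numpy as np

pi = np.array([0.1, 0.2, 0.3, 0.4])       # illustrative stationary distribution
N = len(pi)
pi_state = np.sqrt(pi)

R = 2 * np.outer(pi_state, pi_state) - np.eye(N)    # local reflection over |pi>
P = np.tile(pi.reshape(-1, 1), (1, N))               # rank-one chain: every column equals pi

A = np.zeros((N * N, N))
for i in range(N):
    A[:, i] = np.kron(np.eye(N)[:, i], np.sqrt(P[:, i]))
ref_A = 2 * A @ A.T - np.eye(N * N)

S = np.zeros((N * N, N * N))                         # register swap
for i in range(N):
    for j in range(N):
        S[j * N + i, i * N + j] = 1.0
W = (S @ ref_A @ S) @ ref_A                          # Szegedy walk operator

print(np.allclose(ref_A, np.kron(np.eye(N), R)))     # acts locally on a single register
print(np.allclose(W, np.kron(R, R)))                 # W factorizes into local reflections
print(np.allclose(W, W.T))                           # Hermitean, hence W squares to one
```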

Before we continue with the ionic implementation of the deliberation process, let us briefly examine an example for a task—the invasion game—for which the agent may employ its capabilities of learning and decision-making.

A.2 The Invasion Game

As a simple example that can be solved by two-layered agents, let us discuss the invasion game, as considered in Ref. (12). In this game, the agent is tasked with guarding a region of space from an adversary who attempts to enter the region through an array of entrances, see Fig. 8. The agent’s goal is to prevent the adversary from entering by blocking sites. In every round of the game, the adversary has three possible moves. It may attempt to enter at its current location, or move one door to the left, or one door to the right and attempt to enter through one of these openings. The agent is rewarded if it matches the move, thus blocking the adversary.

To emphasize the learning aspect of the game, we assume that the game starts with the adversary and the agent located at the same entrance, and before the adversary moves, it displays some signal that indicates which way it intends to move next. Thus, the set of percepts of the agent (the defender) consists of the three signals, which hint at the possible subsequent move of the attacker. The agent itself can also choose to remain where it is, move left, or move right in an attempt to block, corresponding to the three action clips accessible to the agent.

For the RPS agents discussed previously, this simple game may be represented by associating a three-clip network to each of the percepts. In what follows, we shall only focus on the network associated to one of the percepts, as everything holds for the other subnetworks as well, and we shall drop the corresponding subscript for ease of notation. For such two-layered settings there is a simple relation between the probabilities of outputting a particular action and the structure of the underlying percept-specific Markov chain. In particular, the action probabilities are realized by the rank-one stochastic matrix in which each column is the vector $\pi$ of action probabilities. The learning of the agent manifests in the relative increase of probabilities corresponding to rewarded actions, and examples for specific update rules can be found, e.g., in Ref. (12).

In basic two-layered settings in both the RPS and the analogous standard PS agent models, an action is coupled out after exactly one diffusion step. In order to illustrate a speed-up in such a scenario, we therefore need to consider some additional structure that increases the learning efficiency of the agent, but induces a longer deliberation time. Such a structure can be provided by percept-specific flags, which correspond to rudimentary emotion tags. Flags can be interpreted as the agent’s short term memory, indicating favored actions. In other words, absent flags indicate that a particular choice of action, for a given percept, was not rewarded in the previous step, and should be avoided. More precisely, this structure works as follows. Initially, all the actions are flagged. Then after an action has been coupled out, the flag is removed if the action is not rewarded. If the unflagged action is selected again after encountering the

Figure 8: Invasion game. In the invasion game (12) the agent defends a region of space against an adversary that tries to enter through a series of openings. To be rewarded, the agent is to prevent the adversary from entering by blocking the passages, which can be achieved if the adversary’s signals indicating its next move are interpreted correctly, and the agent mirrors the adversary’s moves.

same percept in a consecutive round, the deliberation process is repeated until the deliberation results in a flagged action. In the case that the last remaining flag is removed, which indicates a definite change in the setting of the environment, all flags are re-set.

This structure leads to great improvements in settings where the environment (e.g., the adversary in the invasion game) changes its strategy, for instance, by permuting the meaning of the percepts (12). In this case, if the network is already well-taught, the probability of outputting the correct action, once the meaning of percepts has been altered, can be very low. We will be interested in precisely such a setting. Suppose the attacker pursues a consistent strategy for a prolonged period of time, and the agent has learned well. This entails that, for a given percept, one of the values in the distribution $\pi = (p_1, p_2, p_3)$, say the third, is much larger than the others, i.e., $p_3 \gg p_1, p_2$, and only the corresponding action clip is flagged. Now, if the environment is to suddenly change its strategy, no longer rewarding this action, the flag on this clip will disappear, while flags on other clips are introduced again. Subsequently, the agent is required to output the tail of the distribution $\pi$ with support only over the actions corresponding to $p_1$ and $p_2$. However, for the classical RPS model, as well as for the standard PS model, the average number of iterated diffusion steps required until one of the remaining flagged actions is hit is of the order of $1/(p_1 + p_2)$, which can be exceptionally large if the network was well-taught. The quantum variant of the RPS will then be quadratically faster, only requiring of the order of $1/\sqrt{p_1 + p_2}$ steps. In any given round, the decision-making process after encountering a percept can then be represented on a two-qubit Hilbert space according to Table 1.

clip | interpretation | two-qubit state
$c_1$ | action |
$c_2$ | action |
$c_3$ | action | (two basis states)
Table 1: Representation of the three-clip network using two qubits. A two-qubit system can represent four clips, but as the desired network only requires three, a redundancy is introduced, e.g., in clip $c_3$, which is represented by two of the basis states.

Next, we discuss how a rank-one quantum RPS deliberation process based on this two-qubit system can be represented using two trapped ions.

A.3 Rank-One Quantum RPS with Trapped Ions

To implement a rank-one quantum RPS agent for a setting such as the one described above, we construct the required two-qubit operations, namely the probability unitary $U$, the reflection over the encoded stationary distribution, and the reflection over the actions, where the latter operation is now a reflection over flagged actions only, from laser pulses on two trapped ions. As we have described in Section III.2, coherent controlization may be employed to assemble the probability unitary $U$, but in this simple case we may resort to a simpler option. As shown in Table 1, we operate on a two-qubit Hilbert space, but we only distinguish between three clips, such that only two independent angles, $\alpha_1$ and $\alpha_2$, parameterize the probability unitary $U$. A pulse sequence that achieves this is given by

(21)

where the collective $U_X$ and single-qubit $U_Z$ pulses are realized by individual laser pulses as described in Section III.1. In terms of the probabilities $p_1$ and $p_2$, which we assume correspond to the two flagged actions, the angles $\alpha_1$ and $\alpha_2$ are given by

(22a)
(22b)

For the implementation of the reflection over the (flagged) actions, one simply applies the single-qubit operation

(23)

Since the rank-one RPS operates solely on one register, the overall phase of the reflection is irrelevant, as long as the relative sign between flagged actions and all other clips is flipped. Finally, we propose the following implementation of the remaining reflection. A detuned Mølmer-Sørensen pulse, see Ref. (19), is used to transfer the population of one of the two-qubit basis states to an auxiliary level. While this state is hidden in this way, a single-qubit pulse flips the sign of all other basis states, before a second Mølmer-Sørensen pulse returns the population to the original level.

Taken together, all operations for one iteration of the Grover-like reflection may hence be realized by a modest number of individual laser pulses, with a few additional pulses needed for the preparation of the initial state. Finally, in the next section, we investigate the performance of our ion-trap quantum RPS agent in a series of numerical simulations that incorporate a suitable error model.

A.4 Numerical simulations

For the numerical simulations that we present in this final section, we consider imprecisions in the laser pulse frequency or duration, resulting in varying angles for the laser pulses, as the primary sources of errors. We model such errors by randomly varying the angles for each pulse in the sequence according to a Gaussian distribution with standard deviation $\sigma$ that is centered around the correct value.

In the simulations, we specify a pair of values $p_1$ and $p_2$ (with $\epsilon = p_1 + p_2$), initialize the corresponding state vector, and apply the combination of the two reflections a total of $k$ times, where $k$ is chosen randomly from the interval $[1, K]$, with $K$ of the order of $1/\sqrt{\epsilon}$. The clips are then randomly sampled according to the probability distribution

$\mathrm{prob}(c_i) = \bigl|\langle c_i|\psi\rangle\bigr|^2\,,$  (24)

where $|\psi\rangle$ is the state of the register after the $k$ iterations; this sampling corresponds to a measurement in the clip basis. If no flagged action is found, a new number $k$ is generated, and the procedure is iterated until a flagged action has been sampled. For every fixed pair $p_1$ and $p_2$ the process is repeated for a large number of runs to build up statistics, recording how many runs result in an output of the action clip corresponding to $p_1$ and to $p_2$, respectively. Additionally, the overall number of calls to the operator $U$ until a flagged action is observed is recorded in each run.
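A simplified version of this simulation protocol is sketched below: the two-qubit probability unitary is assembled from the nested angle parametrization of Section III.2 (rather than the explicit pulse sequence of Eq. (21), which is not reproduced here), Gaussian noise of width $\sigma$ is added to every angle, the two reflections are taken to be ideal, and the clip encoding as well as the upper limit of the random iteration number are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=11)

def ry(a):
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

def noisy_U(angles, sigma):
    """Two-qubit probability unitary from the nested scheme, each angle perturbed
    by a Gaussian of width sigma around its ideal value."""
    a1, a2, a3 = rng.normal(angles, sigma)
    P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
    return (np.kron(P0, ry(a2)) + np.kron(P1, ry(a3))) @ np.kron(ry(a1), np.eye(2))

def run(p1, p2, sigma, kmax):
    """One deliberation run; returns the sampled clip and the number of calls to U.
    Clips 0 and 1 (states |00>, |01>) are taken to be the flagged actions."""
    p = np.array([p1, p2, (1 - p1 - p2) / 2, (1 - p1 - p2) / 2])
    angles = [2 * np.arccos(np.sqrt(p[0] + p[1])),
              2 * np.arccos(np.sqrt(p[0] / (p[0] + p[1]))),
              2 * np.arccos(np.sqrt(p[2] / (p[2] + p[3])))]
    ref_flag = np.diag([-1.0, -1.0, 1.0, 1.0])      # sign flip on the flagged actions
    ref_0 = np.diag([1.0, -1.0, -1.0, -1.0])        # 2|00><00| - 1
    calls = 0
    while True:
        k = rng.integers(1, kmax + 1)               # number of Grover-like iterations
        state = noisy_U(angles, sigma)[:, 0]; calls += 1
        for _ in range(k):
            state = ref_flag @ state
            Ua, Ub = noisy_U(angles, sigma), noisy_U(angles, sigma); calls += 2
            state = Ua @ (ref_0 @ (Ub.T @ state))   # noisy reflection over |pi>
        probs = np.abs(state) ** 2
        clip = rng.choice(4, p=probs / probs.sum())
        if clip in (0, 1):
            return clip, calls

eps = 0.02
results = [run(eps / 2, eps / 2, sigma=0.01, kmax=int(np.ceil(1 / np.sqrt(eps))))
           for _ in range(200)]
print(np.mean([calls for _, calls in results]))     # average number of calls to U
```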

For the average number of calls to $U$, the expected scaling as $1/\sqrt{\epsilon}$ is largely independent of the error parameter, as can be seen from Fig. 9, since this behavior is governed by the structure of the process, in particular, the upper bound $K$ for the randomly chosen value $k$. The integer steps by which $K$ increases as $\epsilon$ decreases also explain the step-like pattern visible in the data of Fig. 9. That is, in such a Grover-like scheme, the probability to sample a flagged action grows monotonically with the number of iterations only up to some point, from which on additional applications of the reflections will alternatingly decrease and increase the probability. The average number of repetitions set by the value $K$, which corresponds to a fixed interval of $\epsilon$-values, is hence not optimal for all $\epsilon$ within that interval, which can be seen from the slanting of the data points, and their standard deviations, in each of the ‘steps’ seen in Fig. 9 (a). The errors partially cover this effect, as can be seen in Figs. 9 (b) and (c).

(a) (b) (c)

Figure 9: Average number of calls to $U$. The results of the numerical simulation for the average number of calls to the probability unitary $U$ until an action clip is hit are shown for three increasing error parameters $\sigma$ in Figs. 9 (a), (b), and (c), respectively. Each blue dot corresponds to the average over runs for a fixed value of $\epsilon$. The vertical gray lines indicate three standard deviations of the mean values (over the runs for each data point) in each direction. The solid purple curves show the best fits that are linear in $1/\sqrt{\epsilon}$, while the dashed red curves show the best fits that are linear in $1/\epsilon$, and we have confirmed that the former fit the data better than the latter.
Figure 10: Comparison of classical and quantum RPS. Numerical simulations of classical (upper data points) and quantum RPS (lower data points) agents are shown. The data points are obtained as averages over runs for each value of $\epsilon$. The fitted curves that are shown are linear in $1/\epsilon$ (top orange curve) and $1/\sqrt{\epsilon}$ (bottom purple curve), respectively. The vertical green and yellow bars indicate three standard deviations of the mean values (over the runs for each data point) in each direction.

To illustrate the speed-up of the quantum RPS agent with respect to a classical RPS agent, we directly compare their performance in a simulation without errors, that is, for $\sigma = 0$, see Fig. 10. The classical rank-one RPS agent is emulated here by running the rank-one quantum RPS deliberation process described in this section for $k = 0$, that is, the state $|\pi\rangle$ is prepared, and a sample is taken, such that clip $c_i$ is obtained with probability $\pi_i$. If no flagged action is obtained, the procedure is repeated.

What remains to be confirmed by the simulations is the output of flagged actions according to the tail of the stationary distribution, as predicted in Ref. (11). We address this question in two ways. First, we evaluate the behavior of a few selected illustrative pairs of probabilities $p_1$ and $p_2$ for increasing error parameters in Fig. 11. As a measure for the accuracy of the output, we use the statistical distance

$D = \frac{1}{2} \sum_i \bigl| q_i - \tilde{\pi}_i \bigr|$  (25)

of the output distribution $q$ and the tailed stationary distribution $\tilde{\pi}$. In Fig. 12 we then compare the relative frequencies with which the two flagged actions were obtained to the corresponding ratios of the (tailed) stationary distribution, for a broad range of values $p_1$ and $p_2$, and for the three error parameters previously used in Fig. 9.

The data shown in Fig. 11 illustrate that large errors result in an output according to a uniform distribution over the flagged actions. The farther the tailed stationary distribution is from the uniform distribution, the smaller the tolerance for errors. As the stationary distribution is updated throughout the learning process, the errors will thus cause a stronger deviation from the desired output distribution.

Figure 11: Statistical distance to tailed distribution. The statistical distance $D$, see Eq. (25), of the output from the tailed stationary distribution is plotted against the width $\sigma$ of the error distribution, for two values of $\epsilon$ (solid and dotted curves) and several ratios $p_1 : p_2$ (top to bottom). The dashed horizontal lines indicate the statistical distance to the uniform distribution for each pair $(p_1, p_2)$, which is approached when the errors dominate the behavior of the agent.

To make these statements more meaningful in terms of learning agents, let us consider a specific example. Let us assume that for a fixed percept, the tailed stationary distribution may be biased towards the action clip , such that an ideal agent outputs this action in  of the cases5. To reach this goal, such an agent updates the corresponding Markov chain throughout the learning process, until the associated stationary distribution is such that . We may then set an error threshold, by assuming that the agent is still considered to succeed, if the action  is performed only  of the time, i.e., a statistical distance of . Brief inspection of the topmost solid curve in Fig. 11 reveals that for  the threshold value corresponds roughly to the largest error, , that we consider in Fig. 9. This, in turn, suggests a maximal number of  coherent iterations of the reflections in the Grover-like process before a measurement is performed, which translates to  individual laser pulses as described in Section A.3.

The initial analysis presented in this appendix suggests that our proposal for the implementation of two-layered quantum RPS agents may be feasible, and be readily implemented in a laboratory as a proof-of-principle demonstration of learning agents enhanced by employing quantum physics.

(a) (b) (c)

Figure 12: Output according to tailed distribution. The plots in Fig. 12 (a), (b), and (c) show the ratios of the counts of the two flagged actions in the numerical simulations in comparison with the corresponding ratios according to the (tailed) stationary distribution, for the three error parameters of Fig. 9, respectively. The solid purple lines show the best linear fits, which should match the diagonal, shown as a dashed gray line, for an ideal RPS agent. Each group of data points along a vertical line corresponds to a fixed value of one of the probabilities, with the other varying. The data used is in fact the same as that used for Fig. 9.

Footnotes

  1. Technically, since in the standard PS model, the action is coupled out whenever an action clip is hit, the probabilities of transiting from an action clip are undefined. However, we can, for simplicity, assign a unit probability of transiting to itself to each action clip. Thus, action clips are the absorbing states of the underlying Markov chain, although this will not be relevant for our work.
  2. In the last expression we have equated the representations of percepts and actions within the clip network with the actions and percepts themselves, in a slight abuse of notation. In the following, we will be using $s$ ($a$) to denote the percept (action) clips when the semantics of the clip matters (e.g., whether it is an action or a percept), and the generic notation $c_i$ otherwise.