# Boosting computational power through spatial multiplexing in quantum reservoir computing

###### Abstract

Quantum reservoir computing provides a framework for exploiting the natural dynamics of quantum systems as a computational resource. It can implement real-time signal processing and solve temporal machine learning problems in general, which requires memory and a nonlinear mapping of the recent input stream using quantum dynamics in the computational supremacy region, where classical simulation of the system is intractable. A nuclear magnetic resonance spin-ensemble system is one of the realistic candidates for such a physical implementation and is currently available in laboratories. In this paper, considering these realistic experimental constraints, we introduce a scheme, which we call the spatial multiplexing technique, to effectively boost the computational power of the platform. This technique exploits disjoint dynamics originating from multiple different quantum systems driven by common input streams in parallel. Accordingly, instead of designing a single large quantum system to increase the number of qubits for computational nodes, it is possible to assemble a large number of qubits from multiple small quantum systems, which are operationally easy to handle in laboratory experiments. We numerically demonstrate the effectiveness of the technique using several benchmark tasks and quantitatively investigate its specifications, range of validity, and limitations in detail.

## I Introduction

Recent developments in sensing and Internet of Things technology have given rise to big data consisting of massive amounts of complex time-series data. Accordingly, novel information processing techniques that can deal with such data efficiently and in real time are eagerly sought. Conventional computers, however, are based on the von Neumann architecture, in which the processor and memory are separate; this structure causes an intrinsic limitation in processing speed, called the von Neumann bottleneck. Furthermore, von Neumann-type models stipulate that, to handle complex information processing, the computational system itself must be built up systematically in a complex manner. Biological systems, in contrast, are complex systems constantly exposed to massive sensory data, yet they perform successful real-time information processing with lower computational cost and energy consumption. Their way of information processing is a typical non-von Neumann type, capitalizing on natural and diverse dynamics, and has been a source of inspiration for many researchers IBM_Neuro ().

Reservoir computing is a framework for recurrent neural network training inspired by the way the brain processes information Jaeger0 (); Maass0 (); Reservoir (), and it provides a typical example of non-von Neumann computation Neuromorphic0 (). A reservoir computing system consists of a high-dimensional dynamical system, called a reservoir, driven by time-varying input streams; it generates transient dynamics with the fading memory property and can perform nonlinear processing on inputs. This framework can be used for real-time information processing with complex temporal structures, which makes it particularly suited to machine learning problems requiring memory, such as speech recognition, stock market prediction, and autonomous motor control for robots. Conventionally, this scheme is implemented through randomly coupled artificial neural networks (i.e., echo state networks (ESNs) Jaeger0 ()) or through spiking neural networks (i.e., liquid state machines Maass0 ()) in software running on a PC. As long as it runs on a conventional PC, the resulting computation is inevitably of the von Neumann type. On this basis, physical implementations of the reservoir have been proposed to exploit the dynamics of native physical systems for information processing. These implementations include the dynamics of the water surface Bucket (), photonics Laser0 (); Laser1 (), spintronics Spintronics (), and nanomaterials structured in neuromorphic chips Neuromorphic0 (). Even the diverse body dynamics of soft robots have been shown to serve as successful reservoirs Kohei1 (); Kohei2 (); Kohei3 (); Kohei4 (), suggesting that this framework can be applied to physical systems at various scales. Recently, quantum reservoir computing (QRC) has been proposed, which implements reservoir computing powered by quantum physics QR ().

Quantum dynamics is difficult to simulate on a conventional, classical computer because of its exponentially large degrees of freedom. This is generally termed quantum computational supremacy, and the framework of QRC relies heavily on this property of quantum dynamics. Quantum reservoir (QR) dynamics are expressed as transitions of the basis states of quantum bits (qubits) driven by an input stream (Fig. 1A), which evolve over time through a unitary operator generated by a Hamiltonian. Signals are obtained from the system through projective measurements; these signals, called true nodes, are used directly as reservoir states. Behind the measurements lies an exponential number of degrees of freedom, called hidden nodes, which affect the time evolution of the true nodes. The framework of QRC naturally harnesses, for information processing, the exponential degrees of freedom of quantum dynamics, which are intractable for a classical computer. Furthermore, the framework implements non-von Neumann computing through a reservoir computing scheme, suggesting the full exploitation of the assets of physical quantum dynamics. It has been shown that the QR system can emulate nonlinear dynamical systems, including classical chaos, and exhibits robust information processing against noise QR (). As candidates for a physical experimental platform for the scheme, nuclear magnetic resonance (NMR) spin-ensemble systems NMRQC1 (); NMRQC2 () have been proposed. In these systems, nuclear spins in molecules are used as the ensemble qubit system. Usually, when a quantum system is monitored, its observables are affected by projective measurements, a process called backaction. In an NMR ensemble system, this backaction can be neglected, and the signal can be successfully obtained by averaging over the massive number of copies of the molecule existing in the ensemble.

In this paper, we present a scheme to boost the computational power of QRs. The most prominent and straightforward approach to improving the computational capability of a computational system is to increase its number of computational nodes. In QRC, this primarily corresponds to increasing the number of qubits. However, from a physical implementation standpoint (e.g., using an NMR spin-ensemble system), this approach requires a redesign or reconstruction of the sample molecules, which is operationally difficult and both energy- and time-consuming.

To overcome this problem, we introduce an effective approach to boost the computational power of the system using readily available small sample molecules, which are operationally easy to handle in experiments. Our scheme is called spatial multiplexing: we prepare multiple different small sample molecules, inject common input streams into each system, and use all the signals obtained from these systems as one big reservoir system (Fig. 1B). This procedure has previously been proposed in applications of conventional ESNs, and many examples have demonstrated its effectiveness (e.g., Jaeger0 ()). Here, we apply the scheme to QRC and show that this procedure is particularly suited to overcoming the difficulties of physically implemented reservoir settings.

In software-implemented RC, since the scheme of spatial multiplexing exploits multiple disjoint ESNs as a new reservoir, it is operationally equivalent to assuming, in the first place, a single ESN having the same total number of computational nodes with a specific sparse internal weight matrix. However, when viewing this scheme from the perspective of physical RC, the situation is different. In NMR-implemented QRC, for example, even if the number of computational nodes is the same, the operational cost of preparing one huge sample molecule differs from that of preparing multiple small sample molecules. By focusing on this operational difference, we can establish the scheme as a realistic and practical option for improving the computational power of physical reservoirs, which are often difficult to design freely. In the following sections, we argue for the effectiveness of the spatial multiplexing technique for NMR spin-ensemble-based QRC and quantitatively demonstrate how the scheme improves computational performance in QRC. We also provide a detailed theoretical explanation of the specifications and range of validity of the scheme, which will be useful for evaluating other reservoir systems.

This paper is organized as follows. In the next section, we review the formalization of QRC QR () and introduce spatial multiplexing into the setting. Subsequently, we theoretically examine the effect of spatial multiplexing in detail from a general standpoint. We then numerically demonstrate the power of the spatial multiplexing technique for QRC using conventional benchmark tasks in a machine learning context. Several approaches to engineering QRs through spatial multiplexing are also discussed. Finally, we discuss the practical aspects of the scheme, future application domains for solving real-world machine learning problems, and its implications for the reservoir computing framework in general.

## II Quantum reservoir computing through spatial multiplexing

### II.1 Quantum reservoir dynamics

Let us consider a quantum state of an $N$-qubit system, which is described by a density operator $\rho$. By denoting the Pauli operators to be

$$ \{B^0, B^1, B^2, B^3\} = \{I, X, Y, Z\}, \qquad (1)$$

an $N$-qubit Pauli product is defined by a $2N$-bit string $i = (i_1, \dots, i_N)$ with $i_k \in \{0, 1, 2, 3\}$:

$$ P_i = B^{i_1} \otimes B^{i_2} \otimes \cdots \otimes B^{i_N}. \qquad (2)$$

By using the Pauli products as a basis of the operator space, the quantum state is represented by a real vector $r = (r_1, \dots, r_{4^N})^{\mathsf T}$:

$$ \rho = \frac{1}{2^N} \sum_i r_i P_i, \qquad (3)$$

where each element is given in terms of the Hilbert-Schmidt inner product for the operator space as

$$ r_i = \mathrm{Tr}[P_i \rho]. \qquad (4)$$

From the properties of the density operators, $\mathrm{Tr}[\rho] = 1$ and $\rho = \rho^\dagger$, each element is real and bounded:

$$ -1 \le r_i \le 1. \qquad (5)$$
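As a concrete illustration of the Pauli-basis representation in Eqs. (2)-(4), the following sketch (Python with NumPy; the function names are ours, not from the paper) builds the $N$-qubit Pauli products and extracts the hidden-node vector $r$ from a density matrix.

```python
import itertools
import numpy as np

# Single-qubit Pauli operators, Eq. (1).
PAULIS = [
    np.eye(2, dtype=complex),                      # I
    np.array([[0, 1], [1, 0]], dtype=complex),     # X
    np.array([[0, -1j], [1j, 0]], dtype=complex),  # Y
    np.array([[1, 0], [0, -1]], dtype=complex),    # Z
]

def pauli_products(n):
    """All 4**n n-qubit Pauli products P_i of Eq. (2)."""
    ops = []
    for combo in itertools.product(PAULIS, repeat=n):
        P = np.eye(1, dtype=complex)
        for s in combo:
            P = np.kron(P, s)
        ops.append(P)
    return ops

def hidden_nodes(rho, products):
    """Hidden-node vector r_i = Tr[P_i rho] of Eq. (4)."""
    return np.array([np.trace(P @ rho).real for P in products])
```

For a 2-qubit state such as $\rho = |00\rangle\langle 00|$, the resulting vector has $4^2 = 16$ entries, and $\rho$ is recovered via Eq. (3) as $\rho = 2^{-N} \sum_i r_i P_i$.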

In QRC, each element $r_i$ is regarded as a hidden node of the network. In quantum mechanics, any physical operation can be written as a linear transformation of $r$ via a matrix $W$:

$$ r' = W r. \qquad (6)$$

The matrix $W$ can be constructed explicitly from the quantum operation $\mathcal{K}$ as follows:

$$ W_{ij} = \frac{1}{2^N} \mathrm{Tr}\!\left[ P_i\, \mathcal{K}(P_j) \right]. \qquad (7)$$

For example, a unitary dynamics under the Hamiltonian $H$ with a time interval $\tau$ is given by

$$ W_{ij} = \frac{1}{2^N} \mathrm{Tr}\!\left[ P_i\, e^{-iH\tau} P_j\, e^{iH\tau} \right]. \qquad (8)$$
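Equations (7) and (8) can be checked numerically for a single qubit. The sketch below (names are ours) builds $W$ for a unitary $e^{-iH\tau}$ and verifies that it propagates the Pauli vector exactly as conjugation by $U$ does.

```python
import numpy as np

# Single-qubit Pauli basis {I, X, Y, Z}, Eq. (1).
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
BASIS = [I2, X, Y, Z]

def unitary(H, tau):
    """U = exp(-i H tau) via eigendecomposition of the Hermitian H."""
    E, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * E * tau)) @ V.conj().T

def transfer_matrix(U):
    """W_ij = 2^{-N} Tr[P_i U P_j U^dagger], Eq. (8), here for N = 1."""
    d = U.shape[0]
    W = np.empty((len(BASIS), len(BASIS)))
    for i, Pi in enumerate(BASIS):
        for j, Pj in enumerate(BASIS):
            W[i, j] = np.trace(Pi @ U @ Pj @ U.conj().T).real / d
    return W
```

Applying `W` to the Pauli vector of any state reproduces the Pauli vector of $U\rho U^\dagger$, which is the content of Eq. (6).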

In order to exploit quantum dynamics for information processing, we have to introduce an input and the signals of the quantum system (see Fig. 1A). Suppose $\{s_k\}$ is an input sequence, where $s_k$ is a continuous variable ($s_k \in [0, 1]$). (We consider the setting of a one-dimensional input for simplicity, but its generalization to a multidimensional case is straightforward.) A temporal learning task here is to find, using the quantum system, a nonlinear function of the recent input stream such that the mean square error between the system output $y_k$ and a target output $\bar{y}_k$ for a given task becomes minimum. Note that, as we see from Eq. (6), there is no nonlinearity in each quantum operation $W$. Instead, the time evolution can be changed according to the external input $s_k$, namely $W(s_k)$, allowing the quantum reservoir to process the input information nonlinearly by repetitively feeding the input.

Specifically, as an input, we replace the first qubit with the quantum state (Fig. 1A)

$$ |\psi_{s_k}\rangle = \sqrt{1 - s_k}\,|0\rangle + \sqrt{s_k}\,|1\rangle. \qquad (9)$$

The corresponding matrix $S_{s_k}$ is given by the map

$$ \rho \;\to\; |\psi_{s_k}\rangle\langle\psi_{s_k}| \otimes \mathrm{Tr}_1[\rho], $$

where $\mathrm{Tr}_1$ indicates a partial trace with respect to the first qubit. A unit time step is written as an input-dependent linear transformation:

$$ r(k+1) = U_\tau\, S_{s_k}\, r(k) \equiv W(s_k)\, r(k), \qquad (10)$$

where $r(k)$ indicates the hidden nodes at timestep $k$ and $U_\tau$ denotes the transfer matrix of the unitary evolution over the input interval $\tau$.

A set of observed nodes, which we call true nodes, is defined by a matrix $R$,

$$ x = R\, r. \qquad (11)$$

The number of true nodes has to be a polynomial in the number of qubits $N$. That is, from exponentially many hidden nodes, a polynomial number of true nodes are obtained. For simplicity, we take the single-qubit Pauli operator $Z_i$ on each qubit as the true nodes, i.e.,

$$ x_i = \mathrm{Tr}[Z_i\, \rho]. \qquad (12)$$

Therefore, there are $N$ true nodes. Figure 2A shows typical reservoir dynamics driven by the input stream $\{s_k\}$, which consist of the signals obtained from the true nodes. Here we assume that the system is an ensemble quantum system, which consists of a huge number of copies of the single quantum system; therefore, the signals from the true nodes are obtained without any backaction. The NMR spin-ensemble system is indeed such a system: a sample typically contains a macroscopic number of copies of the same molecule. The magnetization of a fraction of the spins in the sample can be measured with an RF coil with a sufficient signal-to-noise ratio, while the remaining spins are not affected.
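A minimal density-matrix sketch of one reservoir timestep follows (function names are ours; the amplitude encoding $|\psi_s\rangle = \sqrt{1-s}|0\rangle + \sqrt{s}|1\rangle$ for the injected qubit is the assumption stated in Eq. (9)): the first qubit is replaced by the input state, the whole register evolves unitarily, and the true nodes of Eq. (12) are read out.

```python
import numpy as np

def reservoir_step(rho, s, U, n):
    """One QRC timestep on an n-qubit density matrix: replace qubit 1 with
    |psi_s> = sqrt(1-s)|0> + sqrt(s)|1>, then evolve by U = exp(-i H tau)."""
    d = 2 ** n
    # Partial trace over the first qubit (most significant index).
    rho4 = rho.reshape(2, d // 2, 2, d // 2)
    rho_rest = np.einsum('iaib->ab', rho4)
    psi = np.array([np.sqrt(1 - s), np.sqrt(s)])
    rho_in = np.outer(psi, psi.conj())
    rho_new = np.kron(rho_in, rho_rest)
    return U @ rho_new @ U.conj().T

def true_nodes(rho, n):
    """True-node signals x_i = Tr[Z_i rho] on each qubit, Eq. (12)."""
    Z = np.diag([1.0, -1.0])
    xs = []
    for i in range(n):
        Op = np.eye(1)
        for k in range(n):
            Op = np.kron(Op, Z if k == i else np.eye(2))
        xs.append(np.trace(Op @ rho).real)
    return np.array(xs)
```

With a trivial evolution and $s = 1$, the first qubit is driven to $|1\rangle$ ($\langle Z_1 \rangle = -1$) while the second qubit stays at $|0\rangle$ ($\langle Z_2 \rangle = +1$).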

The unique feature of QRC in the reservoir computing context is that the exponentially many hidden nodes, which originate from the dimensions of the Hilbert space, are monitored through a polynomial number of signals defined as the true nodes. Based on this setting, in the next section, two coordinated schemes are introduced to harness QR dynamics in a physically natural setting. The first is called temporal multiplexing, which was already introduced in Ref. QR (); TM_note (), and the second is called spatial multiplexing, which is the procedure this study newly applies to QRC.

### II.2 Temporal multiplexing

In Ref. QR (), temporal multiplexing has been found to be useful for extracting complex dynamics on the exponentially large set of hidden nodes through the restricted number of true nodes. In temporal multiplexing, the signals are sampled from the QR not only at time $k\tau$ but also at each of $V$ subdivided time intervals during the unitary evolution to construct virtual nodes, as shown in Fig. 2B (upper diagram). After each input by $s_k$, the signals are obtained at each subdivided interval after the time evolution by $U_{v\tau/V}$ ($v = 1, \dots, V$), i.e.,

$$ r\!\left(k\tau + \frac{v}{V}\tau\right) = U_{v\tau/V}\, S_{s_k}\, r(k\tau). \qquad (13)$$

Accordingly, as the QR system has $N$ true nodes, we have $NV$ computational nodes at each input timestep in total, and the virtual nodes are defined by

$$ x_{N(v-1)+i}(k) = \left[ R\, r\!\left(k\tau + \frac{v}{V}\tau\right) \right]_i. \qquad (14)$$

This procedure allows us to make full use of input-driven transient dynamics, which can potentially include the influence of hidden nodes. Using this technique, it is possible to effectively increase the total number of computational nodes employed in the learning process. A similar technique can also be found, for example, in Ref. Laser0 () under the same motivations.
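The bookkeeping of temporal multiplexing can be sketched as follows (a generic sketch with our own names): within one input interval, the state is evolved by the sub-step unitary $U_{\rm sub} = e^{-iH\tau/V}$ a total of $V$ times, and the true-node signals are collected after each sub-step.

```python
import numpy as np

def temporal_multiplex(rho, U_sub, V, readout):
    """Collect V sets of virtual-node signals within one input interval
    (Eq. (13)): evolve by the sub-step unitary U_sub = exp(-i H tau / V)
    V times, applying the true-node readout after each sub-step."""
    signals = []
    for _ in range(V):
        rho = U_sub @ rho @ U_sub.conj().T
        signals.append(readout(rho))
    return rho, np.concatenate(signals)
```

For a single qubit rotated around $X$, the readout $\langle Z \rangle$ after successive sub-steps traces the Rabi oscillation, so intermediate samples genuinely add information beyond the endpoint value.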

It is important to note that, as is obvious from the setting, the parameter $\tau$ directly modulates the dynamics of the QR, while the parameter $V$ defines how we observe the dynamics. In Ref. QR (), the relevance of these parameters to the computational capability of the QR system is investigated. It was observed that, according to the choice of the parameter $\tau$, the type of computation that can be performed well changes, and an increase in the parameter $V$ essentially contributes to improved computational performance.

### II.3 Spatial multiplexing

Now, we consider boosting the computational power of QRC further. The most straightforward and promising approach that comes to mind would be increasing the number of computational nodes. This naturally leads to an increase in the number of qubits in the QR system. (The approach of temporal multiplexing, which secures virtual nodes from the signals, is also reasonable in terms of increasing the number of computational nodes.) Considering the physical implementation of QRC on an NMR system, however, as we explained previously, this procedure of increasing the number of qubits corresponds to the enlargement and redesign of sample molecules, which is not always easy in practice. In the NMR system, the local control and measurement of a qubit are accomplished through differences in the resonant frequency. The resonant frequency differs among nuclear species. Among the many species, only a few, such as ¹H, ¹³C, ¹⁵N, and ¹⁹F, are easy to handle and have thus been used as qubits. The resonant frequency is also slightly shifted by differences in the chemical environment, even within the same species, which enables local control of individual spins. However, it is not easy to design and synthesize a molecule that includes many addressable spins with different species and environments. Since a 12-addressable-spin system in a liquid was developed in 2006 Negoro1 (), the record has remained unbroken.

In this study, based on these physical constraints of the experimental settings, we introduce an effective and practical procedure to increase the computational resources, one that is relatively easy to implement under practical conditions. The procedure is called spatial multiplexing. We prepare multiple disjoint QR systems, which are spatially distant or uncoupled, and we drive them with a common input stream in parallel (Fig. 1B). Subsequently, we collect the signals from each QR system in the previously explained manner, and we use all of these signals from the different QR systems as one entire set of reservoir dynamics. For NMR-implemented QRC, this approach enables the exploitation of readily available sample molecules, which already exist in the laboratory, to increase the number of computational nodes. Compared to redesigning the sample molecules as a computational resource, this approach should be relatively handy and practical for experimenters. For example, the aforementioned 12-qubit molecule and another one developed in 2017 Negoro2 () can potentially be utilized for spatially multiplexed reservoirs (Fig. 1C). Synthesizing other 12-qubit molecules by chemically modifying the developed molecules may be easier than synthesizing a molecule with more qubits.

Let us consider $M$ different QRs driven by a common input stream in parallel. For each QR system $m$ ($m = 1, \dots, M$), which has $N_m$ qubits with a corresponding number of true nodes, the time interval $\tau_m$ to inject the input and the corresponding unitary evolution can be set differently (Fig. 2B). Each QR system is also equipped with temporal multiplexing having $V_m$ virtual nodes (Fig. 2B). As a result, spatial multiplexing induces $\sum_{m=1}^{M} N_m V_m$ nodes in total, and these computational nodes are exploited as a single reservoir. In the later sections, we investigate systematically whether the procedure of spatial multiplexing really boosts the computational power of QRC and, furthermore, to what extent it improves the performance.
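Since the bookkeeping of spatial multiplexing is hardware-agnostic, it can be sketched with toy classical reservoirs standing in for the QR systems (all names below are ours, and the toy dynamics is purely illustrative): the only essential points are the common input and the concatenation of all node signals into one feature vector per timestep.

```python
import numpy as np

class ToyReservoir:
    """A minimal input-driven dynamical system standing in for one QR
    system; illustrative only, not the quantum dynamics of the paper."""
    def __init__(self, n_nodes, seed):
        rng = np.random.default_rng(seed)
        self.W = 0.9 * rng.standard_normal((n_nodes, n_nodes)) / np.sqrt(n_nodes)
        self.w_in = rng.standard_normal(n_nodes)
        self.x = np.zeros(n_nodes)

    def step(self, s):
        """Drive the internal state with the scalar input s."""
        self.x = np.tanh(self.W @ self.x + self.w_in * s)
        return self.x.copy()

def spatial_multiplex(reservoirs, input_stream):
    """Feed a common input to M disjoint reservoirs and stack their node
    signals; returns one row of concatenated signals per timestep."""
    return np.array([np.concatenate([r.step(s) for r in reservoirs])
                     for s in input_stream])
```

Each reservoir is built with a different seed, mirroring the recommendation in the text that the multiplexed systems should differ (e.g., different random couplings) to avoid common input synchronization.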

### II.4 Output settings and learning procedure

In the reservoir computing approach, the output is obtained as a weighted sum of the reservoir states, and a target function is learned by training linear, static readout weights attached to the reservoir nodes in a supervised manner. Here, we explain how to train the readout weights from the observed signals of the QR after the procedures of temporal and spatial multiplexing. According to the previous sections, temporal and spatial multiplexing introduce $\sum_{m=1}^{M} N_m V_m$ computational nodes in total (Fig. 2B). The state of computational node $i$ at timestep $k$ is expressed as $x_i(k)$ by rearranging the subscripts from the original, and we introduce a constant bias term $x_0(k) = 1$. The system output is then expressed as

$$ y_k = \sum_{i=0}^{N_{\mathrm{tot}}} w_i\, x_i(k), \qquad N_{\mathrm{tot}} = \sum_{m=1}^{M} N_m V_m, \qquad (15)$$

where $w_i$ is a linear, static weight attached to node $i$. Let $\{\bar{y}_k\}_{k=1}^{L}$ be the target sequence for learning, where $L$ is the length of the training phase, assumed to be much greater than the number of computational nodes; the training of the readout weights minimizes $\sum_{k=1}^{L} (\bar{y}_k - y_k)^2$. By collecting the corresponding reservoir states (including the bias term) in the learning phase as the training data matrix $X$, which is an $L \times (N_{\mathrm{tot}} + 1)$ matrix, the optimal weight can be obtained as a least squares solution $w = X^{+} \bar{y}$, where $X^{+}$ denotes the Moore-Penrose pseudoinverse.
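The least squares training of the readout can be sketched directly with the pseudoinverse (a generic sketch; function names are ours, and no regularization is assumed here).

```python
import numpy as np

def train_readout(X, y_target):
    """Optimal static readout weights as the least squares solution
    w = X_b^+ y, where X_b appends a constant bias column to the states."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return np.linalg.pinv(Xb) @ y_target

def readout(X, w):
    """System output y_k = sum_i w_i x_i(k) + bias, Eq. (15)."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return Xb @ w
```

When the target is exactly linear in the node states (plus a constant), the trained weights reproduce it to numerical precision, which is the idealized regime the theory section below analyzes.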

As we see later in detail, when we actually let the QR system perform the computational tasks in this study, each experimental trial consists of a washout phase, a training phase, and an evaluation phase. The washout phase eliminates the influence of the initial transients of the reservoir states, and the readout weights trained in the training phase are exploited to generate outputs in the evaluation phase.

### II.5 Theoretical insights into the effect of spatial multiplexing

In this section, we theoretically investigate the effect of spatial multiplexing and show its range of validity and limitations. The argument in this section is not limited to quantum systems but is generally applicable to any reservoir system. First, we concisely prove that the procedure of spatial multiplexing always improves computational performance on the training data (or, at worst, leaves the performance unchanged).

Let us assume that we have two reservoirs, A and B, which have $N_A$ and $N_B$ computational nodes, respectively. Consider the corresponding regression equations $\bar{y} = X_A w_A + \epsilon_A$ and $\bar{y} = X_B w_B + \epsilon_B$, where $X_A$ is an $L \times N_A$ matrix and $X_B$ is an $L \times N_B$ matrix with $L$ realizations satisfying $L \gg N_A, N_B$, and $\epsilon_A$ and $\epsilon_B$ are residuals. We assume that $X_A$ and $X_B$ are full rank, and $w_A$ and $w_B$ are least squares solutions expressed as $w_A = (X_A^{\mathsf T} X_A)^{-1} X_A^{\mathsf T} \bar{y}$ and $w_B = (X_B^{\mathsf T} X_B)^{-1} X_B^{\mathsf T} \bar{y}$, respectively. With projectors $P_A = X_A (X_A^{\mathsf T} X_A)^{-1} X_A^{\mathsf T}$ and $P_B = X_B (X_B^{\mathsf T} X_B)^{-1} X_B^{\mathsf T}$, we have $X_A w_A = P_A \bar{y}$ and $X_B w_B = P_B \bar{y}$. Accordingly, the residuals can be expressed as $\epsilon_A = (I - P_A)\bar{y}$ and $\epsilon_B = (I - P_B)\bar{y}$. We consider combining reservoirs A and B and constructing a new reservoir "A+B." Similarly, for $X_{AB} = [X_A\ X_B]$, $\bar{y} = X_{AB} w_{AB} + \epsilon_{AB}$, where $w_{AB}$ is a least squares solution expressed as $w_{AB} = (X_{AB}^{\mathsf T} X_{AB})^{-1} X_{AB}^{\mathsf T} \bar{y}$ and $\epsilon_{AB}$ is a residual. We assume that $X_{AB}$ is full rank. With the projector $P_{AB} = X_{AB} (X_{AB}^{\mathsf T} X_{AB})^{-1} X_{AB}^{\mathsf T}$, the residual can be expressed as $\epsilon_{AB} = (I - P_{AB})\bar{y}$. Because $w_{AB}$ is a least squares solution,

$$ \|\epsilon_{AB}\|^2 \le \|\epsilon_A\|^2. $$

The equal sign holds only when $P_{AB}\bar{y} = P_A \bar{y}$. Likewise, $\|\epsilon_{AB}\|^2 \le \|\epsilon_B\|^2$ holds. This relation shows, in general terms, why the performance improves by increasing the number of computational nodes in the system. It also suggests that couplings and interactions within the reservoir are not explicitly required for this improvement in theory.
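The claim that concatenating reservoir states never increases the training residual can be verified numerically; in the sketch below, random matrices stand in for the node states of reservoirs A and B.

```python
import numpy as np

def training_residual(X, y):
    """Norm of the least squares residual ||y - X w|| on the training data."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.linalg.norm(y - X @ w)

rng = np.random.default_rng(1)
L = 200
X_A = rng.standard_normal((L, 10))  # states of reservoir A
X_B = rng.standard_normal((L, 6))   # states of reservoir B
y = rng.standard_normal(L)          # an arbitrary target sequence

err_A = training_residual(X_A, y)
err_B = training_residual(X_B, y)
err_AB = training_residual(np.hstack([X_A, X_B]), y)  # reservoir "A+B"
```

Here `err_AB` is guaranteed not to exceed `err_A` or `err_B` on the training data; as the text warns below, this says nothing about generalization, where overfitting can still worsen the evaluation-phase performance.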

Estimating the upper limit of the improvement in performance, in terms of how the residual decreases when adding reservoir B to reservoir A, is also possible. In general, a projector $P$ satisfies $P^2 = P$ and $P^{\mathsf T} = P$, and if $P$ is a projector, then $I - P$ is also a projector. Applying these properties to the above results, with a few transformations, we obtain

$$ \|\epsilon_A\|^2 - \|\epsilon_{AB}\|^2 = \langle \bar{y},\, (P_{AB} - P_A)\,\bar{y} \rangle, $$

where $\langle \cdot , \cdot \rangle$ is an inner product. This relation suggests that $P_{AB} - P_A$ is positive semidefinite; let its largest eigenvalue be

$$ \lambda_A = \lambda_{\max}(P_{AB} - P_A). $$

Thus,

$$ \sup_{\bar{y} \neq 0} \frac{\|\epsilon_A\|^2 - \|\epsilon_{AB}\|^2}{\|\bar{y}\|^2} = \lambda_A, $$

where $\lambda_A$ expresses the supremum of the reduction of the normalized residual when adding reservoir B to reservoir A. Similarly, we obtain $\|\epsilon_B\|^2 - \|\epsilon_{AB}\|^2 = \langle \bar{y}, (P_{AB} - P_B)\,\bar{y} \rangle$, where $P_{AB} - P_B$ is also positive semidefinite and $\lambda_B$ is its largest eigenvalue. Using the above relations for $\epsilon_A$, $\epsilon_B$, and $\epsilon_{AB}$, we obtain

$$ \frac{\|\epsilon_{AB}\|^2}{\|\bar{y}\|^2} \;\ge\; \max\!\left( \frac{\|\epsilon_A\|^2}{\|\bar{y}\|^2} - \lambda_A,\; \frac{\|\epsilon_B\|^2}{\|\bar{y}\|^2} - \lambda_B \right), $$

where $\lambda_A$ and $\lambda_B$ depend only on the projectors. Thus, we can evaluate and predict the extent of the improvement without actually performing the task using reservoir A+B.

We should be careful, because the above facts do not always hold in practice. Two points should be noted. The first is overfitting. Spatial multiplexing can increase the number of computational nodes drastically, so we should be careful to balance the size of the training data set against the system size. Since spatial multiplexing always results in improved performance on the training data set, if the performance worsens with spatial multiplexing in the evaluation phase, we can infer that this is caused by overfitting.

Second, the above facts are based on the assumption that $X_A$, $X_B$, and $X_{AB}$ are full rank. This condition does not always hold in practice. A typical example is a case in which synchronization occurs, which makes the reservoir dynamics identical or low-dimensional. Notably, even if no coupling exists between the reservoirs in spatial multiplexing, synchronization can still occur. This phenomenon is often called common input synchronization EdgeofChaos (), or, when the driving input is a random signal, common noise synchronization CommonNoise (). Ironically, as investigated in EdgeofChaos (), common input synchronization is in fact a required property for reservoirs in terms of the reproducibility of the signals (the opposite case is chaotic dynamics). For robust information processing, the same reservoir should respond identically to an identical input stream, even if the initial states of the reservoir differ. For the scheme of spatial multiplexing, however, this property becomes a drawback, as it rules out duplicating the same reservoir. Accordingly, for spatial multiplexing, preparing different reservoirs, or the same reservoir setting with different input scalings or different choices of the qubit for input injection, is recommended.

## III Performance analyses

In this section, we use numerical experiments to investigate the effect of spatial multiplexing. By assessing the memory capacity and by using a benchmark task that evaluates the capability to emulate nonlinear dynamical systems, called nonlinear autoregressive moving average (NARMA) systems, we systematically demonstrate how the order of spatial multiplexing affects the performance of our QR system. The evaluation schemes adopted here are popular in the context of recurrent neural network learning.

For the dynamics of the QR system, we employ the simplest quantum system, a fully connected transverse-field Ising model, as an example:

$$ H = \sum_{i > j} J_{ij}\, X_i X_j + h \sum_i Z_i, \qquad (16)$$

where the coupling strengths are randomly assigned such that $J_{ij}$ is distributed uniformly from $-J/2$ to $J/2$. Furthermore, a scale factor is introduced to make the couplings and the time interval dimensionless. In our numerical experiments, the quantum dynamics of the above Hamiltonian is calculated exactly, without employing any approximation.
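A sketch of the Hamiltonian of Eq. (16) for small $N$ follows (function names are ours; the uniform sampling range for $J_{ij}$ follows the text).

```python
import numpy as np

def local_op(single, site, n):
    """Embed a single-qubit operator at `site` in an n-qubit register."""
    M = np.eye(1)
    for k in range(n):
        M = np.kron(M, single if k == site else np.eye(2))
    return M

def ising_hamiltonian(n, J, h):
    """Fully connected transverse-field Ising Hamiltonian, Eq. (16):
    H = sum_{i<j} J[i, j] X_i X_j + h sum_i Z_i."""
    X = np.array([[0., 1.], [1., 0.]])
    Z = np.diag([1., -1.])
    H = np.zeros((2 ** n, 2 ** n))
    for i in range(n):
        for j in range(i + 1, n):
            H += J[i, j] * local_op(X, i, n) @ local_op(X, j, n)
        H += h * local_op(Z, i, n)
    return H

def random_couplings(n, J_scale, rng):
    """Couplings J_ij drawn uniformly from [-J_scale / 2, J_scale / 2]."""
    return rng.uniform(-J_scale / 2, J_scale / 2, size=(n, n))
```

For exact simulation, $e^{-iH\tau}$ can then be obtained from the eigendecomposition of this (real symmetric) matrix, with no approximation, as stated in the text.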

Here, spatial multiplexing is implemented using QR systems having the same number of qubits, input interval, and number of virtual nodes (which we simply denote $N$, $\tau$, and $V$ from now on) but with different random coupling strengths $J_{ij}$. In the following, the number of qubits $N$ of a single QR system, which corresponds to the case without spatial multiplexing, is kept fixed. As an example, we demonstrate in detail the cases in which the parameter $\tau$ is set to 1 and 2 for the memory capacity analyses and for the NARMA tasks, respectively. We varied the number of virtual nodes as $V = 1$, 5, and 25 and checked the dependence of the performance on it. (Note that analyses for other parameter settings, such as different values of $\tau$ and $N$, are given in the Appendix.) Throughout the following experiments, the input stream $\{s_k\}$ is randomly drawn from the range $[0, 1]$ and injected into the first qubit of each QR system. The order of spatial multiplexing, defined as the number of QR systems driven by a common input stream in parallel, is varied from 1 (without spatial multiplexing) to 5 for the analyses.

### III.1 Memory Capacity

As discussed earlier, the information processing capability of reservoir dynamics can be characterized by how it transforms the input stream. In particular, one of the important characteristics of a computational system for solving temporal machine learning tasks is short-term memory, the ability to store information about recent inputs in the system's current states. Focusing on this point, a measure that evaluates the system's short-term memory, called the memory capacity Jaeger3 (), is commonly used. In this section, we analyze the memory capacity of the QR system and quantify the effect of spatial multiplexing in terms of it. To calculate the measure, the computational system should first learn to reproduce, using the current states of the system, the random input injected $d$ timesteps earlier. This is equivalent to setting the target output as $\bar{y}_k = s_{k-d}$, where $\{s_k\}$ is a random sequence ranging in $[0, 1]$ in this study.

To evaluate the system's emulatability of the target sequence, the memory function $MF_d$ is defined as

$$ MF_d = \frac{\mathrm{cov}^2(y_k, \bar{y}_k)}{\sigma^2(y_k)\, \sigma^2(\bar{y}_k)}, \qquad (17)$$

where $\mathrm{cov}(y_k, \bar{y}_k)$ and $\sigma(\cdot)$ express the covariance between $y_k$ and $\bar{y}_k$ and the standard deviation of each sequence, respectively. This measure takes values from 0 to 1, and larger values indicate a higher capability of the system to reconstruct the previous input. The memory capacity $MC$ is defined as follows:

$$ MC = \sum_{d=1}^{D} MF_d, \qquad (18)$$

where $D$ is a sufficiently large maximum delay.
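The memory function of Eq. (17) can be estimated from time series as follows (a sketch with our own names; the delay sum of Eq. (18) is truncated at whatever finite maximum delay is supplied).

```python
import numpy as np

def memory_function(u, y, d):
    """MF_d = cov^2(y_k, u_{k-d}) / (var(y) var(u)), Eq. (17), estimated
    from aligned slices of the input u and the system output y."""
    tgt, out = u[:-d], y[d:]     # pair u_{k-d} with y_k
    c = np.cov(tgt, out)
    return c[0, 1] ** 2 / (c[0, 0] * c[1, 1])

def memory_capacity(u, outputs_by_delay):
    """MC = sum_d MF_d, Eq. (18), over the delays provided as a dict
    mapping each delay d to the trained output sequence for that delay."""
    return sum(memory_function(u, y, d) for d, y in outputs_by_delay.items())
```

A perfect delay-line output gives $MF_d = 1$, while an output unrelated to the input gives a value near 0, which bounds the measure as stated above.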

As explained in the earlier section, the training scheme of our QR system is based on supervised learning, and for each setting of $d$, the experimental trial consists of a washout phase (2,000 timesteps), a training phase (2,000 timesteps), and an evaluation phase (2,000 timesteps). Using the time series data of the 2,000 timesteps in the training phase and the linear regression explained in Section II.4, we optimize the readout weights, which we then use to calculate the corresponding system output in the evaluation phase. For each order of spatial multiplexing, we iterated the above procedure over 100 trials, using new QR systems with different random coupling strengths, and obtained the averaged $MF_d$ and $MC$.

Figure 3A shows the averaged $MF_d$ over the input delay $d$. By observing the behavior of $MF_d$ against the delay $d$, we can see that, as the order of spatial multiplexing increases, the performance gradually improves, showing relatively large values of $MF_d$ in the region of larger delays. This tendency can be captured more clearly in the behavior of $MC$ (Fig. 3B). Figure 3B plots how the order of spatial multiplexing affects the memory capacity of the QR system for each setting of the number of virtual nodes. We can observe that, in all cases, increasing the order of spatial multiplexing improves the memory capacity. Even in other parameter regions of the system (e.g., for different settings of the parameter $\tau$ and the number of qubits), improved memory capacity with an increased order of spatial multiplexing was generally observed (see Fig. 8 in Appendix A for details). Interestingly, the amount of memory capacity that can be induced differs according to the setting of the parameter $\tau$ (Fig. 8 in Appendix A); that is, the memory capacity reaches relatively large values for some settings of $\tau$ compared with the others.

### III.2 NARMA tasks

The NARMA task is a commonly used benchmark for evaluating the capability of a learning system to implement nonlinear processing with long time dependence. By calculating deviations from the target trajectory in terms of errors, the NARMA task tests how well the target NARMA system can be emulated by the learning system. According to the choice of the target NARMA system, it is possible to investigate which type of information processing the evaluated learning system can perform. The first NARMA system that we introduce is a second-order nonlinear dynamical system, used in benchmark (), expressed as follows:

$$ y_{k+1} = 0.4\, y_k + 0.4\, y_k\, y_{k-1} + 0.6\, s_k^3 + 0.1. \qquad (19)$$

We call this system NARMA2 in this paper. The next NARMA system is the $n$th-order nonlinear dynamical system, which is written as follows:

$$ y_{k+1} = \alpha\, y_k + \beta\, y_k \left( \sum_{j=0}^{n-1} y_{k-j} \right) + \gamma\, s_{k-n+1}\, s_k + \delta, \qquad (20)$$

where the coefficients $(\alpha, \beta, \gamma, \delta)$ are set to $(0.3, 0.05, 1.5, 0.1)$, respectively. Here, $n$ is varied as 5, 10, 15, and 20, and the corresponding systems are called NARMA5, NARMA10, NARMA15, and NARMA20, respectively. In particular, NARMA10 is frequently used in the context of evaluating the learning capability of recurrent neural networks (e.g., benchmark (); Reservoir ()). Here, we adopt a multitasking scheme, in which the system should simultaneously emulate all the NARMA systems according to the input stream. For the input stream to the NARMA systems, $\{s_k\}$ is linearly scaled from the range $[0, 1]$ into $[0, 0.2]$ to keep the systems in a stable range.
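The NARMA target sequences can be generated as follows (a sketch; the coefficient values $(0.3, 0.05, 1.5, 0.1)$ are the standard choice in the benchmark literature, assumed here, and NARMA2 uses its own second-order recursion).

```python
import numpy as np

def narma2(u):
    """Second-order NARMA system, Eq. (19) (standard form assumed):
    y_{k+1} = 0.4 y_k + 0.4 y_k y_{k-1} + 0.6 u_k^3 + 0.1."""
    y = np.zeros(len(u))
    for k in range(1, len(u) - 1):
        y[k + 1] = 0.4 * y[k] + 0.4 * y[k] * y[k - 1] + 0.6 * u[k] ** 3 + 0.1
    return y

def narma(u, n, alpha=0.3, beta=0.05, gamma=1.5, delta=0.1):
    """n-th order NARMA system, Eq. (20) (standard form assumed):
    y_{k+1} = a y_k + b y_k sum_{j=0}^{n-1} y_{k-j} + g u_{k-n+1} u_k + d."""
    y = np.zeros(len(u))
    for k in range(n - 1, len(u) - 1):
        y[k + 1] = (alpha * y[k]
                    + beta * y[k] * y[k - n + 1:k + 1].sum()
                    + gamma * u[k - n + 1] * u[k]
                    + delta)
    return y
```

The input scaling into $[0, 0.2]$ mentioned above matters here: with larger inputs, the higher-order NARMA recursions can diverge.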

The learning scheme of our QR system is exactly the same as explained in the previous memory capacity analysis. Each experimental trial consists of a washout phase (2,000 timesteps), a training phase (2,000 timesteps), and an evaluation phase (2,000 timesteps). We evaluate the performance by comparing the system output with the target output in terms of the normalized mean squared error (NMSE), expressed as follows:

$$ \mathrm{NMSE} = \frac{\sum_{k} \left( \bar{y}_k - y_k \right)^2}{\sum_{k} \bar{y}_k^2}, \qquad (21)$$

where $\bar{y}_k$ and $y_k$ are the target output and the system output at timestep $k$, respectively, and the sums run over the evaluation phase. For each order of spatial multiplexing, we iterated the above procedure over 100 trials, using new QR systems with different random coupling strengths, and obtained the averaged NMSE.
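Eq. (21) translates directly into code (a one-function sketch; the name is ours).

```python
import numpy as np

def nmse(target, output):
    """Normalized mean squared error, Eq. (21):
    NMSE = sum_k (ybar_k - y_k)^2 / sum_k ybar_k^2."""
    target = np.asarray(target, dtype=float)
    output = np.asarray(output, dtype=float)
    return ((target - output) ** 2).sum() / (target ** 2).sum()
```

A perfect emulation gives NMSE = 0, and the trivial all-zero output gives NMSE = 1, so values well below 1 indicate that the reservoir tracks the target.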

Figure 4 shows typical output time series for the NARMA tasks in the evaluation phase. First, it is clearly observed that as the order of the NARMA system increases, the overall task performance gradually worsens, reflecting the increasing difficulty of the tasks. For each NARMA task, as the order of spatial multiplexing increases, the traceability of the QR system improves (this can be confirmed visually, especially for the NARMA5, NARMA10, and NARMA15 tasks in Fig. 4). These observations are quantitatively confirmed by the analyses of the averaged NMSE in Fig. 5. For each setting of the number of virtual nodes ($V$ = 1, 5, and 25), the figure plots how the averaged NMSE behaves as the order of spatial multiplexing increases in each NARMA task. Figure 5 shows that, for all the NARMA tasks, increasing the order of spatial multiplexing improves the task performance. In particular, when the order of the NARMA system is 2, 5, and 10, the effect of increasing the order of spatial multiplexing is significantly high. We have checked that this tendency generally holds for other parameter settings of the QR system (see Fig. 9 in Appendix A for details). Furthermore, we have found that, for each NARMA task, there exists a setting of $\tau$ that shows the best performance through spatial multiplexing; the setting of $\tau$ that minimizes the averaged NMSE for the NARMA2 and NARMA5 tasks differs from the one that minimizes it for the NARMA15 and NARMA20 tasks, both through spatial multiplexing of order 5 (Fig. 9 in Appendix A). These findings imply that the parameter $\tau$ can regulate which type of task the QR system is good at.

### III.3 Temporal versus spatial multiplexing

Sections III.1 and III.2 have demonstrated that, as the order of spatial multiplexing increases, the memory capacities increase and the performance of the NARMA tasks improves. In this section, we quantitatively analyze the extent to which the order of spatial multiplexing contributes to these improvements.

Figure 6 plots how the improvement ratio behaves as the order of spatial multiplexing increases in each experimental case. The improvement ratio takes the performance (the averaged NMSE or MC) at spatial-multiplexing order 1 as its baseline: for both analyses, it is calculated by dividing each averaged MC and NMSE by the corresponding value at spatial-multiplexing order 1 in each parameter setting. As a comparison, the improvement ratios when the number of virtual nodes is increased from 1 to 5, from 1 to 25, and from 5 to 25 without spatial multiplexing (reflecting the effect of temporal multiplexing only) are shown in each plot of Fig. 6. We can clearly observe that in almost all cases, increasing the order of spatial multiplexing improves the performance.
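As a concrete illustration of the spatial multiplexing being compared here, the following sketch drives several disjoint toy reservoirs with a common input stream and trains a single linear readout over the concatenated node signals. The leaky random networks are classical stand-ins, purely for illustration; in the paper each reservoir is a quantum spin system:

```python
import numpy as np

def run_reservoir(u, n_nodes, seed):
    """Toy stand-in for one reservoir driven by the common input stream."""
    r = np.random.default_rng(seed)
    W = r.normal(0.0, 0.4 / np.sqrt(n_nodes), (n_nodes, n_nodes))
    w_in = r.normal(0.0, 1.0, n_nodes)
    x = np.zeros(n_nodes)
    states = np.empty((len(u), n_nodes))
    for t, ut in enumerate(u):
        x = np.tanh(W @ x + w_in * ut)  # node signals at timestep t
        states[t] = x
    return states

def multiplexed_states(u, order, n_nodes=10):
    """Spatial multiplexing: drive `order` disjoint reservoirs with the
    same input and concatenate their node signals column-wise."""
    return np.hstack([run_reservoir(u, n_nodes, seed=s) for s in range(order)])

rng = np.random.default_rng(1)
u = rng.uniform(0.0, 0.2, 300)
X = multiplexed_states(u, order=5)             # 300 timesteps x 50 nodes
w_out, *_ = np.linalg.lstsq(X, u, rcond=None)  # one readout over all nodes
```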

For the memory capacity, we observe remarkable improvements: the improvement ratio exceeds two for all settings of the number of virtual nodes when the order of spatial multiplexing is increased from 1 to 5 (Fig. 6, left diagram). In particular, the effect of increasing the order of spatial multiplexing from 1 to 5 with the number of virtual nodes fixed to 1 is similar or slightly superior, in terms of capacity, to that of increasing the number of virtual nodes from 1 to 5 without spatial multiplexing (Fig. 6, left diagram). These features were commonly observed in all the parameter settings examined in this study. For example, when the order of spatial multiplexing was varied from 1 to 5, the improvement ratio of the memory capacity averaged over all the parameter settings was 2.11, and the maximum improvement ratio among them was 3.23 (Fig. 7, upper left diagram). Furthermore, we observed this relation in almost all the parameter settings (Fig. 7, upper right diagram), which characterizes the range of effectiveness of spatial multiplexing.

For the NARMA tasks, increasing the order of spatial multiplexing decreases the improvement ratio, indicating improved performance (Fig. 6, right diagrams). In particular, in the NARMA5 task with a particular parameter setting, the value decreased by a factor of 10 when the order of spatial multiplexing was varied from 1 to 5. Interestingly, this improvement ratio is much superior to that obtained by increasing the number of virtual nodes from 1 to 25, despite the larger increase in computational nodes, which implies that cases exist in which increasing the order of spatial multiplexing outperforms temporal multiplexing. These tendencies hold in all the parameter settings examined in this study. The performance of each NARMA task improves in each parameter setting as the order of spatial multiplexing increases (Fig. 7, lower left diagram). For example, when the order of spatial multiplexing was varied from 1 to 5, the improvement ratio averaged over all the parameter settings was 0.39 in the NARMA5 task, and the minimum improvement ratio among them was 0.15, also in the NARMA5 task (Fig. 7, lower left diagram). Similarly to the case of the memory capacity, in each NARMA task we observed this relation in almost all the parameter settings (Fig. 7, lower right diagram). These results suggest that, in some cases, spatial multiplexing adds computational nodes more effectively than temporal multiplexing does.

## IV Toward engineering quantum reservoir through spatial multiplexing

As we saw in Section II.5 and demonstrated in Section III, spatial multiplexing improves the performance of the system. In this section, we provide a few notes on the possibility of engineering QRs through the spatial multiplexing scheme. As discussed in Section II.5, although in theory we can improve performance by increasing the order of spatial multiplexing, this does not always apply in actual experiments because of overfitting. In such cases, limiting the number of computational nodes is preferable. Given a fixed number of computational nodes, we investigate in this section a method to engineer efficient combinations of reservoirs.

Similar to Section II.5, let us assume that we have three reservoirs, A, B, and C, with reservoir A having a certain number of computational nodes, and reservoirs B and C having the same number of nodes as each other. We also assume that these reservoirs satisfy the basic properties of the regression equation setting and least-squares solutions presented in Section II.5. First, the inequality derived there suggests that combining reservoirs A, B, and C performs best if overfitting can be avoided in practice. Now, keeping the total number of nodes fixed, we determine which of reservoirs B and C is the better choice to combine with reservoir A to improve performance.

At first glance, choosing the reservoir with better standalone performance seems preferable. However, this is not always the case: even if reservoir B performs better than reservoir C on its own, the combination A+B does not necessarily outperform A+C in general, and a counterexample is easy to construct. We then apply the relations obtained in Section II.5 to reservoirs A+B and A+C, which bound the training errors of the combined systems.

To compare the two combinations, we need to check how these ranges overlap. Only if no overlap exists can we safely predict which reservoir to add without actually performing the task. When the bounds coincide, the two ranges always overlap. Otherwise, if the bound for A+B lies entirely below that for A+C, we can safely choose reservoir B as the appropriate partner for combination without actually performing the task.
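The monotonicity underlying the argument above, namely that appending a reservoir's node signals to the regression can never increase the least-squares training error, can be checked numerically. The random feature matrices below are hypothetical stand-ins for the node signals of reservoirs A, B, and C:

```python
import numpy as np

def training_error(X, y):
    """Residual sum of squares of the least-squares readout on features X."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((X @ w - y) ** 2))

rng = np.random.default_rng(2)

# Hypothetical node signals of three reservoirs and a common target.
y = rng.normal(size=200)
A = rng.normal(size=(200, 8))
B = rng.normal(size=(200, 4))
C = rng.normal(size=(200, 4))

L_A = training_error(A, y)
L_AB = training_error(np.hstack([A, B]), y)
L_ABC = training_error(np.hstack([A, B, C]), y)

# Adding columns enlarges the regression subspace, so the training
# error can only stay the same or decrease.
assert L_AB <= L_A + 1e-9
assert L_ABC <= L_AB + 1e-9
```

Note that this monotonicity concerns the training error only; the generalization error can still worsen, which is exactly the overfitting issue discussed above.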

Furthermore, although our numerical experiments demonstrated spatial multiplexing by combining QR systems that have different coupling strengths with the other parameters fixed, note that the combination can consist of any reservoirs as long as they are not synchronized. The choice of the combination depends on the efficiency of use in each experimental setting. For example, a system parameter may depend on the energy applied to the experimental platform and can be regulated if we take energy efficiency into account.

## V Discussion

In this paper, we have introduced a scheme, spatial multiplexing, to boost the computational power in QRC. From the viewpoint of physical experiments, this scheme is operationally easy to implement yet remarkably effective, and we have theoretically shown that the scheme inevitably increases the computational power. The effect was demonstrated through numerical experiments using a number of benchmark tasks, and improved learning performance was observed. We have also examined the theoretical implications of the proposed scheme and discussed its range of validity and limitations, which would be useful and applicable not only to QRC but also to reservoir computing in general, including conventional software implementations.

Although the scheme of spatial multiplexing allows us to increase the computational nodes efficiently, we should be alert to overfitting in practical applications. In our experiments, we observed several results thought to be caused by overfitting (e.g., the results of the NARMA tasks at higher parameter values; see Fig. 9 in Appendix A). To avoid these situations, one can introduce ridge regression or Lasso in the training procedure, which assigns a penalty to the readout weights in the regression. By combining the scheme with such regularized regressions, one can selectively exploit effective degrees of freedom from the massive number of computational nodes added by spatial multiplexing.
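As a sketch of the regularized readout suggested above (a minimal closed-form ridge regression on hypothetical node signals; Lasso would require an iterative solver):

```python
import numpy as np

def ridge_readout(X, y, lam=1e-2):
    """Closed-form ridge regression for the readout weights.

    Penalizing the weight norm tames the redundant degrees of freedom
    that massive spatial multiplexing can introduce.
    """
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ y)

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 50))          # hypothetical node signals
y = X[:, 0] + 0.1 * rng.normal(size=500)

w = ridge_readout(X, y, lam=1.0)
w_ols = ridge_readout(X, y, lam=1e-12)  # ~ordinary least squares
assert np.linalg.norm(w) < np.linalg.norm(w_ols)  # regularization shrinks weights
```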

The NMR ensemble system has been regarded as a strong candidate for a physical platform of QRC. In an NMR quantum reservoir system, spatial multiplexing with several different molecules, introduced in Sec. II.3, is an easier way to increase the computational power than increasing the number of addressable qubits. Another option is to increase the number of unaddressable qubits and virtual nodes, which will be introduced in detail in our future work. We can also implement spatial multiplexing even within the same molecule by using NMR pulse techniques to effectively change the interaction Hamiltonian Negoro3 (); Negoro4 (). Such pulse techniques are often utilized in quantum simulation experiments. For example, an Ising-type Hamiltonian can be transformed by applying a multiple-pulse sequence whose pulse spacing serves as a tunable parameter Negoro5 (). A quantum simulation experiment Negoro6 () showed that the dynamical behavior of a nuclear spin system with such interactions differs substantially depending on this parameter. Thus, simply by changing the pulse parameter, we can easily implement spatial multiplexing with several different quantum dynamical systems in the same molecule.

Spatial multiplexing offers an opportunity to increase the computational nodes and boost the computational power not only of QRs but of any interacting system whose components are operationally or experimentally difficult to manipulate or scale up. Extending this line of thought, we can develop a concept of composing multiple reservoirs, each based on a different physical system. For example, it might be worth composing photonic and quantum systems and treating them as one entire reservoir in some applications. Depending on how this scheme is applied in the real world, this concept would create options from which to flexibly choose the physical systems to use as a computational resource in a given situation.

Finally, one of the intriguing features of the framework of QRC is its exploitation of the quantum computational supremacy region, where the system possesses exponential degrees of freedom as hidden nodes. We reiterate that, because spatial multiplexing increases the true nodes proportionally to its order, its increase of hidden nodes is also proportional, whereas increasing the number of qubits in a single interacting system leads directly to an exponential increase in hidden nodes. This implies that, even with the same number of true nodes, the number of hidden nodes can differ according to how the spin-ensemble molecular samples are prepared; hence, the computational power and task preference would also differ. We suggest that each experimenter tailor the preparation of their reservoirs to their given experimental conditions and their operability of the system, and we believe that the spatial multiplexing technique will become one of the common and practical options for boosting the computational power of QRs in the near future.

## VI Acknowledgements

K.N. would like to acknowledge Taichi Haruna for fruitful discussions and Hiroki Masui for his help in arranging figures. K.N., K.F., and M.N. are supported by JST PRESTO Grant Numbers JPMJPR15E7, JPMJPR1668, and JPMJPR1666, Japan. K.M. is also supported by JST PRESTO Grant Number JPMJPR1666, Japan. K.N. is supported by KAKENHI No. 16KT0019, No. 15K16076, and No. 26880010. K.F. is supported by KAKENHI No. 16H02211, JST ERATO Grant Number JPMJER1601, and JST CREST Grant Number JPMJCR1673.

## VII Author Contributions

K.N. and K.F. designed and conducted numerical experiments. All authors discussed the results and implications and wrote the manuscript at all stages.

## VIII Author Information

There are no competing financial interests.

## Appendix A Extended numerical experiments and analyses

In the main text, we showed the results of the numerical experiments for the QR system with fixed system parameters. In this section, we present thorough and systematic analyses of different parameter settings, varying one system parameter over 3, 4, and 5 and another over two settings; the results are summarized in Figs. 8 and 9.

## Appendix B Echo state network settings for comparisons

To characterize the computational capability of our system, in the main text, we compared its performance of the NARMA tasks and its memory capacity with those of a conventional ESN Jaeger0 (); Jaeger1 (); Jaeger0SI (). This section explains in detail the settings of the ESN used for the comparisons.

The ESNs are a type of random recurrent neural network that consists of internal computational nodes (the number of internal computational nodes is denoted as $N$), input nodes, and output nodes. The activation of the $i$th internal node at timestep $t$ is expressed as $x_i(t)$. The weights $w_{ij}$ for the internal network connect the $j$th node to the $i$th node, and the input weights $w_i^{\mathrm{in}}$ connect the input node to the $i$th internal node. Internal computational nodes with one bias are connected to the output unit through readout weights $w_i^{\mathrm{out}}$, where $i = 0, 1, \ldots, N$ and $i = 0$ is assigned to the bias term. Learning of the readout weights is performed using the same procedure explained in the main text for each task. The internal weights $w_{ij}$ are randomly drawn from the range $[-1, 1]$, and the spectral radius of the weights is regulated according to the setting for each task, as explained below. Similarly, the input weights are randomly drawn from the range $[-\iota, \iota]$, where $\iota$ is a scaling parameter explained later. The time evolution of the ESN is expressed as follows:

$$x_i(t+1) = f\!\left(\sum_{j=1}^{N} w_{ij}\, x_j(t) + w_i^{\mathrm{in}}\, u(t+1)\right), \qquad (22)$$

$$y(t) = \sum_{i=0}^{N} w_i^{\mathrm{out}}\, x_i(t), \qquad (23)$$

where $f$ is set as $\tanh$ in this paper. To make a fair comparison of the task performance, the I/O setting of the ESN was set to be the same as that of our system for each task. For example, the lengths of the washout, training, and evaluation phases and the evaluation procedures were kept the same. The detailed experimental conditions are given for each of these comparisons below.
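Under the settings described above, a minimal implementation of this ESN might look as follows (the node count, spectral radius, and scaling values are illustrative):

```python
import numpy as np

def make_esn(n_nodes, spectral_radius, iota, seed=0):
    """Random ESN weights per the settings described above.

    Internal weights drawn uniformly from [-1, 1], then rescaled to the
    target spectral radius; input weights drawn from [-iota, iota].
    """
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, (n_nodes, n_nodes))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    w_in = rng.uniform(-iota, iota, n_nodes)
    return W, w_in

def run_esn(W, w_in, u):
    """Time evolution x_i(t+1) = tanh(sum_j w_ij x_j(t) + w_in_i u(t+1))."""
    x = np.zeros(len(w_in))
    states = np.empty((len(u), len(w_in) + 1))
    for t, ut in enumerate(u):
        x = np.tanh(W @ x + w_in * ut)
        states[t] = np.concatenate([[1.0], x])  # x_0 = 1 is the bias node
    return states

W, w_in = make_esn(n_nodes=50, spectral_radius=0.9, iota=0.1)
u = np.random.default_rng(4).uniform(0.0, 0.5, 200)
S = run_esn(W, w_in, u)
w_out, *_ = np.linalg.lstsq(S, u, rcond=None)  # readout y(t) = sum_i w_out_i x_i(t)
```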

For the NARMA tasks, we first prepared 10 different ESNs for each setting of the number of internal nodes, which is varied as 5, 10, 20, 30, 40, 50, 100, 150, 200, 250, and 300. The scaling parameter of the input weights is varied as 1.0, 0.5, 0.2, 0.1, 0.05, 0.01, 0.005, and 0.001, and the spectral radius of the internal weights is varied from 0.1 to 2.0 in increments of 0.1. For each ESN, fixing the spectral radius and the scaling parameter, we ran 10 different trials driven by different random input sequences and tested the emulation tasks of all the NARMA systems (NARMA2, 5, 10, 15, and 20) using a multitasking scheme in each trial. After performing all the trials of the NARMA tasks with all the parameter settings varied for each ESN, we collected the lowest NMSE, which indicates the best performance in this experiment for that ESN, and calculated the averaged NMSE for each NARMA task over the 10 different ESNs for each setting of the number of internal nodes. These averaged NMSEs were used for the comparison.

To evaluate the memory capacities, 100 different ESNs were driven by different random input sequences with the spectral radius fixed at 0.9 and the scaling parameter of the input weights held fixed. The emulation tasks of five dynamical systems with different degrees of nonlinearity, which are explained in the main text, were performed in each trial using a multitasking scheme. Analyses of the performance were conducted using the same procedures applied to our system and defined in the main text, and the averaged capacities were calculated over these 100 trials and used for the comparison.

## References

- (1) P.A. Merolla et al., Science 345, 668 (2014).
- (2) H. Jaeger and H. Haas, Science 304, 78 (2004).
- (3) W. Maass, T. Natschläger, and H. Markram, Neural Comput. 14, 2531 (2002).
- (4) D. Verstraeten, B. Schrauwen, M. D’Haene, and D. Stroobandt, Neural Netw. 20, 391 (2007).
- (5) A. Z. Stieg et al., Adv. Mater. 24, 286 (2012).
- (6) C. Fernando and S. Sojakka, Pattern recognition in a bucket In Lecture Notes in Computer Science 2801, p. 588 (Springer, 2003).
- (7) L. Appeltant et al., Nat. Commun. 2, 468 (2011).
- (8) L. Larger et al., Optics Express 20, 3241 (2012).
- (9) J. Torrejon et al., Nature 547, 428 (2017).
- (10) K. Nakajima et al., Front. Comput. Neurosci. 7, 1 (2013).
- (11) K. Nakajima et al., J. R. Soc. Interface 11, 20140437 (2014).
- (12) K. Nakajima et al., Sci. Rep. 5, 10487 (2015).
- (13) K. Nakajima et al., Soft Robotics (in press).
- (14) K. Fujii and K. Nakajima, Phys. Rev. Applied 8, 024030 (2017).
- (15) D. G. Cory et al., Fortschr. Phys. 48, 875 (2000).
- (16) J. A. Jones, Prog. Nucl. Magn. Reson. Spectros. 59, 91 (2011).
- (17) In QR (), the procedure of temporal multiplexing is called time multiplexing, but is the same thing. We used the term “temporal” to coordinate with the term “spatial” in this paper.
- (18) C. Negrevergne et al., Phys. Rev. Lett. 96, 170501 (2006).
- (19) D. Lu et al., npj Quantum Information 3, 45 (2017).
- (20) N. Bertschinger and T. Natschläger, Neural Comput. 16, 1413 (2004).
- (21) R. Toral, C. R. Mirasso, E. Hernández-Garcia, and O. Piro, CHAOS 11, 665 (2001).
- (22) H. Jaeger, GMD Report 152, German National Research Center for Information Technology (2001).
- (23) A. F. Atiya and A. G. Parlos, IEEE Trans. Neural Netw. 11, 697 (2000).
- (24) U. Haeberlen, High resolution NMR in solids — Selective averaging, in Advances in Magnetic Resonance Suppl. 1 (Academic Press, New York, 1976).
- (25) L. M. Vandersypen and I. L. Chuang, Reviews of Modern Physics 76, 1037 (2005).
- (26) G. Roumpos, C. P. Master, and Y. Yamamoto, Phys. Rev. B, 75, 094415 (2007).
- (27) G. A. Álvarez, D. Suter, and R. Kaiser, Science 349, 846 (2015).
- (28) H. Jaeger, GMD Report 159, German National Research Center for Information Technology (2002).
- (29) H. Jaeger, GMD Report 148, German National Research Institute for Computer Science (2001).