
# Topological defect formation in 1D and 2D spin chains realized by network of optical parametric oscillators

Ryan Hamerly (rhamerly@stanford.edu), Kensuke Inaba, Takahiro Inagaki, Hiroki Takesue, Yoshihisa Yamamoto and Hideo Mabuchi
###### Abstract

A network of optical parametric oscillators is used to simulate classical Ising and XY spin chains. The collective nonlinear dynamics of this network, driven by quantum noise rather than thermal fluctuations, seeks out the Ising / XY ground state as the system transitions from below to above the lasing threshold. We study the behavior of this “Ising machine” for three canonical problems: a 1D ferromagnetic spin chain, a 2D square lattice, and problems where next-nearest-neighbor couplings give rise to frustration. If the pump turn-on time is finite, topological defects form (domain walls for the Ising model, winding number and vortices for XY) and their density can be predicted from a numerical model involving a linear “growth stage” and a nonlinear “saturation stage”. These predictions are compared against recent data for a 10,000-spin 1D Ising machine.

Keywords: Ising model; topological defect; phase transition; optical parametric oscillator.

## 1 Introduction

Many important problems in computer science can be solved by message-passing algorithms. In such algorithms, information lives on the nodes of a graph, while computation consists of updating the values of the nodes by passing “messages” along the graph’s edges. Examples of such algorithms include neural networks, probabilistic graphical models, low-density parity check codes and topological surface codes. Message-passing algorithms are advantageous because they are intrinsically parallel, making them straightforward to implement on multi-core architectures.

As digital microprocessors reach their physical limits, there has been a surge of research into special-purpose hardware for various message-passing algorithms. In electronics, examples include CMOS artificial neural networks and CMOS chips for simulated annealing. Quantum annealers have a similar graphical architecture, with data stored at the vertices (qubits), while pairwise couplings along the edges transmit information along the graph.

This paper focuses on a coherent optical network, which functions as a message-passing algorithm to solve the Ising problem and the related XY problem. These problems consist of finding the global minimum of the Ising potential $U(\sigma)$, where

$$U(\sigma) = -\frac{1}{2}\sum_{ij} J_{ij}\,\vec{\sigma}_i\cdot\vec{\sigma}_j \tag{1}$$

In (1), $J_{ij}$ is the coupling between spins $i$, $j$. The spins have unit norm, $|\vec\sigma_i| = 1$. For the Ising problem the spins are one-dimensional, $\sigma_i \in \{-1, +1\}$; for the XY problem they are two-dimensional unit vectors, $\vec\sigma_i = (\cos\theta_i, \sin\theta_i)$. Higher-dimensional problems (e.g. the Heisenberg model) can also be defined, but will not be considered here.
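As a concrete illustration, the potential (1) can be evaluated in a few lines of NumPy. The sketch below (a hypothetical 4-spin ferromagnetic ring, not an example from the paper) confirms that a state with two domain walls has higher energy than the ferromagnetic ground state.

```python
import numpy as np

def ising_energy(J, sigma):
    """Ising potential U = -(1/2) * sum_ij J_ij s_i s_j, Eq. (1)."""
    return -0.5 * sigma @ J @ sigma

# Ferromagnetic ring of N = 4 spins: J_ij = 1 for nearest neighbors.
N = 4
J = np.zeros((N, N))
for i in range(N):
    J[i, (i + 1) % N] = J[(i + 1) % N, i] = 1.0

ferro = np.ones(N)                        # ferromagnetic ground state
walls = np.array([1.0, 1.0, -1.0, -1.0])  # state with two domain walls
print(ising_energy(J, ferro), ising_energy(J, walls))  # -4.0 0.0
```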

The general Ising problem is NP-hard, but algorithms based on convex relaxation or heuristics can give approximate solutions in polynomial time. A number of schemes have been studied to map such algorithms directly onto electronic or photonic circuits.

The coherent Ising machine is a network of identical nonlinear gain elements symmetrically coupled by optical injection that solves the Ising problem by a minimum-gain principle. According to this principle, if the couplings are chosen to implement the potential $U(\sigma)$, the configuration that oscillates should minimize the potential (1). For the nonlinear gain, an injection-locked laser or an optical parametric oscillator (OPO) can be used. In practice, the spins in the machine are time-multiplexed as pulses in a synchronously-pumped laser or OPO, and couplings are realized by delay lines that couple pulses at different locations in the cavity (Fig. 1).

The Ising machine was first proposed as an injection-locked laser network. Later, the theory was extended to OPOs, and simulations showed promising performance on MAX-CUT Ising problems. Experimental results followed for small OPO networks, as well as simulations for larger G-set graphs.

In this paper, we analyze the OPO Ising machine for solving the simplest class of Ising problems: 1D and 2D ferromagnetic chains. Although these problems are trivial in the sense that the solutions are well-known, the analytic theory one can derive gives the reader a more lucid understanding of how the Ising machine actually works. Because of their simplicity, 1D and 2D models may serve as a good way to “benchmark” the performance of different Ising machines. Moreover, they are one of the simplest systems to realize in the laboratory, requiring only one delay line, allowing for direct comparison between the theory and currently realizable experiments. As a model experimental system, we use the four-wave mixing fiber OPO implemented in our previous paper.

Section 2 covers the theory of the time-multiplexed OPO Ising machine. Based on this theory, we derive semiclassical equations of motion for the OPO pulse amplitudes. In the original formulation of the Ising machine as a network of continuous-wave OPOs, these are stochastic differential equations, but for the pulsed case we show that they become difference equations, relating the pulse amplitudes between successive round trips.

These equations are solved in Sec. 3, where we show that the dynamics breaks down into two stages: a growth stage, where the field amplitudes are well below threshold and growth is linear, and a saturation stage, where the OPO amplitudes saturate, giving rise to nonlinear dynamics defined by domains and domain walls. Using this picture, in Sec. 4 we derive expressions for the correlation length, domain-wall density and domain-length histogram for the Ising machine solution. This is compared to experimental data from the fiber OPO of Inagaki et al.; we show that our theory matches the experimental results, while a simple thermal Ising model does not.

Sections 5-7 explore more complex systems that have not yet been realized in OPO experiments. In Sec. 5, the two-dimensional lattice is treated. The same growth / saturation stage picture applies, but during the latter we find 2D domains separated by 1D domain walls which move towards their center of curvature and collapse in a time quadratic in the domain size. XY models are treated in Sec. 6-7, where the basic equations are introduced and applied to 1D and 2D systems. Instead of domains, the XY model gives winding-number states for the 1D chain and vortices for 2D. These vortices resemble those from Berezinskii-Kosterlitz-Thouless theory, but they are generated by a non-thermal mechanism, and so their distribution is also athermal.

## 2 Fiber OPO Theory

First, we derive equations of motion for the pulse amplitudes in the cavity. For concreteness, we consider the case of a singly resonant fiber OPO (typical parameters, following Inagaki et al., are given in Table 1). In this system, a narrowband filter ensures that the signal $a$ is resonant, while the pump fields $b$, $c$ are not. The nonlinearity is provided by the degenerate four-wave mixing process ($b + c \to 2a$) in the nonlinear fiber.

If the OPO network is viewed as a computer, the "memory" is stored in the signal pulse amplitudes $a_i(t)$, where $i$ is the pulse index and $t$ is the round-trip number, which serves as a discretized time. The "processor" consists of the fiber, a nonlinear map which acts on each pulse independently, and the delay line(s), which create a linear coupling between the pulses. The "inputs" are the amplitudes of the pump pulses $b_i$, $c_i$, which can be programmed with an amplitude modulator placed in front of the pump laser.

Each round trip can be modeled as a cascade of three operations: nonlinear gain, linear loss, and coupling. Ignoring vacuum noise, this gives the following map:

$$a_i(t)\ \xrightarrow{\text{Fiber}}\ F[a_i(t)]\ \xrightarrow{\text{Loss}}\ \frac{F[a_i(t)]}{\sqrt{G_0}}\ \xrightarrow{\text{Coupling}}\ \sum_j C_{ij}\,\frac{F[a_j(t)]}{\sqrt{G_0}} = a_i(t+1) \tag{2}$$

Equation (2) relates $a_i(t+1)$ to $a_i(t)$, giving us an equation of motion for the OPO network. In the sections below, we obtain the nonlinear gain function $F[a]$ and the coupling matrix $C_{ij}$ that form the core of (2). Once these are known, Ising machines of arbitrary complexity can be simulated.
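To make the structure of (2) concrete, here is a minimal sketch of the round-trip iteration. The saturable gain below is a toy stand-in for the fiber map, not the gain function derived later; only its general shape (linear gain plus saturation) matters.

```python
import numpy as np

def F(a, g=2.0):
    """Toy saturable gain standing in for the fiber map F[a]
    (assumption: only the gain + saturation shape matters)."""
    return g * a / (1.0 + np.abs(a) ** 2)

G0 = 16.0 / 9.0              # round-trip power loss; g/sqrt(G0) = 1.5 > 1
C = np.eye(4)                # trivial coupling matrix (uncoupled pulses)

a = np.full(4, 0.01)         # weak seed field
for _ in range(200):
    a = C @ (F(a) / np.sqrt(G0))   # one round trip, per the map (2)

# Above-threshold fixed point: g/sqrt(G0) = 1 + a^2  =>  a = sqrt(0.5)
print(a[0])  # ~0.7071
```

The seed field grows linearly at first (net gain 1.5 per trip), then saturates at the fixed point, mirroring the growth/saturation picture developed in Sec. 3.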

### 2.1 Nonlinear Fiber

In the highly nonlinear fiber, the $\chi^{(3)}$ term gives rise to self-phase modulation (SPM), cross-phase modulation (XPM), and degenerate four-wave mixing (DFWM). In the limit of a flat-top pulse, SPM and XPM give constant phase shifts and can be cancelled by the appropriate phase matching, leaving only the DFWM term. In this paper we assume that the pulses are sufficiently long that the pulse amplitude is constant (in time) and dispersion can be neglected; in this case the fields depend only on the distance $z$ along the fiber, and the fiber field equations are:

$$\frac{da}{dz} = \gamma\,a^* b c - \tfrac{1}{2}\alpha a \tag{3}$$
$$\frac{db}{dz} = -\tfrac{1}{2}\gamma\,a^2 c^* - \tfrac{1}{2}\alpha b \tag{4}$$
$$\frac{dc}{dz} = -\tfrac{1}{2}\gamma\,a^2 b^* - \tfrac{1}{2}\alpha c \tag{5}$$

One can rescale the dependent variables to eliminate the constant $\gamma$; likewise, one can transform the independent variable to absorb the linear absorption term into an effective length. With these rescalings (denoted by bars), the equations simplify to

$$\frac{d\bar a}{d\bar z} = \bar a^*\bar b\bar c,\qquad \frac{d\bar b}{d\bar z} = -\tfrac{1}{2}\bar a^2\bar c^*,\qquad \frac{d\bar c}{d\bar z} = -\tfrac{1}{2}\bar a^2\bar b^* \tag{6}$$

and are solved on the interval $\bar z \in [0, 1]$. Gain occurs when the fields satisfy the correct phase relation; up to a global phase shift, this requires that $\bar a$, $\bar b$, $\bar c$ all be real and positive. Taking $\bar c$ constant (since $\bar c \gg \bar a, \bar b$), and using the constant of motion $B^2 \equiv \bar b^2 + \bar a^2/2$ that follows from detailed balance, one can solve for the signal in closed form.

The fiber output is $\bar a(1)$. If the total cavity loss is $G_0$, then the field passes through an additional loss term $G_0^{-1/2}$ after the fiber. For the fiber itself, we find

$$\bar a(1) = \bar a(0)\,e^{B\bar c_{\rm in}}\left[1 + \left(e^{2B\bar c_{\rm in}} - 1\right)\frac{1-\sqrt{1-\bar a(0)^2/2B^2}}{2}\right]^{-1} \tag{8}$$

The strong pump $c$ has a fixed amplitude, while the weak pump $b$ can be varied. Define $b_0$ as the cavity threshold in the absence of coupling. Linearizing (8) in the limit $a \to 0$, we find that threshold is achieved when $b = b_0$. In terms of the physical field amplitudes, the fiber input-output relation including both gain and loss is:

$$a_{\rm out} = a_{\rm in}\,G_0^{\frac{1}{2}\left(\sqrt{b_{\rm in}^2 + a_{\rm in}^2/2}\,\big/b_0\,-\,1\right)}\left[1 + \left(G_0^{\sqrt{b_{\rm in}^2 + a_{\rm in}^2/2}\,\big/b_0\,-\,1} - 1\right)\frac{1 - \sqrt{2b_{\rm in}^2/(a_{\rm in}^2 + 2b_{\rm in}^2)}}{2}\right]^{-1} \tag{9}$$

Equation (9) has two limits. When $a_{\rm in} \ll b_{\rm in}$, the terms in the square brackets can be ignored and the field experiences linear gain: $a_{\rm out} = G_0^{\frac{1}{2}(b_{\rm in}/b_0 - 1)}\,a_{\rm in}$. Thus, the (power) gain for the fiber above threshold is $G_0^{b_{\rm in}/b_0}$, and when $b_{\rm in} > b_0$ this exceeds the cavity loss $G_0$. On the other hand, when $a_{\rm in} \gg b_{\rm in}$, the exponential inside the square brackets dominates and the field is substantially reduced. This is the DFWM process working in reverse.

From quantum mechanics we know that the field is not described by a scalar variable but by a quantum state in a harmonic potential. In the truncated Wigner picture this gives rise to vacuum noise in the signal and pump fields. To treat this, we add fluctuations to the fields each round trip before they enter the fiber: $a \to a + w_a$, $b \to b + w_b$, where the $w$ are complex Gaussians that satisfy $\langle w\rangle = 0$, $\langle w^* w\rangle = \tfrac{1}{2}$. This is the discrete-time analogue of vacuum noise.

To account for the quantum noise associated with the loss, it is easiest to assume that the loss happens in a lumped element after the fiber, rather than concurrently with the gain. Near threshold, this is a reasonable approximation; elsewhere the noise is larger by a constant factor. Making use of this assumption, one must add vacuum fluctuations to the signal after the loss element.

All of these results can be applied to $\chi^{(2)}$ OPOs because the strong pump $c$ was presumed constant. Removing it from Eqs. (3)-(5), one recovers the standard SHG equations, with $\gamma\bar c$ as the nonlinear parameter.

For a more realistic treatment of the pulsed OPO, one must abandon the continuous-wave picture in Eqs. (3)-(5) and treat the pulse shape itself as a dynamical variable. The result is a "multimode" theory of the OPO, where the actual pulse is a weighted sum of normal modes. This is a topic unto itself, which we have treated at length in a separate paper; a key finding is that if the cavity dispersion is large enough, or a sufficiently narrowband filter is inserted in the cavity, only a single normal mode resonates, and multimode effects can be ignored. Although the multimode theory changes the exact expression for $F[a]$, this ultimately does not matter. We show in subsequent sections that the performance of the Ising machine depends only on the general form of $F$: the gain at threshold and the near-threshold saturation (which is cubic in the field).

### 2.2 Coupling

This section considers inter-pulse couplings mediated by delay lines and beamsplitters (Fig. 2). Recent experiments all use delay-line couplings, although this approach poses difficulties when many delay lines are involved. A $d$-bit delay has five parameters $(t, r, t', r', \phi)$, where $t^2 + r^2 = 1$, $t'^2 + r'^2 = 1$. With fast modulators, in principle one can make all of these parameters (except $d$) pulse-dependent, giving them an index $i$. Tracing the paths in Figure 2, and including the vacuum that enters through the lower-left beamsplitter, the input-output relation for a single delay is:

$$a_i \to t_i' t_i\,a_i + r_i' r_{i-d}\,e^{i\phi_i}a_{i-d} + \left(t_i' r_i\,w_i^{(J)} + r_i' t_{i-d}\,e^{i\phi_i}w_{i-d}^{(J)}\right) \tag{10}$$

where the $w_i^{(J)}$ are vacuum processes with $\langle w_i^{(J)*} w_j^{(J)}\rangle = \tfrac{1}{2}\delta_{ij}$. One must be careful to avoid negative indices: since the pulse train is periodic, an index $i - d < 0$ maps to $i - d + N$.

If the delays are static and balanced ($t' = t$, $r' = r$), then (10) takes the simplified form:

$$a_i \to t^2 a_i + r^2 e^{i\phi}a_{i-d} + t r\left(w_i^{(J)} + e^{i\phi}w_{i-d}^{(J)}\right) \tag{11}$$

This section will focus on the static-delay limit, since the experiments to date use static delays. But the theory and code can accommodate the arbitrary case.
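For illustration, the static-delay coupling (11) can be written as a circulant matrix (vacuum terms omitted). The sketch below uses arbitrary illustrative parameters; it demonstrates the key property that each row sums to $t^2 + r^2 = 1$, so a uniform (ferromagnetic) field passes through the coupler unchanged.

```python
import numpy as np

def delay_coupling(N, d, r2, phi=0.0):
    """Coupling matrix C for a single static d-bit delay, Eq. (11):
    a_i -> t^2 a_i + r^2 e^{i phi} a_{i-d}, indices mod N."""
    t2 = 1.0 - r2
    C = t2 * np.eye(N, dtype=complex)
    for i in range(N):
        C[i, (i - d) % N] += r2 * np.exp(1j * phi)
    return C

C = delay_coupling(N=8, d=1, r2=0.1)
u = np.ones(8, dtype=complex)      # uniform ferromagnetic field
print(np.allclose(C @ u, u))       # True: rows sum to t^2 + r^2 = 1
```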

By relabeling the paths so that the long path is the "cavity" path and the short path is the "delay", and swapping $r \leftrightarrow r'$, $t \leftrightarrow t'$, a delay can be converted into an "advance", which mixes $a_i$ with $a_{i+d}$ (again one must be careful with labeling at the periodic boundary). Thus it is possible to engineer symmetric length-$d$ couplings using two identical $d$-bit delays.

A 1-bit delay implements the nearest-neighbor coupling of a 1D Ising chain. To implement a 2D lattice, one needs a 1-bit delay for the horizontal coupling and an $N$-bit delay for the vertical. This gives a lattice with periodic but "offset" boundary conditions, as shown in Figure 2. To implement the lattice without the offsets requires three delays, with time dependence; that case is not treated here.

### 2.3 Linear and Near-Threshold Limits

The fiber OPO has two analytically tractable limits: the linear case $a \ll b$ and the near-threshold case $b \approx b_0$. These limits arise when we expand the fiber input-output relation (9) to third order in $a_{\rm in}$:

$$a_{\rm out} = a_{\rm in}\sqrt{G/G_0}\left[1 - \frac{G-(1+\log G)}{8}\left(a_{\rm in}/b\right)^2 + O\!\left((a_{\rm in}/b)^4\right)\right] \tag{12}$$

where

$$G = G_0^{\,b/b_0} \tag{13}$$

The linear limit applies when $a_{\rm in} \ll b$. Taking only the linear term in (12) and combining it with (11), one finds (for a single $d$-bit delay):

$$a_i(t+1) = G_0^{\frac{1}{2}(b/b_0-1)}\left[t^2 a_i + r^2 a_{i-d}\right] + (\text{noise terms}) \tag{14}$$

The near-threshold limit applies when $b \approx b_0$. In this case, (12) is expanded in powers of $b/b_0 - 1$:

$$a_{\rm out} = a_{\rm in} + \left[\frac{\log G_0}{2}\left(b/b_0-1\right) - \frac{G_0-(1+\log G_0)}{8}\left(a_{\rm in}/b_0\right)^2\right]a_{\rm in} \tag{15}$$

Combining this with (11) and noting that $t^2 = 1 - r^2$, we get a difference equation for $a_i(t)$. Below it is written for a single $d$-bit delay:

$$a_i(t+1) - a_i(t) = \left[\frac{\log G_0}{2}\left(b/b_0-1\right) - \frac{G_0-(1+\log G_0)}{8}\left(a_i(t)/b_0\right)^2\right]a_i(t) + r^2\left(a_{i-d}(t) - a_i(t)\right) + (\text{noise terms}) \tag{16}$$

Near threshold, the field tends to vary slowly in both position and time. This justifies replacing $a_i(t)$ with a smoothly-varying function $a(x, t)$ and replacing (16) with a PDE. Ignoring the noise terms, it is:

$$\frac{\partial a}{\partial t} + \frac{1}{2}\frac{\partial^2 a}{\partial t^2} = \left[\frac{\log G_0}{2}\left(b/b_0-1\right) - \frac{G_0-(1+\log G_0)}{8}\frac{a^2}{b_0^2}\right]a - r^2 d\,\frac{\partial a}{\partial x} + \frac{r^2 d^2}{2}\frac{\partial^2 a}{\partial x^2} \tag{17}$$

Steady-state solutions will drift with a speed $v = r^2 d$. Substituting the comoving coordinate $\xi = x - r^2 d\, t$, one obtains a driftless equation of motion which, upon neglecting the higher-order time-derivative term ($\partial^2 a/\partial t^2$), yields:

$$\frac{\partial a}{\partial t} = \left[\frac{\log G_0}{2}\left(b/b_0-1\right) - \frac{G_0-(1+\log G_0)}{8}\frac{a^2}{b_0^2}\right]a + \frac{r^2(1-r^2)d^2}{2}\frac{\partial^2 a}{\partial\xi^2} \tag{18}$$

Although less tractable numerically, the steady state of (18) can be found analytically, yielding helpful insights about domain walls as discussed in the next section.

## 3 Collective Dynamics of 1D Chain

For the fiber OPO Ising machine, the evolution of the 1D chain is a two-stage process: in the growth stage, the field is weak compared to the saturation value, pump depletion can be ignored, and the signal grows exponentially from the vacuum. Because of inter-pulse coupling, different (Fourier) modes will grow at different rates, the ferromagnetic mode growing fastest. This lasts for a time $T$, which is logarithmic in the saturation power and inversely proportional to the normalized pump $b/b_0 - 1$.

In the saturation stage, the field saturates to one of two values, $\pm a_{\rm sat}$. The sign depends on the sign of the field after the growth stage. Different regions will have different signs, called domains in analogy to the classical ferromagnet, and these domains will be separated by topological defects (domain walls). The domain walls are not fixed, and their mutual attraction can cause some of the smaller domains to annihilate.

### 3.1 Growth Stage

In the growth stage, the field follows Eq. (14). Restricting attention to the 1D chain using a single delay line, this becomes:

$$a_i(t+1) = G_0^{\frac{1}{2}(b/b_0-1)}\left[t^2 a_i + r^2 a_{i-1}\right] + (\text{noise terms}) \tag{19}$$

The linear map (19) is diagonalized by going to the Fourier domain, $a_i \to \tilde a_k$. For small $k$, the result is:

$$\tilde a_k(t+1) = G_0^{\frac{1}{2}(b/b_0-1)}\left(t^2 + r^2 e^{2\pi i k/N}\right)\tilde a_k(t) \approx \underbrace{G_0^{\frac{1}{2}(b/b_0-1)}\,e^{-2t^2r^2(\pi k/N)^2}}_{\text{gain}}\;\underbrace{e^{ik(2\pi r^2/N)}}_{\text{drift}}\;\tilde a_k(t) \tag{20}$$

The two effects, gain and drift, are separated in Eq. (20). Drift is a result of the unidirectional coupling; for a single delay line, the drift speed is $r^2$ pulses per round trip. The gain term depends on $k$, so different modes are amplified at different rates. This amplification stops when the fields reach their saturation value. If $N_{\rm sat}$ is the photon number at saturation and we start from vacuum noise, it takes approximately $T$ round trips to reach saturation, that is:

$$T = \frac{1}{b/b_0-1}\,\frac{\log(N_{\rm sat})}{\log(G_0)} \tag{21}$$

Since $T$ depends only logarithmically on $N_{\rm sat}$, which is $\gg 1$ in fiber OPOs, factors of two or three are not significant, so we can estimate $N_{\rm sat}$ from the pump energy at threshold.
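The gain-drift factorization in Eq. (20) is straightforward to check numerically; the sketch below uses illustrative values ($N = 1000$, $r^2 = 0.1$), not parameters from Table 1.

```python
import numpy as np

# One-round-trip Fourier factor from Eq. (20): t^2 + r^2 e^{2 pi i k/N}
# should factor into a Gaussian gain times a pure drift phase for small k.
N, r2 = 1000, 0.1
t2 = 1.0 - r2
for k in range(1, 10):
    exact = t2 + r2 * np.exp(2j * np.pi * k / N)
    gain = np.exp(-2 * t2 * r2 * (np.pi * k / N) ** 2)
    drift = np.exp(1j * k * 2 * np.pi * r2 / N)
    assert abs(exact - gain * drift) < 1e-4
print("factorization holds to <1e-4 for k < 10")
```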

Starting with vacuum and propagating the growth equation (20) for $T$ time steps, we find that at the end of the growth stage the Fourier modes will be distributed as follows:

$$\tilde a_k(T) \sim \sqrt{N_{\rm sat}}\;e^{-2r^2t^2T(\pi k/N)^2} \tag{22}$$

The modes with smaller $|k|$ have larger amplitudes, suggesting that the nearest-neighbor interaction forms some kind of short-range order. A good measure of this is the autocorrelation function $R(x) = \langle a_i a_{i+x}\rangle$. Before saturation, $R(x)$ is also a Gaussian:

$$R(x) \sim \langle a_i a_{i+x}\rangle = \sum_k e^{2\pi i k x/N}\langle\tilde a_k^*\tilde a_k\rangle \sim e^{-x^2/2x_0^2},\qquad x_0 \equiv \sqrt{2T}\,rt \tag{23}$$

### 3.2 Saturation Stage

In the next stage, pump depletion sets in and the fields inside the OPOs saturate. The simplest way to model this is to assume that the interaction term is negligible at this stage. Under this simple saturation assumption (SSA), the field in each OPO grows independently until it reaches one of two saturation values, $\pm a_{\rm sat}$. The sign of the initial field is preserved, and all of its amplitude information is lost. This can be expressed with a sign function:

$$a_i(\infty) = a_{\rm sat}\,\mathrm{sign}\left[a_i(T)\right] \tag{24}$$

Rather than collapsing into a single ferromagnetic state, the system forms domains of fixed spin, separated by fixed domain walls. This can be seen in the center plot of Figure 3.
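The growth-plus-SSA picture can be sketched numerically: below, a Gaussian random field with the spectrum (22) stands in for the growth stage (an illustrative shortcut, not the full OPO simulation), and (24) converts it to spins. On a ring, the resulting domain-wall count is necessarily even.

```python
import numpy as np

rng = np.random.default_rng(0)
N, x0 = 1000, 10.0

# Growth-stage stand-in: Gaussian random field whose Fourier amplitudes
# follow Eq. (22), |a_k| ~ exp(-x0^2 (pi k/N)^2), up to normalization.
k = np.fft.fftfreq(N) * N
spectrum = np.exp(-(np.pi * k / N) ** 2 * x0 ** 2)
a_T = np.real(np.fft.ifft(spectrum * np.fft.fft(rng.normal(size=N))))

# Saturation stage under the simple saturation assumption, Eq. (24):
spins = np.sign(a_T)
walls = int(np.sum(spins != np.roll(spins, 1)))
print(walls)  # even; roughly N/(pi*x0) ~ 32 walls expected (Sec. 4.2)
```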

However, Figure 3 also reveals that the domain walls are not necessarily abrupt phase jumps as (24) would suggest. Depending on the coupling and pump strength, domain walls can be quite wide. Near threshold, the wall shape admits an analytic solution via (17). Replacing $x \to x - vt$ as in Sec. 2.3, a change of variables reduces (17) to the canonical form

$$\frac{\partial\bar a}{\partial\bar t} = (1-\bar a^2)\bar a + \frac{1}{2}\frac{\partial^2\bar a}{\partial\bar x^2},\qquad \left(\bar a = a_0^{-1}a,\ \ \bar x = (x-vt)/\ell,\ \ \bar t = t/\tau\right) \tag{26}$$

where

$$a_0 = 2b_0\sqrt{\frac{(b/b_0-1)\log G_0}{G_0-(1+\log G_0)}},\qquad \ell = \sqrt{\frac{2r^2(1-r^2)}{(b/b_0-1)\log G_0}},\qquad v = r^2,\qquad \tau = \frac{2}{(b/b_0-1)\log G_0} \tag{27}$$

are the saturation field, domain-wall length, drift speed, and relaxation time, respectively. Equation (26) has an analytic steady-state solution: $\bar a(\bar x) = \tanh(\bar x - \bar x_w)$. This is the domain wall.
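A quick finite-difference check confirms that the $\tanh$ kink is a stationary solution of (26):

```python
import numpy as np

# Stationarity check: a(x) = tanh(x) should satisfy (1 - a^2) a + a''/2 = 0,
# the time-independent form of Eq. (26).
x = np.linspace(-5, 5, 2001)
dx = x[1] - x[0]
a = np.tanh(x)
a_xx = np.gradient(np.gradient(a, dx), dx)
residual = (1 - a ** 2) * a + 0.5 * a_xx
print(np.max(np.abs(residual[2:-2])))  # small (finite-difference error only)
```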

The left plot of Figure 4 zooms in on a domain wall. As the pump grows, the wall gets sharper, its width decreasing as the length $\ell$ given in (27). If the pump is very strong or the coupling is weak, $\ell \lesssim 1$ and the smoothly-varying field assumption behind (17) breaks down. However, it seems to hold quite well for the values chosen here (the solid lines in the figure are the $\tanh$ solution).

Domain walls are dynamic objects. In the presence of a perturbation, they move. Performing perturbation theory about the $\tanh$ solution, one finds that the Hessian is singular: most of its eigenvalues are $O(1)$ or larger, but for the vector $\partial\bar a/\partial\bar x$, it is zero. While other perturbations are strongly confined, perturbations along the $\partial\bar a/\partial\bar x$ direction are unimpeded. These correspond to moving the domain wall left or right. We can deduce the domain-wall velocity by taking the inner product with this zero mode (the eigenvectors are orthogonal):

$$\bar v_w = -\left[\int\frac{\partial\bar a}{\partial\bar x}\frac{\partial\bar a}{\partial\bar x}\,d\bar x\right]^{-1}\int\frac{\partial\bar a}{\partial\bar t}\frac{\partial\bar a}{\partial\bar x}\,d\bar x = -\frac{3}{4}\int\mathrm{sech}^2(\bar x-\bar x_w)\,\frac{\partial\bar a}{\partial\bar t}\,d\bar x \tag{28}$$

Consider a function with two domain walls at $\bar x = \pm\bar L/2$. The precise way they are "glued together" at $\bar x = 0$ only matters to second order in the perturbation theory; $\bar a = \tanh(\bar x + \bar L/2) - \tanh(\bar x - \bar L/2) - 1$ is a valid solution. Applying (28), one finds the following domain-wall speed and collision time:

$$\bar v_w = \frac{3}{2}\,\mathrm{sech}^4(\bar L/2),\qquad \bar T_{\bar L} = \frac{1}{48}\,e^{2\bar L} \tag{29}$$

As the domain walls move, smaller domains will evaporate while large domains remain unaffected. By (29), all the domains that survive after a time $\bar t$ have a size $\bar L \gtrsim \frac{1}{2}\log(48\,\bar t)$.

The right plot in Fig. 4 shows the formation of domain walls as a color plot in both the pulse index and time . The domain drift is obvious here. In addition, the average domain size clearly shrinks the further the system is from threshold. Looking closely, one also sees events where domain walls collide and annihilate some of the smaller domains – but in general this is rare, because the domains that form by time tend to be moderate in size, and the lifetime (29) can be quite long.

## 4 Final-State Statistics

From the linear- and saturation-stage theory of Section 3, we can calculate statistical properties of the final-state ($t\to\infty$) system. These properties are of interest because they can be used to benchmark the performance of different Ising machines, or to compare the Ising machine against other optimizers. In this section, we compute the autocorrelation function, defect density, success probability and domain-length histogram for the 1D Ising machine. These are measurable quantities, allowing for a direct comparison between theory and experiment.

### 4.1 Autocorrelation Function

The autocorrelation function, given by $R(x) = \langle a_i a_{i+x}\rangle$, is a key quantity in statistical mechanics. For the thermal Ising model with nearest-neighbor coupling, it falls off exponentially with distance in one dimension, $R(x) \sim e^{-x/x_0}$.

Since the Ising machine is not in thermal equilibrium, we do not expect a priori that $R(x)$ will be exponential. Indeed, at the end of the growth stage, Eq. (23) shows that $R(x)$ is a Gaussian. The easiest way to compute $R(x)$ as $t\to\infty$ is to invoke the simple saturation approximation (24). Replacing $a_i(\infty) = a_{\rm sat}\,\mathrm{sign}[a_i(T)]$ and normalizing, the autocorrelation at $t\to\infty$ is found to be:

$$R(x;\infty) = 1 - 2P\left(a_i(T)\,a_{i+x}(T) < 0\right) \tag{30}$$

where $T$ is the saturation time. Since the evolution for $t \le T$ is approximately linear, the joint probability distribution of $(a_i(T), a_{i+x}(T))$ is a two-dimensional Gaussian. Its covariance is related to the autocorrelation at time $T$, $R(x;T) \sim e^{-x^2/2x_0^2}$:

$$\sigma_{i,i+x} = \begin{bmatrix} 1 & e^{-x^2/2x_0^2} \\ e^{-x^2/2x_0^2} & 1 \end{bmatrix} \tag{31}$$

Following (30), the autocorrelation may be expressed as an integral over a Gaussian with linear constraints:

$$P\left(a_i(T)\,a_{i+x}(T) < 0\right) = 2\int_Q \frac{e^{-\frac{1}{2}\mathbf{a}^{\rm T}\sigma_{i,i+x}^{-1}\mathbf{a}}}{2\pi\sqrt{\det\sigma_{i,i+x}}}\,d^2\mathbf{a} \tag{32}$$

where $Q$ is the upper-left quadrant in the $(a_i(T), a_{i+x}(T))$ plane. To solve this, perform a linear transformation that diagonalizes the quadratic form in the exponent; $Q$ is deformed to a pie slice, and the resulting integral is proportional to its angle. The autocorrelation becomes:

$$R(x) = 1 - \frac{4}{\pi}\tan^{-1}\sqrt{\tanh\left(x^2/4x_0^2\right)} \tag{33}$$
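Equation (33) can be checked against a direct Monte Carlo implementation of (30)-(31): draw correlated Gaussian pairs and average the sign product. (Sample size and seed below are arbitrary choices for illustration.)

```python
import numpy as np

def R_theory(x, x0):
    """Final-state autocorrelation, Eq. (33)."""
    return 1 - (4 / np.pi) * np.arctan(np.sqrt(np.tanh(x ** 2 / (4 * x0 ** 2))))

rng = np.random.default_rng(1)
x0, n = 10.0, 200_000
for x in [2.0, 5.0, 10.0, 20.0]:
    rho = np.exp(-x ** 2 / (2 * x0 ** 2))     # off-diagonal of Eq. (31)
    u = rng.normal(size=n)
    v = rho * u + np.sqrt(1 - rho ** 2) * rng.normal(size=n)
    R_mc = np.mean(np.sign(u) * np.sign(v))   # <sign(a_i) sign(a_{i+x})>
    assert abs(R_mc - R_theory(x, x0)) < 0.02
```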

To compute $R(x)$ from the experimental data, one must first reconstruct the pulse amplitudes from the measurement record. In Inagaki et al., no local oscillator is present, so the signal is passed through a Mach-Zehnder interferometer with a one-bit delay line, measuring the interference between neighboring pulses. If the pulse energy is the same for each pulse, this interference gives the relative phase between neighboring pulses; a negative value indicates a phase flip. This is plotted in the upper-left panel of Fig. 5. Taking the $a_i$ to be real for the degenerate OPO, we can invert the relation between the interference record and the $a_i$ to reconstruct the original amplitude sequence. It is then straightforward to compute the autocorrelation function and the correlation length.

The right plot of Fig. 5 shows the autocorrelation length as a function of pump amplitude, obtained by fitting experimental data to (33). The experimental values of $x_0$ agree with Eq. (23); a particular fit is shown in the inset.

Although Eq. (33) looks like an exponential to the unaided eye, plotting them on top of each other shows that the former is a much better fit to the experimental data, as shown in the inset plot. Nevertheless, the best exponential fit to (33) has a decay length proportional to $x_0$; thus, we can obtain $x_0$ from experimental data by fitting the autocorrelation to an exponential. The right plot in Fig. 5 shows this for a variety of pump powers. The agreement with experimental data is reasonably good.

### 4.2 Defect Density

Another key statistic is the defect (domain wall) density $n_d$. This is the average number of domain walls divided by the size of the chain $N$. The average domain length is then $L_d = 1/n_d$. For a thermal Ising model with an energy cost $J$ per domain wall, one has $n_d = e^{-\beta J}/(1 + e^{-\beta J})$.

Since $a_i(\infty)$ has fixed amplitude, one can compute the defect density from the autocorrelation function: $n_d = \frac{1}{2}\left[1 - R(1)\right]$. For $x_0 \gg 1$, (33) may be linearized about $x = 0$, giving the result:

$$n_d = \frac{1}{\pi x_0},\qquad L_d = \pi x_0,\qquad x_0 = \sqrt{2T}\,rt \tag{34}$$
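The linearization leading to (34) is easy to verify numerically: for $x_0 \gg 1$, the quantity $\frac{1}{2}[1 - R(1)]$ computed from (33) approaches $1/\pi x_0$.

```python
import numpy as np

def R33(x, x0):
    """Autocorrelation of Eq. (33)."""
    return 1 - (4 / np.pi) * np.arctan(np.sqrt(np.tanh(x ** 2 / (4 * x0 ** 2))))

# Defect density n_d = [1 - R(1)]/2 approaches 1/(pi x0) for x0 >> 1:
for x0 in [10.0, 30.0, 100.0]:
    n_d = (1 - R33(1.0, x0)) / 2
    assert abs(n_d * np.pi * x0 - 1) < 1e-2   # within 1% of Eq. (34)
```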

Figure 6 (left) compares experimental data from Inagaki et al. (Fig. 3) to both Eq. (34) and numerical simulations. The data match the simulations as $t\to\infty$, but deviate from Eq. (34). This suggests that the full numerical model works well, but Eq. (34), which relies on the simple saturation assumption (24), is inaccurate. This is the result of domain-wall motion and collision in the saturation stage, which reduces the number of defects as $t\to\infty$.

### 4.3 Domain Length Histograms

Experimental data for the domain-length distribution is plotted in Fig. 6 (center). There is a reasonable fit between the data and numerical simulations as $t\to\infty$. Note, however, that the calculated histogram at $t = T$ differs from that at $t\to\infty$. This difference reflects the domain-wall dynamics in the saturation phase. In particular, since small domains evaporate faster than large domains, the population of small domains is depleted, and the average domain length grows. Since $n_d = 1/L_d$, an increase in domain length results in a decrease in defect density, giving rise to the difference between the $t = T$ and $t\to\infty$ lines in the left plot.

In a thermal Ising model, the probability distribution of a spin depends only on its nearest neighbors; mathematically this makes it a Markov chain in $x$. Thus, the domain-length distribution should be exponential in $L$: $P(L) \propto e^{-L/L_d}$. The histograms in Fig. 6 have exponential tails, but are clearly not exponential for $L$ near zero. This means that the Ising machine never reaches thermal equilibrium, even as $t\to\infty$. Rather, it "freezes out" fluctuations accumulated during the linear growth stage, through a highly nonlinear process involving domain-wall motion and collisions. Only if one waits an exponentially long time will the larger domains evaporate, bringing the machine to the ground state.

### 4.4 Success Probability

The success probability of the Ising machine is defined as the probability that it reaches the ground state at some time $t_f$. The chosen $t_f$ depends on experimental parameters; it should be large compared to the saturation time $T$, but not exponentially large (since an exponentially long wait would always give the ground state). Figure 6 (right) plots the success probability (numerically computed at fixed $t_f$) as a function of system size and pump power. As expected, the probability is greatest near threshold for small systems, where the average defect number is small.

If we assume the final state is thermal, the success probability can be calculated analytically. For an $N$-spin ring with $n$ domain walls, each costing energy $J$, and periodic boundary conditions, the partition function is:

$$Z = \sum_n \binom{N}{n}\left(1+(-1)^n\right)e^{-n\beta J} = \left(1+e^{-\beta J}\right)^N + \left(1-e^{-\beta J}\right)^N \tag{35}$$

The success probability is the ground-state probability for the thermal system. The ground state has energy zero and degeneracy 2, so $P_s = 2/Z$. For low defect densities, $(1 \pm e^{-\beta J})^N \approx e^{\pm N e^{-\beta J}}$, and the success probability becomes:

$$P_s = \mathrm{sech}\left(N e^{-\beta J}\right) \tag{36}$$

Note that $e^{-\beta J}$ is the approximate defect density (for $\beta J \gg 1$). Thus, in analogy to the thermal model, we suspect that the success probability of the 1D Ising machine should depend on the defect density as well. Using the relation $n_d = 1/\pi x_0$ from (34), in the inset figure, $P_s$ is plotted against $N n_d$ for all values shown in the larger plot. Like the thermal model, the full Ising machine success probability falls off exponentially for high $N n_d$, fitting reasonably well to a sech form analogous to (36).
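The sech approximation (36) can be compared against the exact $P_s = 2/Z$ from (35); for illustrative values ($\beta J = 5$, $N = 100$, not taken from the experiment) the two agree closely.

```python
import numpy as np

def Ps_exact(N, bJ):
    """Ground-state probability P_s = 2/Z for the N-spin ring, Eq. (35)."""
    x = np.exp(-bJ)
    return 2 / ((1 + x) ** N + (1 - x) ** N)

N, bJ = 100, 5.0
Ps_approx = 1 / np.cosh(N * np.exp(-bJ))   # sech limit, Eq. (36)
print(Ps_exact(N, bJ), Ps_approx)  # agree to better than 1%
```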

## 5 2D and Frustrated Systems

### 5.1 2D Square Lattice

The two-dimensional Ising lattice exhibits richer physics than its 1D counterpart. In particular, the thermal 2D system has a phase transition at finite temperature with long-range order below the transition temperature. Likewise, we suspect that a mechanism must exist to ensure long-range order in the 2D Ising machine.

Referring back to Figure 2, an Ising lattice can be realized in an OPO network using a 1-bit and an $N$-bit delay. This implements the nearest-neighbor couplings along the horizontal and vertical directions. If the spins are serialized in C (row-major) order, then periodic boundary conditions are enforced. There is a slight vertical offset compared to standard periodic boundary conditions (see Fig. 2), but in the limit $N \gg 1$ with ferromagnetic couplings, this offset is negligible.

As before, the dynamics are described by a growth stage and a saturation stage. In the growth stage, the Fourier modes are amplified independently; in analogy to Eq. (22), we have:

$$\tilde a_k(T) \sim \sqrt{N_{\rm sat}}\;e^{-2r^2t^2T\,\pi^2(k_x^2+k_y^2)/N^2} \tag{37}$$

This gives the same autocorrelation function, generalized to two dimensions: $R(\vec x) \sim e^{-|\vec x|^2/2x_0^2}$, with $x_0 = \sqrt{2T}\,rt$. Here $T$ is the saturation time; see Sec. 3.1.

Growth-stage fluctuations are imprinted on the domain structure of the OPO, and persist for some time. Since these fluctuations are longer-range the larger the saturation time $T$, the Ising machine displays longer-range order when the pump is closer to threshold, just like the 1D case. Figure 7 shows the state of the machine for five different pump powers. The stronger the pump, the smaller the domains that form.

After saturation, we can proceed analytically as long as the pump is near threshold. Invoking the near-threshold limit (15) and inserting both horizontal and vertical delays, we obtain the two-dimensional analog of (16):

$$a_{i,j}(t+1) - a_{i,j}(t) = \left[\frac{\log G_0}{2}\left(b/b_0-1\right) - \frac{G_0-(1+\log G_0)}{8}\left(a_{i,j}(t)/b_0\right)^2\right]a_{i,j}(t) + \left[r^4 a_{i-1,j-1}(t) + r^2t^2\left(a_{i-1,j}(t) + a_{i,j-1}(t)\right) - (1-t^4)\,a_{i,j}(t)\right] \tag{38}$$

Near threshold, the field tends to vary slowly in both position and time. Following the same procedure used to obtain (18), one replaces the discrete increments with derivatives and drops higher-order terms, obtaining:

$$\frac{\partial a}{\partial t} = \left[\frac{\log G_0}{2}\left(b/b_0-1\right) - \frac{G_0-(1+\log G_0)}{8}\frac{a^2}{b_0^2}\right]a + \frac{r^2(1-r^2)}{2}\left[\frac{\partial^2 a}{\partial\xi_x^2} + \frac{\partial^2 a}{\partial\xi_y^2}\right] \tag{39}$$

where $\xi_x = x - r^2 t$ and $\xi_y = y - r^2 t$ are the comoving coordinates. Setting $\bar a = a/a_0$, $\bar x = \xi_x/\ell$, $\bar y = \xi_y/\ell$, and $\bar t = t/\tau$, Eq. (39) is converted to its canonical form (the 2D version of (26)):

$$\frac{\partial\bar a}{\partial\bar t} = (1-\bar a^2)\bar a + \frac{1}{2}\left(\frac{\partial^2\bar a}{\partial\bar x^2} + \frac{\partial^2\bar a}{\partial\bar y^2}\right) \tag{40}$$

with $a_0$, $\ell$, and $\tau$ given in (27). The steady-state solutions to (40) are linear domain walls with the same $\tanh$ profile as in the 1D case.

Curved domain walls will move towards the center of curvature at a rate proportional to the curvature. This can be seen intuitively if we imagine each spin on the wall picking a sign based on a majority vote of its neighbors. The rate can be computed by considering the special case of a circular domain. Working in cylindrical coordinates, (40) becomes:

$$\frac{\partial\bar a}{\partial\bar t} = (1-\bar a^2)\bar a + \frac{1}{2}\frac{\partial^2\bar a}{\partial\bar r^2} + \frac{1}{2\bar r}\frac{\partial\bar a}{\partial\bar r} \tag{41}$$

Here, the term $\frac{1}{2\bar r}\frac{\partial\bar a}{\partial\bar r}$ acts as a perturbation to the 1D equation (26). Applying the same result used to compute the attraction of neighboring walls (Eq. (28)), the drift velocity is

$$\bar v_w = -\frac{1}{2\bar r} \tag{42}$$

For a circular domain of size , this gives
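The quadratic collapse time follows directly from (42): since $\bar v_w = -1/2\bar r$ implies $d(\bar r^2)/d\bar t = -1$, a circular domain of initial radius $\bar r_0$ collapses in $\bar t \approx \bar r_0^2$. A minimal numeric check (illustrative value $\bar r_0 = 3$):

```python
# Euler integration of the wall-drift law dr/dt = -1/(2r), Eq. (42).
r0, dt = 3.0, 1e-4
r, t = r0, 0.0
while r > 0.05:
    r -= dt / (2 * r)
    t += dt
print(t)  # ~ r0**2 = 9.0, quadratic in the domain size
```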