Monte Carlo simulations of random non-commutative geometries


John W. Barrett, Lisa Glaser

School of Mathematical Sciences
University of Nottingham
University Park
Nottingham NG7 2RD, UK

john.barrett@nottingham.ac.uk, lisa.glaser@nottingham.ac.uk

May 11th, 2016
Abstract

Random non-commutative geometries are introduced by integrating over the space of Dirac operators that form a spectral triple with a fixed algebra and Hilbert space. The cases with the simplest types of Clifford algebra are investigated using Monte Carlo simulations to compute the integrals. Various qualitatively different types of behaviour of these random Dirac operators are exhibited. Some features are explained in terms of the theory of random matrices but other phenomena remain mysterious. Some of the models with a quartic action of symmetry-breaking type display a phase transition. Close to the phase transition the spectrum of a typical Dirac operator shows manifold-like behaviour for the eigenvalues below a cut-off scale.

1 Introduction

A spectral triple is a way of encoding a geometry using a Dirac operator [10]. There is a Dirac operator acting on a Hilbert space and an algebra that acts on the same space. Examples with a commutative algebra are given by Riemannian manifolds, where the algebra is the algebra of functions on the manifold and the Dirac operator is the usual one acting on spinor fields. However, the point of spectral triples is that the algebra is allowed to be non-commutative, leading to a generalisation of the notion of geometry.

A random geometry is a class of geometries that fluctuates according to a probability measure. In this article, the probability measure is taken to be a constant times

e^{−S(D)} dD    (1)

using a real-valued 'action' S and a standard measure dD on the space of Dirac operators.

To make this computable, the class of geometries is taken to be the Dirac operators on a fixed finite-dimensional Hilbert space; thus the space of Dirac operators is a space of matrices. It turns out that the axioms for these finite spectral triples are all linear and so this space is a vector space [5]. Therefore one can take dD to be its Lebesgue measure, which is unique up to an overall constant. Thus the object of study is a random matrix model where the matrices are constrained to be Dirac operators.

The algebra in this construction is also fixed, and is taken to be the algebra of N × N matrices, M_N(C). Spectral triples with this algebra are known as fuzzy spaces [19] and are the simplest type of non-commutative spectral triple. Allowing the algebra to be non-commutative is important because it allows a new type of finite-dimensional approximation to a manifold. Staying within the realm of commutative algebras would lead to the algebra of functions on a finite set of points, which is a lattice approximation to a manifold; simple examples of such random commutative spectral triples are studied in [18, 12]. The fuzzy spaces are not lattice approximations and so the study of these is complementary to the study of random lattices. Intuitively one can think of the algebra as consisting of the functions on a space with a certain minimum wavelength that is determined by N; this picture is known to be accurate for the most-studied example of the fuzzy two-sphere [13].

The purpose of this paper is to study the simplest examples of random fuzzy spaces by computing the statistics of the eigenvalues of D using a Markov Chain Monte Carlo algorithm. The examples are determined by the value of N and the type of gamma matrices used in the Dirac operator (explicit formulas are given in section 2.1). A type (p, q) geometry is one in which there are p gamma matrices that square to 1 and q gamma matrices that square to −1. The examples studied here are types (1,0), (0,1), (2,0), (1,1), (0,2) and (0,3). It would be interesting to go to higher types but these already exhibit an interesting set of different behaviours, some aspects of which are not yet understood from a theoretical point of view.

The form of the action S has not yet been specified. In this paper it is assumed to be spectral, which means it is of the form

S(D) = Tr V(D) = Σ_i V(λ_i)    (2)

for some potential function V, with λ_i denoting the eigenvalues of the self-adjoint operator D. The Connes-Chamseddine spectral action [7, 8], in which V(u) → 0 as u → ±∞, is not suitable because the integral for the partition function

Z = ∫ e^{−S(D)} dD    (3)

does not converge. This is because e^{−S(D)} converges to a finite non-zero constant as the eigenvalues of D go to infinity, so the integrand is not integrable over the non-compact space of Dirac operators.

In fact, it is necessary that V(u) → ∞ as u → ±∞ instead. The simplest cases are investigated here, namely

V(D) = g2 D² + g4 D⁴    (4)

with g4 > 0, or g4 = 0 and g2 > 0. By a simple change of variables in the integral, D ↦ cD with c > 0, one can assume that either g4 = 1, or g4 = 0 and g2 = 1, so one need only study these cases. Note that one could choose other spectral actions and it is possible that the results obtained here might motivate the study of other choices.
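Explicitly, under the rescaling D ↦ cD with c > 0 the two terms of the action scale as

S(cD) = g2 c² Tr D² + g4 c⁴ Tr D⁴,

so for g4 > 0 the choice c = g4^{−1/4} brings the action to the form g̃2 Tr D² + Tr D⁴ with g̃2 = g2 / √g4, while for g4 = 0 and g2 > 0 the choice c = g2^{−1/2} sets the quadratic coupling to 1. The Lebesgue measure changes only by an overall constant, which cancels between numerator and denominator in every expectation value.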

When g4 > 0 and g2 < 0 the potential has a symmetry-breaking double well form. In random matrix models it is known that this potential leads to a phase transition [9] and so this possibility is investigated here. It is shown here that, at least in some of the random Dirac operator models, there is good numerical evidence for the existence of a phase transition.

The eigenvalue distribution is plotted for several values of the coupling constants and matrix sizes to exhibit the typical behaviours. On a (commutative) Riemannian manifold of dimension d the eigenvalue distribution, or density of states, for the Dirac operator is approximately the density of flat space

ρ(λ) ∝ |λ|^{d−1}    (5)

when the eigenvalues are large enough. Most of the plots for random non-commutative geometries presented here look nothing like this, except that some of them are approximately constant (d = 1) for some range of eigenvalues.

The exception is close to the phase transition, where the distribution does indeed look very much like (5) for a range of eigenvalues below a high-energy cutoff. Thus as far as the eigenvalues are concerned, the random non-commutative geometries are behaving something like random Riemannian geometries in this regime. The results presented here show that this is a promising area for future investigation. A phase transition to a geometric phase in a multi-matrix model with a rather different Yang-Mills-type action has also been observed in [3, 23].

The motivation for the present work comes from the close relation between random geometries and models of quantum gravity, though one does not need to know anything about quantum gravity to understand the results. Most approaches to random geometry have been stimulated by work on quantum gravity but some of them (e.g. dynamical triangulations or Liouville gravity) have found wider application. It is possible that random Dirac operators will also find other applications beside quantum gravity. In quantum gravity the maximum eigenvalue has a ready interpretation as a natural cutoff to gravitational phenomena at the Planck scale. However in a wider context it can be interpreted simply as a finite limit to the resolution to which a geometry is defined.

There are features in common with other models of quantum or random geometry, most notably the existence of a phase transition, which is also evident in dynamical triangulations [2] and lattice simulations [15]. There are however some features of our system that are quite different from those of other models. One point is that the requirement that the action have a global minimum on the non-compact space of Dirac operators is a non-trivial constraint on the model. In other theories of discrete geometry, like causal dynamical triangulations [1] or causal set theory [17], the space of geometries explored in Monte Carlo is a combinatorial space of a finite number of elements and the code is guaranteed to reach equilibrium after a finite, although possibly long, time.

Another interesting feature is the freedom to rescale the Dirac operator and the couplings using the change of variables mentioned above. Monte Carlo simulations show that the rescaling does not change the qualitative features of our system, e.g. relative differences between eigenvalues remain unchanged. This can be explained by the use of the Lebesgue measure, which does not distinguish any particular scale of energy. This is in marked contrast to systems such as the Ising model, or the causal set model of 2d orders [25].

The non-commutativity also distinguishes this approach from most others. Using a finite-dimensional commutative algebra necessarily leads to a lattice model of quantum geometry defined on a finite set of points. The use of non-commutative geometry allows a more general set of finite-dimensional models where the algebra is an algebra of matrices. Thus one can construct perfectly computable models of random geometry that are not lattice models. Moreover, the standard model of particle physics has a non-commutative geometry using exactly the same framework [4, 11], so the hope is it will be easy to combine the two into a unified model of gravity and particle physics.

The technical details of the Dirac operators, observable functions and Monte Carlo method are given in section 2. The results for the action g2 Tr D² are given in section 3, where it is explained how the results relate to the standard theory of Gaussian random matrices. Actions including a Tr D⁴ term are studied in section 4, with particular attention paid to the symmetry-breaking case which exhibits a phase transition. Section 5 discusses the interpretation of the results. The expansions of the action in terms of the constituent matrices of the Dirac operators are given in detail in appendix A.

2 Technical details

2.1 The Dirac operators

The spectral triples considered here are 'real spectral triples', which consist of a finite-dimensional Hilbert space H together with some operators acting in H. These are an algebra A, a chirality operator Γ, an antilinear 'real structure' J and a self-adjoint Dirac operator D. For a given random geometry model, H, A, Γ and J are fixed but D is allowed to vary, subject to the axioms of non-commutative geometry.

The axioms are solved in [5] to give explicit forms of the Dirac operator in terms of Hermitian matrices H_i and anti-Hermitian traceless matrices L_i according to the formulas below. There are no other constraints on these matrices, so these are the freely-specifiable data for the Dirac operator.

The Dirac operator acts on V ⊗ M_N(C), with V the space on which the gamma matrices act. The gamma matrices are assumed to form an irreducible representation of the Clifford algebra, which implies that the chirality operator is trivial for p + q odd. The dimension of V is 2^{(p+q)/2} for p + q even and 2^{(p+q−1)/2} for p + q odd. In the first two cases the sole gamma matrix is just 1 or i respectively. In the remaining cases the gamma matrices are 2 × 2 matrices, distinct gamma matrices anti-commuting. As usual, [a, b] = ab − ba denotes the commutator and {a, b} = ab + ba the anti-commutator of matrices.

Type (1,0)

D = {H, ·}, that is, Dm = Hm + mH, with H Hermitian.    (6)

Type (0,1)

D = i[L, ·], that is, Dm = i(Lm − mL), with L anti-Hermitian and traceless.    (7)

Type (2,0)

D = γ¹ ⊗ {H1, ·} + γ² ⊗ {H2, ·}, with Hermitian gamma matrices, e.g. γ¹ = σ¹, γ² = σ².    (8)

Type (1,1)

D = γ¹ ⊗ {H, ·} + γ² ⊗ [L, ·], with (γ¹)² = 1 and (γ²)² = −1.    (9)

Type (0,2)

D = γ¹ ⊗ [L1, ·] + γ² ⊗ [L2, ·], with anti-Hermitian gamma matrices, e.g. γ^a = iσ^a.    (10)

Type (0,3)

D = γ¹ ⊗ [L1, ·] + γ² ⊗ [L2, ·] + γ³ ⊗ [L3, ·] + γ¹γ²γ³ ⊗ {H, ·}, with γ^a = iσ^a, so that γ¹γ²γ³ = 1.    (11)

A type (p, q) geometry has a signature s = q − p mod 8 which determines some of the characteristics of the spectrum of D. These properties are well-known, holding also for the case of a Riemannian geometry in dimension d, which corresponds to signature s = d mod 8. The properties can be seen in the Monte Carlo simulations below.

Symmetry

For s ≠ 3, 7, if λ is an eigenvalue then so is −λ.

Doubling

For s = 2, 3, 4, each eigenvalue appears with an even multiplicity.

The proof of these is given briefly here. For even p + q, the chirality operator Γ is non-trivial. It is Hermitian and has eigenvalues ±1. The Dirac operator changes the chirality, ΓD = −DΓ. If v is an eigenvector with eigenvalue λ then Γv is an eigenvector with eigenvalue −λ. As a result, the spectrum of the Dirac operator is symmetric around 0. A similar argument holds for s = 1, 5 using the fact that JD = −DJ, so that v and Jv have eigenvalues of opposite sign.

For the doubling property, if s = 2, 3, 4 then J² = −1 (it is 'quaternionic'). Since JD = DJ in these cases, if v is an eigenvector then so is Jv with the same eigenvalue. Moreover, one can check that v and Jv must be linearly independent: suppose the eigenvectors are proportional to each other, i.e., Jv = αv with α ∈ C; then, using the antilinearity of J,

J²v = J(αv) = ᾱ Jv = |α|² v    (12)

which contradicts J²v = −v, since |α|² ≥ 0.

2.2 A Monte Carlo algorithm for matrix geometries

An observable is a real- or complex-valued function f of Dirac operators. The expectation value of f is defined to be

⟨f⟩ = (1/Z) ∫ f(D) e^{−S(D)} dD    (13)

The integral can be approximated as a sum over a discrete ensemble D_1, …, D_M of Dirac operators sampled uniformly,

⟨f⟩ ≈ Σ_{n=1}^{M} f(D_n) e^{−S(D_n)} / Σ_{n=1}^{M} e^{−S(D_n)}    (14)

so that in the limit M → ∞, the average obtained through this discrete sum converges towards the continuum value (13). This convergence can be improved by using a Markov Chain Monte Carlo algorithm. In such an algorithm the Dirac operators are generated with a probability distribution P such that

P(D) ∝ e^{−S(D)}    (15)

This simplifies the expression for the average to

⟨f⟩ ≈ (1/M) Σ_{n=1}^{M} f(D_n)    (16)

and improves convergence by concentrating the sampling on regions which contribute strongly. To generate such an ensemble of Dirac operators the Metropolis-Hastings algorithm is used [16]. In this algorithm a proposed operator D_p is generated from D_n by a move, which will be defined in the next subsection. The proposed operator is accepted as the next element of the chain, D_{n+1} = D_p, if S(D_p) < S(D_n). If this were the only rule for adding new operators to the Markov chain, the code would terminate in any sufficiently deep local minimum. To make it possible to escape local minima, the new operator is also accepted if e^{S(D_n) − S(D_p)} > u, with u a uniformly distributed random number in [0, 1]. If D_p is rejected in both tests then D_{n+1} = D_n. This algorithm generates a Markov chain satisfying detailed balance, which ensures that the probability distribution of the D_n converges to (15) [22].
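The accept/reject step can be sketched as follows. This is an illustrative Python version, not the authors' C++/Eigen code; a quadratic action on a single Hermitian matrix is used as a stand-in for S(D):

```python
import numpy as np

rng = np.random.default_rng(0)

def action(D):
    # Stand-in for S(D): a quadratic spectral action Tr D^2.
    return np.trace(D @ D).real

def metropolis_step(D, step=0.1):
    """One Metropolis-Hastings update: propose D_p = D + dD and accept
    with probability min(1, exp(S(D) - S(D_p)))."""
    n = D.shape[0]
    A = rng.uniform(-1, 1, (n, n)) + 1j * rng.uniform(-1, 1, (n, n))
    dD = step * (A + A.conj().T)      # Hermitian move keeps D_p Hermitian
    D_p = D + dD
    dS = action(D_p) - action(D)
    if dS < 0 or np.exp(-dS) > rng.uniform():
        return D_p, True              # accepted
    return D, False                   # rejected: the chain repeats D

D = np.zeros((5, 5), dtype=complex)
accepted = 0
for _ in range(2000):
    D, ok = metropolis_step(D)
    accepted += ok
rate = accepted / 2000
```

The second test is what allows escapes from local minima: a move that increases the action is still accepted with probability e^{−ΔS}.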

After a sufficient number of moves, the probability distribution for D_n converges towards the desired distribution (15) and becomes independent of the initial state. The states generated before this convergence are not representative of the probability distribution and cannot be used to measure observables. We checked that this burn-in process terminated by starting from different initial configurations and monitoring the convergence of the action.

The code is implemented in C++ and all matrix algebra operations use the open source software library Eigen [14].

2.3 The Monte Carlo move

To construct a Markov Chain on the space of Dirac operators, a move that proposes a new Dirac operator D_p based on the last Dirac operator D_n is needed. The Markov Chain property requires that the next proposed operator can only depend on the current operator D_n. The space of Dirac operators is a vector space, so a simple additive move

D_p = D_n + δD    (17)

with δD a Dirac operator, will always be ergodic, and as long as δD does not depend on past states the Markov property is also satisfied. As shown in section 2.1 the Dirac operator is defined using a choice of Hermitian matrices H_i and anti-Hermitian matrices L_i. Accordingly, δD is defined as a Dirac operator composed from matrices δH_i and δL_i. Each of these is obtained by generating a random matrix M with real and imaginary parts of its matrix elements uniformly distributed in [−1, 1], setting

δH = ℓ (M + M†),    δL = ℓ (M − M†),

where ℓ is a real constant that is determined at the start of each simulation. The value of ℓ determines how 'long' the steps in the configuration space are. A Monte Carlo algorithm has the best thermalisation properties if the acceptance rate a_r, the fraction of accepted moves among all proposed moves, is about 50%. At the beginning of a simulation the acceptance rate is tested and ℓ adjusted, larger if the acceptance rate is too large, smaller if the acceptance rate is too small, until the target acceptance rate is satisfied within a fixed tolerance. The number of attempted Monte Carlo moves is called the Monte Carlo time.

Note that the move for the L_i does not preserve the condition that they are trace-free. However, since the L_i appear only in commutators, the trace decouples and its value does not affect the Dirac operator.
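A sketch of the proposal move in Python (illustrative only; the constant `step` stands in for the tuned step length described above): one random complex matrix yields both a Hermitian and an anti-Hermitian perturbation.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_move(n, step):
    """Generate the perturbation matrices for one additive move:
    dH is Hermitian and dL is anti-Hermitian, both built from a single
    random complex matrix with entries in [-1, 1] + i[-1, 1]."""
    M = rng.uniform(-1, 1, (n, n)) + 1j * rng.uniform(-1, 1, (n, n))
    dH = step * (M + M.conj().T)
    dL = step * (M - M.conj().T)
    # dL is generally not traceless; the trace decouples in commutators,
    # so it does not affect the Dirac operator.
    return dH, dL

dH, dL = random_move(6, 0.05)
```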

2.4 Calculating the action

The expression for the Dirac operator contains commutator terms [L_i, ·] and anti-commutator terms {H_i, ·}. These require the use of the left and right multiplication actions of a matrix A on M_N(C). These are written as N² × N² matrices using the tensor product,

m ↦ A m:  A ⊗ I    (18)
m ↦ m A:  I ⊗ Aᵀ    (19)

where m ∈ M_N(C) is flattened row-by-row into a vector of length N². The Dirac operator can then be written as a matrix that acts on the tensor space V ⊗ C^N ⊗ C^N.
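As an illustration (a Python sketch of eqs. (18)-(19) for the simplest type (1,0) operator D = {H, ·}): flattening m row-by-row, left multiplication by H becomes H ⊗ I and right multiplication becomes I ⊗ Hᵀ, and the spectrum of the resulting N² × N² matrix consists of the pairwise sums of the eigenvalues of H.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4

# Random Hermitian H.
M = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (M + M.conj().T) / 2

I = np.eye(N)
# {H, .} : m -> Hm + mH, written on the flattened m as a Kronecker sum.
D = np.kron(H, I) + np.kron(I, H.T)

lam = np.linalg.eigvalsh(H)
pair_sums = np.sort((lam[:, None] + lam[None, :]).ravel())
spec = np.sort(np.linalg.eigvalsh(D))
# spec agrees with the pairwise sums lam_i + lam_j
```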

The matrix operations needed in the computer code are matrix multiplication, addition and the calculation of eigenvalues. For n × n matrices the run time of these grows like n³, n² and n³ respectively. Therefore it makes sense to write the action in terms of the much smaller matrices H_i and L_i, rather than the full matrix representing D, to accelerate the simulations. The details of this calculation for the geometries investigated are collected in appendix A.

2.5 Observables

Given a Dirac operator D, the eigenvalues can be computed. The two main observables of interest are the i-th eigenvalue λ_i, ordering the eigenvalues from lowest to highest,

λ_1 ≤ λ_2 ≤ λ_3 ≤ …    (20)

and the distribution of eigenvalues at eigenvalue λ,

ρ(λ) = ⟨ Σ_i δ(λ − λ_i) ⟩    (21)

Since eigenvalue calculations are computationally expensive, the eigenvalues are only measured after a fixed number of attempted Monte Carlo moves. This improves run time and reduces the correlation of the measurements. The action and the acceptance rate of moves are recorded at every step to monitor the algorithm.

At later points it will become useful to measure some additional observables that are computed from the eigenvalues, for example Σ_i λ_i². For certain cases it has also proven instructive to examine the non-physical degrees of freedom of the matrices H_i and L_i via their eigenvalues.

The average of any observable can be calculated directly from the set of measurements. However, to estimate the statistical error on our measurements it is necessary to take the correlation between successive states in the Markov Chain into account. The error bars shown on plots of average eigenvalues show the statistical error calculated as

Δλ_i = √( 2 τ_i σ_i² / N_m )    (22)

with σ_i² the variance of the eigenvalue, τ_i the integrated autocorrelation time of the eigenvalue and N_m the number of measurements performed [22].
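A minimal Python sketch of this error estimate (the fixed cut-off `window` on the autocorrelation sum is a crude choice; production code would select it self-consistently):

```python
import numpy as np

def autocorrelation(x):
    """Normalised autocorrelation function of a 1-d measurement series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    acf = np.correlate(x, x, mode="full")[n - 1:]
    return acf / acf[0]

def error_on_mean(x, window=100):
    """Statistical error sqrt(2 * tau * var / n_meas), with tau the
    integrated autocorrelation time estimated up to a fixed window."""
    x = np.asarray(x, dtype=float)
    acf = autocorrelation(x)
    tau = 0.5 + np.sum(acf[1:window])
    tau = max(tau, 0.5)          # tau = 1/2 for uncorrelated data
    err = np.sqrt(2 * tau * np.var(x, ddof=1) / len(x))
    return err, tau

# Uncorrelated samples: tau should come out close to 1/2.
rng = np.random.default_rng(3)
err, tau = error_on_mean(rng.normal(size=4000))
```

For uncorrelated data the formula reduces to the usual standard error of the mean; correlated chains give τ > 1/2 and correspondingly larger error bars.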

In figure 1, autocorrelations from the simulations of one of the geometries are shown at two matrix sizes. The figures show the autocorrelation of both the action S and the smallest eigenvalue of D.

(a) Autocorrelation of the action
(b) Autocorrelation of the smallest eigenvalue
Figure 1: Fall-off of the autocorrelation for the action and the minimum eigenvalue. The blue and yellow lines correspond to the two matrix sizes. The horizontal axis is Monte Carlo time.

The autocorrelation time is determined on the data after the burn-in is completed. In practice the burn-in phase was combined with the adjustment of the step length ℓ; the measurement period was then started a fixed number of moves after the last adjustment to ℓ.

This burn-in and adjustment period takes up most of the simulations. We found that a moderate number of measurements leads to very good results for the eigenvalue distribution and the average eigenvalues. To determine the phase transition, a substantially larger number of measurements was used to ensure that statistical fluctuations were not mistaken for a phase transition.

3 Results for action g2 Tr D²

In this section the Monte Carlo simulations for the simplest possible action, S = g2 Tr D², are examined. The one-dimensional Clifford algebras, types (1,0) and (0,1), are examined first and the results understood using the standard theory of Gaussian matrix models. After this, some numerical results for the two- and three-dimensional types are shown.

3.1 The simplest cases: types (1,0) and (0,1)

The eigenvalues of the Dirac operators (6), (7) can be written in terms of the eigenvalues λ_i of the matrix H or the eigenvalues iμ_i of L. For the (1,0) case one has eigenvalues

λ_i + λ_j    (23)

while for the (0,1) case

μ_i − μ_j    (24)

This follows from the fact that eigenvectors of D are of the form v_i v_j†, with the v_i the eigenvectors of H or L.

The first point is that the (0,1) case has eigenvalue 0 with multiplicity at least N, given by the terms with i = j. This can also be seen directly from the Dirac operator: all matrices in M_N(C) that commute with L have eigenvalue 0, and there are always at least N linearly independent such matrices. It will be seen later that a peak in the eigenvalue distribution at, or near, 0 is a feature of some other random fuzzy spaces.
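This zero-mode count can be checked numerically; a Python sketch of the type (0,1) commutator operator, written via Kronecker products, with the factor i making the operator self-adjoint:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 5

# Random anti-Hermitian traceless L.
M = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
L = (M - M.conj().T) / 2
L -= (np.trace(L) / N) * np.eye(N)

I = np.eye(N)
# D = i[L, .] acting on flattened N x N matrices.
D = 1j * (np.kron(L, I) - np.kron(I, L.T))

eig = np.linalg.eigvalsh(D)
n_zero = int(np.sum(np.abs(eig) < 1e-9))
# n_zero is at least N: the powers of L all commute with L.
# The spectrum is also symmetric about the origin.
sym = np.allclose(np.sort(eig), np.sort(-eig))
```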

The second point is that the spectrum of the (0,1) case is symmetric about the origin, as the eigenvalues μ_i − μ_j and μ_j − μ_i both occur. This is in accordance with its signature s = 1, which means that each Dirac operator has symmetric spectrum. The spectrum of a type (1,0) Dirac operator is typically not symmetric since s = 7 in this case. This means that our simulation gives an eigenvalue distribution that is not exactly symmetric, though it will eventually converge to a symmetric distribution as the Monte Carlo time increases.

For the (1,0) case, using the simplified form of the action (47) one can transform the integral over the Dirac operator into an integral over the Hermitian matrix H,

⟨f⟩ = (1/Z) ∫ f e^{−g2 (2N Tr H² + 2 (Tr H)²)} dH    (25)
Z = ∫ e^{−g2 (2N Tr H² + 2 (Tr H)²)} dH    (26)

The (0,1) case is similar, but one has to take into account the fact that the integration over Dirac operators is an integration over traceless matrices L. Using (51) gives

⟨f⟩ = (1/Z) ∫ f e^{2N g2 Tr L²} dL    (27)
Z = ∫ e^{2N g2 Tr L²} dL    (28)
Figure 2: Average ordered eigenvalues for the types (1,0) and (0,1): the panels show the eigenvalues of the constituent matrices H and L and of the corresponding Dirac operators D.

An example of average ordered eigenvalues generated by the Monte Carlo simulation is shown in figure 2.

These random matrix models are close to the Gaussian Hermitian matrix model [20, 6], which has the similar action

S(M) = (N/2) Tr M²    (29)

with integration over all Hermitian N × N matrices M.

A standard technique in random matrix models is to calculate the joint probability density for the eigenvalues λ_1, …, λ_N. The formula is [21]

P(λ_1, …, λ_N) = C ∏_{i<j} (λ_i − λ_j)² e^{−(N/2) Σ_i λ_i²}    (30)

The terms with the differences of eigenvalues result from the Jacobian for the change of variables from the matrix elements to the eigenvalues. Since this term is small when two eigenvalues are close, this results in the phenomenon of the repulsion of eigenvalues.

The matrix M can be split into traceless and trace parts, and these are statistically independent. It follows that expectation values in the (1,0) model can be calculated as expectation values of observables in the Gaussian Hermitian matrix model that are independent of the trace of M. This is done by writing M = M_0 + (Tr M / N) 1 with Tr M_0 = 0. The action in the (1,0) model transforms directly to the Gaussian Hermitian matrix model by rescaling the trace part by a factor relative to the traceless part. The transformation is

H = (1 / (2√g2)) ( M_0 + (1/√2) (Tr M / N) 1 )    (31)

A standard result (the Wigner semicircle law [26]) is that the analogue of the eigenvalue distribution (21) for the Gaussian Hermitian matrix model converges as N → ∞ to the density of states

ρ(λ) = (1/2π) √(4 − λ²)    (32)

with support [−2, 2].

Figure 3: The semicircle law is compared with the density of states for H or L at increasing matrix sizes.

In our simulations of the (1,0) and (0,1) models we find that the semicircle law is also a good approximation for the eigenvalues of H and L. It is already well-satisfied at moderate N and improves at higher N, as shown in figure 3. The reason for this is that in the Gaussian Hermitian matrix model, the variable Tr M is normally-distributed with variance independent of N, and so adjusting the eigenvalues with a fixed multiple of Tr M / N makes no difference to the density of states in the limit N → ∞.

Another standard result from random matrix theory is that the correlation between different fixed eigenvalues of M vanishes as N → ∞. Thus for large N, the joint probability distribution away from the diagonal λ_i = λ_j is simply the product of the density of states [24]. Therefore, for large N, one can calculate the eigenvalue distribution of the Dirac operator from the semicircle law as a convolution, with a correction for the behaviour of the correlations on the diagonal.

This is shown as follows. Let f be an observable for a random fuzzy space, with f(D) = Σ_i g(λ_i) a sum over the eigenvalues of D. Then assuming a product probability distribution, one has

⟨f⟩ ≈ ∫ g(λ) ρ_D(λ) dλ    (33)

with the density of states ρ_D for the Dirac operator the convolution

ρ_D(λ) = ∫ ρ(u) ρ(λ − u) du    (34)

which is an elliptic integral. This integral is the same for the types (1,0) and (0,1), since ρ(−u) = ρ(u). The Monte Carlo simulation of the eigenvalue density at finite N is shown in figure 4. The continuous line is the curve (34) for the (1,0) case, but a significant correction term is added to it for the (0,1) case.
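The convolution (34) is easy to evaluate numerically; a Python sketch with the semicircle normalised to unit area on [−2, 2]:

```python
import numpy as np

def semicircle(x):
    """Wigner semicircle density, support [-2, 2], unit total area."""
    x = np.asarray(x, dtype=float)
    return np.sqrt(np.maximum(4.0 - x**2, 0.0)) / (2.0 * np.pi)

x = np.linspace(-2.0, 2.0, 4001)
dx = x[1] - x[0]
rho = semicircle(x)

# rho_D(lam) = int rho(u) rho(lam - u) du, discretised as a convolution.
rho_D = np.convolve(rho, rho) * dx     # supported on [-4, 4]
lam = np.linspace(-4.0, 4.0, len(rho_D))

norm = np.sum(rho_D) * dx              # close to 1
peak = lam[np.argmax(rho_D)]           # maximum at the origin
```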

The correction to the product probability density gives a contribution only near the diagonal μ_i = μ_j. The approximate form is an additional contribution to ρ_D of [24]

(35)

For the (0,1) case, this formula contributes significantly near λ = 0, accounting for the gap at the origin in figure 4(b), with a width that scales as 1/N.

(a) Type (1,0)
(b) Type (0,1)
Figure 4: The eigenvalue density for the Dirac operator compared with the convolution of two semicircle functions, with the correction applied in the (0,1) case.

3.2 Higher types

While the spectra for geometries with a one-dimensional Clifford algebra are easy to understand, those with a two-dimensional Clifford algebra are less straightforward. The average eigenvalues and the eigenvalue distributions are shown in figure 5 for a smaller matrix size and in figure 6 for larger matrices. The individual eigenvalues are more easily seen in figure 5. All three types are symmetric about the origin and the third, type (0,2), exhibits eigenvalue doubling, all in accordance with the properties for the signatures s = 6, 0 and 2 derived in section 2.1.

The action is twice the sum of quadratic actions for each matrix H_i or L_i, these quadratic actions being exactly the (1,0) and (0,1) actions previously analysed. In particular, these matrices are statistically independent. The eigenvalues of the H_i, L_i are still approximated well by the semicircle law (32) after appropriate rescaling. However, the main difference in analysing the two-dimensional cases is that the eigenvalues of the Dirac operator are not simply related to the eigenvalues of the H_i, L_i.

Figure 5: The average eigenvalues and the histograms of the eigenvalue distribution for the three types of two-dimensional Clifford algebra, (2,0), (1,1) and (0,2), at the smaller matrix size.
Figure 6: The average eigenvalues and the histograms of the eigenvalue distribution for the three types of two-dimensional Clifford algebra, (2,0), (1,1) and (0,2), at the larger matrix size.

For the (0,2) case the multiplicity of eigenvalue 0 is at least 2N, as shown by examining the Dirac operator

D = γ¹ ⊗ [L1, ·] + γ² ⊗ [L2, ·]    (36)

using a basis so that

γ¹ = iσ¹    (37)
γ² = iσ²    (38)

The Dirac operator acts on the space C² ⊗ M_N(C). All vectors (m1, m2) in this space for which

[L1 + iL2, m1] = 0    (39)
[L1 − iL2, m2] = 0    (40)

have eigenvalue 0. There are at least N linearly independent matrices that commute with L1 + iL2 and N that commute with L1 − iL2, hence at least 2N eigenvalues equal to 0 for the type (0,2) geometry. Just as in the (0,1) case, there is a gap in the eigenvalue spectrum around the spike at 0. This shows there is eigenvalue repulsion for this Dirac operator also, though we do not have a theoretical understanding of this phenomenon.

The types (2,0) and (1,1) also have a feature at the origin. The density of eigenvalues is sharply lower in a narrow dip at the origin and there is a somewhat wider upward spike around this. This is shown in figure 7, which zooms in on a region around eigenvalue 0.

(a) Average eigenvalues
(b) Eigenvalue distribution
Figure 7: Zooming in to a region near eigenvalue 0.

The gap in the middle is further evidence of eigenvalue repulsion, this time between an eigenvalue λ and the opposite eigenvalue −λ that is required by the symmetry of the spectrum of D about 0.

Figure 8: The distribution of single eigenvalues at different matrix sizes, for the six types.

The numerical results also indicate that the range of the eigenvalues remains unchanged under the change of matrix size, and the distribution becomes smoother, appearing to converge to a smooth limiting distribution in the same way as for random matrices. Another similar feature is that for larger N the fluctuation of each individual eigenvalue becomes smaller. This can be seen in figure 8. The leftmost bump in each plot is the smallest eigenvalue while the rightmost bump is the largest; the eigenvalues in between were chosen symmetrically and include the centre-most eigenvalues.

Figure 9: The average eigenvalues and the eigenvalue distribution for type (0,3).

The eigenvalue distribution for the type (0,3) case is plotted in figure 9. This appears to be smooth at the origin, similar to the (1,0) case. The common property of these cases is that the Dirac operators do not have a symmetric spectrum. Thus a small eigenvalue does not have to be close to any other eigenvalue. There is nothing special about the origin, and in particular, the eigenvalue repulsion hypothesis does not lead to any special behaviour here.

4 Results for actions with a Tr D⁴ term

The Tr D⁴ term in the action leads to interactions between the matrices that compose the Dirac operator. An extreme case of this is type (0,3), in which a four-point interaction of all four matrices L1, L2, L3 and H is present. These terms make it harder to understand the system analytically; for the simulations, however, they are no obstacle.

The simple action Tr D⁴ leads to behaviour very similar to that for the action Tr D². This is shown in figure 10. Some characteristics, like the shoulders for types (2,0) and (1,1), are more pronounced, but the overall shape is quite similar.

Figure 10: The eigenvalue distribution for the action Tr D⁴, for the six types.

Combining the two terms together gives the action

S(D) = g2 Tr D² + g4 Tr D⁴    (41)

For positive values of g2 the behaviour of the numerical simulations is somewhere between the Tr D² case and the Tr D⁴ case and does not show qualitatively new features. However, when g2 is negative this is a symmetry-breaking potential with two minima, shown in figure 11.

Figure 11: The potential for a sequence of decreasing negative values of g2. The lines are coloured from red through to yellow.

The question of how the eigenvalues behave in this case is interesting and a variety of behaviours is exhibited depending on the type of the gamma matrices. This is shown in figure 12.

Figure 12: The eigenvalue distributions for the six types at a sequence of decreasing values of g2. The lines are coloured from red through to yellow.

The different types are described here for values of g2 decreasing from 0.

Type (1,0): Two peaks start to form, then grow and separate sharply at a critical value of g2, leaving the centre of the distribution empty. Since the Dirac operator is not symmetric, the Monte Carlo simulation can and does settle in just one of the peaks, though one expects that the Markov Chain would eventually explore both peaks equally given a long enough run.

Type (0,1): Two peaks form and grow slowly and steadily. The central part of the distribution remains. One can understand this from the fact that the eigenvalues of L settle into two peaks, and since L is traceless, the favoured configuration has the same number of eigenvalues in each peak. The differences between eigenvalues of L in the same minimum remain small, giving the central peak in the distribution for D. The eigenvalues that are exactly 0 are also still present.

Type (2,0): Two peaks develop at small |g2| and grow until the central part of the distribution vanishes suddenly at a critical value of g2.

Type (1,1): The behaviour is much like type (2,0): two peaks develop and grow until the central part of the distribution vanishes suddenly at a critical value of g2.

Type (0,2): This is the most mysterious case. Two slight peaks develop but the eigenvalues do not separate into two peaks for the whole of the range of g2 tested. Instead some further substructure to the eigenvalue distribution develops.

Type (0,3): This is similar to the (1,0) case, with the sharp change occurring at a critical value of g2. In figure 12(f) the Markov Chain has to a certain degree explored both minima.

Figure 13: The mean of the order parameter and the autocorrelation time as g2 varies, for the six types.

These descriptions can be compared to the plots of the order parameter

Tr D²    (42)

and the autocorrelation time, which is usually expected to increase near a continuous phase transition due to the long-range order ('critical slowing down'). These are plotted in figure 13. One can see that there is good evidence for a phase transition for the types (1,0), (2,0), (1,1) and (0,3). In these plots, the order parameter changes gradient at around the values of g2 described above, and the autocorrelation time of the order parameter has a peak around this value also. It is difficult to see any clear signal from the plots for types (0,1) and (0,2).

Figure 14: The fraction F measuring the square of the proportion of D that is in the trace part of H, as g2 varies, for the four types containing a matrix H. The plot for type (2,0) shows the fraction for H1 (red), H2 (green) and both combined (blue).

It is remarkable that the types for which the evidence for a phase transition is clearest are the ones where the Dirac operator contains an anti-commutator with a Hermitian matrix . Unlike the matrices, the trace of appears to play a crucial role. The observable

(43)

measures the strength of , calculated as a square so that positive and negative values do not cancel, and as a fraction of the total strength of . The averages of this are plotted in figure 14. In the case of , there are two matrices and , and the for both combined is

(44)

In both cases, if the matrices are pure trace. The plots show that develops a large expectation value at the phase transition. In the case of the Monte Carlo data used for the plots developed a preference for rather than but this is of no significance due to the rotational symmetry between the two (we have checked this with additional simulations and found that the trace degree of freedom is in general distributed randomly between the two matrices). The sum of squares is the correct rotationally-invariant observable.
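As a sketch of an observable of this kind (equation (43) is not reproduced above, so the precise normalisation used here is an assumption), one can measure the squared trace part of a Hermitian matrix as a fraction of its total strength:

```python
import numpy as np

def trace_fraction(H):
    """Fraction of the 'strength' of a Hermitian matrix H carried by its
    trace part, squared so that positive and negative values cannot
    cancel.  Assumed form: F = (tr H)^2 / (N tr H^2), which equals 1
    for a pure-trace matrix and is small for a generic matrix."""
    N = H.shape[0]
    return np.trace(H).real ** 2 / (N * np.trace(H @ H).real)

N = 50
print(trace_fraction(np.eye(N)))  # pure trace -> 1.0
```

For two matrices, the rotationally invariant version is simply the sum of the two squared trace parts, mirroring the combined observable (44).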

5 Conclusion

A model of random geometry has been presented here as random Dirac operators in non-commutative geometry. The integrals can be interpreted as multi-matrix models, but with a new type of observable, namely the eigenvalues of the Dirac operator. The one-dimensional cases can be understood using theoretical results on random matrices, but the higher-dimensional types are less tractable and will require further study to obtain analytic results. Numerical results have been presented showing various phenomena that depend strongly on the type of gamma matrices used, particularly on whether the spectrum of a Dirac operator for that signature is symmetric about the origin.

From the numerical results it is clear that some features are similar to the properties of the eigenvalues of random matrices: the eigenvalue distributions appear to converge in the large limit and the dispersion of individual ordered eigenvalues decreases; also there is some evidence of a degree of eigenvalue repulsion at the origin.
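The comparison with random matrix theory can be made concrete with a standard numerical experiment (illustrative only, not a computation from the paper): the eigenvalue distribution of a GUE matrix converges to the Wigner semicircle as the matrix size grows, which is the kind of large-size convergence referred to above.

```python
import numpy as np

def gue_eigenvalues(N, rng):
    """Eigenvalues of an N x N GUE matrix, normalised by sqrt(N) so the
    spectrum converges to the Wigner semicircle on [-2, 2] as N grows."""
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    H = (A + A.conj().T) / 2.0  # Hermitian part
    return np.linalg.eigvalsh(H) / np.sqrt(N)

rng = np.random.default_rng(1)
ev = gue_eigenvalues(200, rng)
print(ev.min(), ev.max())  # close to -2 and 2
```

The shrinking dispersion of individual ordered eigenvalues with increasing matrix size can be checked in the same way by repeating the draw and tracking, say, the smallest eigenvalue.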

The most interesting results are for the quartic action with negative , so that the potential is of symmetry-breaking type. For some types, the eigenvalue spectrum changes suddenly when reaches a critical negative value and the observable is a good order parameter for this transition. This is taken as a strong indication that a sharp phase transition would occur in the large limit. The types where this transition is clear are those where the Dirac operator contains at least one term involving an anti-commutator with a random Hermitian matrix . Then the trace of develops a large expectation value, becoming the dominant contribution to after the transition. This cannot happen with commutator terms, as the trace of the random matrix decouples in that case.

For generic , the eigenvalue distribution of is not a good approximation to the behaviour for the Dirac operator on any fixed (commutative) Riemannian manifold (5), except that one could possibly argue that the distribution is approximately constant for some ranges (e.g. figure 10(f), which looks like a one-dimensional manifold). The exception to this is near the phase transition, where the curves in figure 12 do appear to have the right power-law behaviour. Two of these distributions are highlighted in figure 15, showing the types and at values of just below the value for the phase transition.

These are compared with the eigenvalue distribution for the fuzzy sphere from [5], shown in figure 15(c). This is a type spectral triple, having signature , and has exactly the same spectrum as the Dirac operator on the Riemannian round but with a maximum eigenvalue cut-off and fermion doubling. The distributions are remarkably similar, the main differences being the gap at the origin, the size of which depends on the distance from the phase transition, and the fact that the fuzzy sphere has exactly integer eigenvalues with multiplicity, due to its rotational symmetry. The feature that is common to the plots is the approximately linear increase of the eigenvalue density with eigenvalue that is characteristic of Riemannian manifolds of dimension two, i.e., in (5). The simulations show that decreasing further increases the gap in the middle of the spectrum whereas the middle of the spectrum fills up for values of greater than the critical value.
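The dimension-two behaviour noted above can be checked numerically from the known Dirac spectrum of the round 2-sphere, eigenvalues ±(k+1) with multiplicity 2(k+1), truncated at a cut-off as for the fuzzy sphere. For a d-dimensional manifold the eigenvalue counting function grows like N(L) ~ L^d, so a log-log fit estimates the spectral dimension. A sketch (the cut-off K = 40 and the fitting range are illustrative choices):

```python
import numpy as np

# Truncated Dirac spectrum of the round 2-sphere: eigenvalue +/-(k+1)
# with multiplicity 2(k+1), for k = 0, ..., K-1.
K = 40
lam = np.concatenate([np.repeat(k + 1, 2 * (k + 1)) for k in range(K)])
lam = np.concatenate([lam, -lam])

# Counting function N(L) = #{ |lambda| <= L }, evaluated away from the
# smallest eigenvalues to reduce finite-size effects in the fit.
L = np.arange(K // 2, K + 1, dtype=float)
NL = np.array([(np.abs(lam) <= x).sum() for x in L])

# Log-log slope of N(L): the spectral dimension estimate.
slope = np.polyfit(np.log(L), np.log(NL), 1)[0]
print(slope)  # close to 2, as expected for a two-dimensional geometry
```

The approximately linear growth of the eigenvalue density (slope 2 of the counting function) is the feature shared by the distributions near the phase transition and the fuzzy sphere spectrum in figure 15.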

(a) Type ,
(b) Type ,

(c) Type (1,3) Fuzzy
Figure 15: Eigenvalue distributions near the phase transition compared with the fuzzy sphere. Matrix size .

These results are somewhat preliminary and we have not yet carried out a systematic study of the phase transition.

The study of random non-commutative geometries gives insight into the closely related problem of the quantization of this fascinating theory. A quantized non-commutative space is a potential candidate for a quantum theory of gravitational interactions and would allow a better understanding of fundamental interactions. Independently of these physical applications, it is also an interesting modification of the well-known matrix models, introducing non-trivial interactions and observables among several matrices. The results reported here will be a basis for further investigations into the phase transition, the continuum limit, and other possible actions on this space of geometries.

6 Acknowledgements

The work of JWB on this project was supported by STFC Particle Physics Theory Consolidated Grant ST/L000393/1, and LG was supported by funding from the European Research Council under the European Union Seventh Framework Programme (FP7/2007-2013) / ERC Grant Agreement n.306425 “Challenging General Relativity”.

This research was supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Economic Development and Innovation.

The dissemination of this work was aided by EU COST Action MP1405 ‘Quantum structure of spacetime’.

Appendix A Dirac operators for the fuzzy geometries we examined

In this appendix, matrices will be Hermitian and matrices anti-Hermitian. The matrices are not assumed to be traceless, though the actions are independent of the trace part of these matrices. The bracketing convention for the trace of an expression is and , but .

a.1 The geometry

(45)
(46)
(47)
(48)

a.2 The geometry

(49)
(50)
(51)
(52)

a.3 The geometry

(53)
(54)

The gamma matrix trace identities are and and .

(55)
(56)
(57)
(58)
(59)
(60)
(61)

a.4 The geometry

(62)
(63)

The gamma matrix trace identities are , where and and .

(64)
(65)
(66)
(67)
(68)
(69)
(70)

a.5 The geometry

(71)
(72)

The gamma matrix trace identities are and and .

(73)
(74)
(75)
(76)
(77)