Commuting Simplicity and Closure Constraints
for 4D Spin Foam Models
Abstract
Spin Foam Models are supposed to be discretised path integrals for quantum gravity constructed from the Plebanski-Holst action. The reason for there being several models currently under consideration is that no consensus has been reached on how to implement the simplicity constraints.
Indeed, none of these models strictly follows from the original path integral with commuting B fields; rather, by some non standard manipulations one always ends up with non commuting B fields, and the simplicity constraints become in fact anomalous, which is the source of the several inequivalent strategies for circumventing the associated problems.
In this article, we construct a new Euclidean Spin Foam Model by standard methods from the Plebanski-Holst path integral with commuting B fields, discretised on a 4D simplicial complex. The resulting model differs from the current ones in several aspects, one of them being that the closure constraint needs special care. Only when dropping the closure constraint by hand, and only in the large spin limit, can the vertex amplitudes of this model be related to those of the FK Model, but even then the face and edge amplitudes differ.
Interestingly, a noncommutative deformation of the variables leads from our new model to the Barrett-Crane Model in the case of .
1 Introduction
Loop Quantum Gravity (LQG) is an attempt at a background independent, nonperturbative quantization of 4-dimensional General Relativity (GR); for reviews, see [1, 2, 3]. It is inspired by the formulation of GR as a dynamical theory of connections [4]. Starting from this formulation, the kinematics of LQG is well studied and results in a successful kinematical framework (see the corresponding chapters in the books [1]), which is also unique in a certain sense [5]. However, the dynamics of LQG is still largely open. There are two main approaches to the dynamics of LQG: (1) the operator formalism of LQG, which follows the spirit of Dirac quantization of constrained dynamical systems and performs a canonical quantization of GR [6, 7]; (2) the path integral formulation of LQG, which is currently understood in terms of Spinfoam Models (SFMs) [3, 10, 11, 12, 13]. The relation between these two approaches is well understood in the case of 3-dimensional gravity [14], while for 4-dimensional gravity the situation is much more complicated and there are some attempts [15] at relating the two approaches.
The present article is concerned with the following issue in the framework of spinfoam models. The current spinfoam models are mostly inspired by the 4-dimensional Plebanski formulation of GR [16] (the Plebanski-Holst formulation when including the Barbero-Immirzi parameter ), whose action reads
(1.1) 
where is an so(4)-valued 2-form field, is the curvature of the so(4) connection field, and is a densitized tensor, symmetrized under interchanging and , and traceless . For illustrative purposes we consider only Euclidean GR in the present article; however, the lessons learnt will extend also to the Lorentzian theory. One can show that the equations of motion implied by the Plebanski-Holst action are equivalent to the Einstein equations of GR. Moreover, if we consider formally the following path integral partition function of the Plebanski-Holst action and perform the integral of
(1.2) 
we obtain the partition function of BF theory [17] whose paths are, however, constrained by 20 Simplicity Constraint equations
(1.3) 
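For orientation, the count of 20 equations follows from the symmetries of the Lagrange multiplier described above; a sketch of the counting (using that an antisymmetric index pair in four dimensions takes 6 values):

```latex
% The multiplier is antisymmetric in each index pair and symmetric under
% interchange of the two pairs: a symmetric matrix on the 6-dimensional
% space of index pairs. The single trace condition removes one component:
\frac{6\,(6+1)}{2} \;-\; 1 \;=\; 21 - 1 \;=\; 20\,.
```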
The point of this formulation is of course that the path integral of BF theory has been formulated as a concrete spinfoam model (subject to the divergence issue, see the corresponding chapters in [1]), and thus the idea is to rely on those results and to implement the simplicity constraints properly into the partition function of BF theory. We remark that even for Euclidean gravity, the partition function (1.2) is unlikely to be derivable from the canonical formulation because of the presence of second class constraints which affect the choice of the measure in (1.2); see the first and third references in [15] for a detailed discussion. Since in current spin foam models the proper choice of measure is also regarded as a nontrivial problem, and as we want to draw attention to a different issue with the current spin foam models, we will also not deal with the measure issue in this article and leave this for future research.
The partition function of BF theory, after discretization on a 4-dimensional simplicial complex and its dual complex , can be expressed as a sum over certain spinfoam amplitudes. Here a spinfoam amplitude is obtained by (1) assigning an SO(4) unitary irreducible representation to each triangle of (we label the representation by a pair for each triangle); (2) assigning a 4-valent SO(4) intertwiner to each tetrahedron of (we label the intertwiner by a pair for each tetrahedron). Then the partition function of BF theory can be written as
(1.4) 
where the symbol is the 4-simplex/vertex amplitude corresponding to the 4-simplex . The partition function turns out to be formally independent of the triangulation . Clearly, as shown explicitly in Eq.(1.2), in order to obtain the partition function for quantum gravity as a sum of spinfoam amplitudes, one has to impose the simplicity constraint in the BF theory measure. When doing that, the resulting partition function is no longer triangulation independent (as it should not be, because GR is not a TQFT at the classical level; triangulation independence is a feature of the quantization of classical TQFTs, which should not be expected in the quantization of gravity), and thus one should in fact consider all possible discretizations and not only simplicial ones. This is also necessary in order to make contact with the canonical LQG Hilbert space, which contains all possible graphs and not only 4-valent ones. This has recently been emphasised in [8, 9] and the current spin foam models have already been generalised in that respect. We believe our model to be generalisable as well, but will not deal with this aspect in the present work as this would draw attention away from our main point.
Essentially, the very method of imposing the simplicity constraint defines the corresponding candidate spinfoam model for quantum gravity, which is why its proper implementation deserves so much attention. Currently the three most studied spinfoam models for quantum gravity (in the Plebanski or Plebanski-Holst formulation) are the Barrett-Crane Model [10], the EPRL Model [11], and the FK Model [12]. These three a priori different models are defined by three different ways of imposing the simplicity constraint on the measure of the BF partition function . We will briefly review these different methods of imposing the simplicity constraint in what follows.
First of all, in the context of the discretized path integral, the simplicity constraint also takes a discretized form. For each triangle we define an so(4) Lie algebra element which corresponds to the integral of the 2-form over the triangle . Then, in terms of the for each 4-simplex, the discretised simplicity constraints read
(1.5)  
(1.6) 
The Barrett-Crane Model, the EPRL Model, and the FK Model all explicitly impose the first type of simplicity constraint, Eq.(1.5), called the tetrahedron constraint, in some way on the spinfoam partition function of BF theory. On the other hand, all of them replace the second type of simplicity constraint, the 4-simplex constraint Eq.(1.6), by the so-called Closure Constraint
(1.7) 
It is not difficult to see that the closure constraints together with the tetrahedron constraints imply the 4-simplex constraints, but not vice versa. Thus, strictly speaking, imposing the closure constraint constrains the BF measure more than the classical theory would prescribe. It is unknown, and also beyond the scope of the present paper, whether this replacement is harmless or in conflict with the classical theory. In this paper, as we are merely interested in comparing the standard way of imposing the simplicity constraints (commuting B fields) with the non standard methods defining the BC, EPRL and FK models (non commuting B fields), we proceed as in those other spin foam models and also replace the 4-simplex constraint by the closure constraint. To distinguish these two different types of constraints, in what follows we use the terminology “simplicity constraint” for Eq.(1.5) and “closure constraint” for Eq.(1.7). Notice that the BC Model, EPRL Model, and FK Model argue that the closure constraint is “automatically” implemented in their spinfoam amplitudes. We will come back to this argument in a moment. Because of that argument, the closure constraint is not analysed further in any of these models. The proper implementation of the simplicity and closure constraints is one of the most active research areas in the spin foam community and there are many issues that have yet to be understood [18].
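A sketch of the first implication, with the 4-simplex vertices labelled 0..4 and triangles labelled by vertex triples (orientation conventions are schematic here): for two triangles that share no tetrahedron, closure in a tetrahedron containing one of them relates their volume contraction to contractions of pairs that do share a tetrahedron, where the tetrahedron constraint applies.

```latex
% Closure in the tetrahedron t=(0234):  B_{034} = -(B_{023}+B_{024}+B_{234}).
% Contracting with B_{012} and using the tetrahedron (cross-simplicity)
% constraint for the pairs (012,023) and (012,024), which share a
% tetrahedron and hence have vanishing contraction, leaves
\epsilon_{IJKL}\, B^{IJ}_{012}\, B^{KL}_{034}
  \;=\; -\,\epsilon_{IJKL}\, B^{IJ}_{012}\, B^{KL}_{234}\,.
% Iterating such relations ties all the volume contractions of a 4-simplex
% to a common value up to orientation signs, which is the content of the
% 4-simplex constraint Eq.(1.6).
```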
For both the Barrett-Crane Model and the EPRL Model, the strategy for imposing the simplicity constraint is the following: In order to take advantage of the knowledge of the BF spinfoam model, one formally takes the delta distribution on the B variables out of the integral over B by a standard trick known from ordinary quantum field theories: one (formally) just has to replace by , because the integrand of the B integral is of the form . Due to the discretization, upon which is replaced by a holonomy around a face of the dual triangulation and B by an integral over a triangle of the triangulation, can be rewritten in terms of the right invariant vector fields on the copy of corresponding to the given holonomy, with holonomy dependent coefficients. One now argues that these coefficients can be replaced by their chromatic evaluation (setting the holonomy equal to unity), because the integration over enforces the measure on the space of connections to be supported on flat ones. Clearly, this argument is not obviously watertight because may not be supported at . In fact it should not be if we are interested in gravity rather than BF theory. See the chapter on spinfoams in the second reference of [1] for more details. In any case, this way of proceeding leads to replacing the commutative derivations by the non commutative right invariant vector fields .
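The trick can be illustrated by a one-dimensional analogue (a schematic identity, not the paper's precise expression): a function of the integration variable is traded for derivatives acting on the resulting delta distribution,

```latex
\int_{\mathbb{R}} \mathrm{d}B\; f(B)\, e^{\,i B F}
 \;=\; f\!\Big(-i\,\frac{\partial}{\partial F}\Big)
       \int_{\mathbb{R}} \mathrm{d}B\; e^{\,i B F}
 \;=\; 2\pi\, f\!\Big(-i\,\frac{\partial}{\partial F}\Big)\,\delta(F)\,.
```

In the spin foam setting the role of F is played by a holonomy and the derivative becomes a right invariant vector field, which is precisely where the noncommutativity enters.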
An alternative argument that has been given is the following: The kinematical boundary Hilbert space of the spin foam path integral should be the canonical LQG Hilbert space (restricted to the 4-valent boundary graph of the given simplicial triangulation), and here the field would be quantised as , where is the underlying connection. On functions of holonomies this again becomes a right invariant vector field labelled by the triangles dual (in the 3D sense) to the corresponding boundary edges, which in turn correspond to the faces of the dual triangulation dual (in the 4D sense) to those triangles. The physical boundary Hilbert space should therefore be the kernel of those quantised boundary simplicity constraints. In order to write the corresponding spin foam model, one has to define the projector onto that physical Hilbert space. To do this properly, one should canonically quantize Plebanski-Holst gravity, identify all the first and second class constraints, and define the projector via the Dirac bracket and group averaging, which then leads to a spin foam path integral. How complicated this becomes if one really performs all the necessary steps is outlined in [15]. However, this is not what is done in [11]. The first observation is that since the spin foam path integral naturally involves SO(4), the kinematical boundary Hilbert space is naturally also in terms of SO(4) spin network functions. One now studies the restrictions that the simplicity constraints impose on the spins and intertwiners of the boundary SO(4) spin network functions. The detailed structure of these restrictions suggests a natural one-to-one map with spin network states in the canonical SU(2) Hilbert space. Finally, using locality arguments, one conjectures that these restrictions should hold not only on the boundary but also in the bulk of the BF SO(4) spin foam model. See [35] for a particularly simple and clear exposition of this procedure.
This procedure has recently been criticised in [18] on the grounds that the BF symplectic structure and the LQG symplectic structure have been wrongly identified in the aforementioned identification map.
In any case, whether or not the map is the correct correspondence, the simplicity constraints were again quantised as non commuting (anomalous) constraints. If one understands the kernel in the strong operator topology, one obtains the BC model; if one understands it in the weak operator topology (Gupta-Bleuler procedure), one obtains the EPRL model. Because of the anomaly, imposing the constraint operators strongly apparently makes the Barrett-Crane Model lose some important information about nondegenerate quantum geometry [19]. Imposing the constraints weakly is less restrictive and thus may lead to a better behaved model. In more detail, first of all the quadratic expression of the simplicity constraint Eq.(1.5) is replaced by a linearized expression. It is given by asking that for each tetrahedron , there exists a unit vector , such that
(1.8) 
The equivalence of the linearized simplicity constraint Eq.(1.8) with the original simplicity constraint Eq.(1.5) will be reviewed in Section 3 (in the gravitational sector of the solution). In the original construction of the EPRL spinfoam model in [11], the unit vector is gauge fixed to be , and a “Master constraint” is defined (to replace the cross-diagonal part of the simplicity constraint Eq.(1.5)), where from Eq.(1.8). The corresponding “Master constraint operator” is defined by replacing by right invariant derivatives. This Master constraint solves the problem of noncommutativity/anomaly of the quantum simplicity constraint to a certain extent, because a single Master constraint replaces all the cross-diagonal components of Eq.(1.5). Moreover, the diagonal part of Eq.(1.5) and this Master constraint operator restrict the Hilbert space spanned by the 4-valent SO(4) spin networks to a subspace which can be identified with 4-valent SU(2) spin networks and thus can be embedded into the kinematical Hilbert space of LQG. For each of these SU(2) spin networks, the SU(2) unitary irreducible representation labelled by has the following relation with the original SO(4) representations on all the boundary edges dual to the boundary triangles
(1.9) 
Here the Barbero-Immirzi parameter can only take discrete values, i.e.
(1.10) 
More importantly, the recent results in [20, 9] show that the boundary Hilbert space used in the EPRL Model solves the linear version of the simplicity constraint Eq.(1.8) (and the closure constraint Eq.(1.7)) weakly, i.e. the matrix elements (with respect to the boundary SO(4) Hilbert space) of the constraint operators vanish on the space of solutions
(1.11) 
in contrast to the strong implementation of the constraints in the Barrett-Crane Model. Finally, the (Euclidean) EPRL spinfoam partition function is expressed by
(1.12) 
where for each spinfoam amplitude, an SU(2) unitary irreducible representation is assigned to each triangle , satisfying the relation Eq.(1.9), and an SU(2) 4-valent intertwiner is assigned to each tetrahedron . Here
(1.13) 
is the 4-simplex/vertex amplitude for the EPRL Model, where are fusion coefficients defined in [11].
The FK Model follows a different strategy to impose the simplicity constraint, namely by using coherent states for the SU(2) group [21, 22]. Given a unitary irreducible representation space of SU(2), the coherent state is defined by
(1.14) 
We then immediately have the resolution of identity on
(1.15) 
This coherent state has a certain geometrical interpretation, which can be seen by computing the expectation value of the su(2) generators ( are the Pauli matrices)
(1.16) 
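This expectation value can be checked numerically in the fundamental representation j = 1/2, where the coherent state is simply the rotated highest weight vector g|1/2, +1/2⟩ (a minimal sketch using NumPy; the conventions and helper names here are ours):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

def su2_element(theta, nx, ny, nz):
    """g = exp(-i theta/2 n.sigma), a generic SU(2) element."""
    n_sigma = nx * sx + ny * sy + nz * sz
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * n_sigma

g = su2_element(0.7, 0.0, 1.0, 0.0)   # rotation by 0.7 rad about the y-axis

# Coherent state |1/2, g> = g |1/2, +1/2>: rotated highest weight vector
psi = g @ np.array([1.0, 0.0], dtype=complex)

# Expectation values <psi| J_i |psi> with J_i = sigma_i / 2
J_expect = np.array([np.real(psi.conj() @ (s / 2) @ psi) for s in paulis])

# SO(3) rotation matrix of g: R_ij = tr(sigma_i g sigma_j g^dagger) / 2
R = np.array([[np.real(np.trace(si @ g @ sj @ g.conj().T)) / 2
               for sj in paulis] for si in paulis])

# The expectation value is a vector of length j = 1/2 pointing along R e_z
print(np.linalg.norm(J_expect))                                    # ~0.5
print(np.allclose(J_expect, 0.5 * R @ np.array([0.0, 0.0, 1.0])))  # True
```

The same structure holds in any spin-j representation: the vector has length j and direction given by the SO(3) rotation of the reference axis.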
If we identify the Lie algebra su(2) with , we can see that the coherent state describes a vector in with length ; its direction is determined by the action of on a unit reference vector (the direction of ). From the expression we see that can be parameterized by the coset . In addition, the integral in the resolution of identity is essentially over . It is not hard to show that the (Euclidean) BF partition function can be expressed in terms of the coherent states (we write for each SO(4) element and for an SO(4) unitary irreducible representation)
(1.17)  
where is an SO(4) holonomy along the edge from the center of the 4-simplex to the center of the tetrahedron . The strategy for imposing the simplicity constraint in the FK Model is then to use the interpretation (1.16) of the coherent state labels as the selfdual/antiselfdual parts of the so(4) variable associated with a triangle seen from a tetrahedron . (More precisely, we know that the previously defined can be decomposed into selfdual and antiselfdual parts . The interpretations of , are as the parallel transport of from the center of the triangle to the center of the tetrahedron , i.e. , where is the holonomy along the edge from the center of the triangle to the center of the tetrahedron .) That is, the simplicity constraint is imposed on the coherent state labels, which results in the following restrictions:
(1.18) 
where is some normal to , takes values in the U(1) subgroup of SU(2) generated by and . In more detail, the proposal is then to simply replace in (1.17) by these expressions and the Haar measure by the Haar measure . We emphasize that this is an interesting but non standard procedure: while the identification of the coherent state labels with the so(4) variables is certainly well motivated, the resulting expression does not arise from integrating out the fields in the presence of the delta distributions enforcing the simplicity constraints. Rather, in (1.17) the B fields have already been integrated out. To restrict the measure and integrand by hand afterwards according to (1.18) is not obviously equivalent to the standard procedure of solving the distributions. One would hope that the two procedures coincide in the semiclassical or “large” limit [23]. Indeed, the “large” limit result in Section 4 will support this expectation. Finally, the spinfoam partition function of the FK Model coincides (at least up to a slight change of edge amplitude) with the EPRL partition function when the Barbero-Immirzi parameter . However, when or , the FK partition function is rather different from the EPRL partition function. Here we only show explicitly the 4-simplex/vertex amplitude of the FK model when or
(1.19) 
Here, although the relation between
(1.20) 
is the same as in the EPRL Model, in the FK model for or there are some additional degrees of freedom associated with the label , which are the values of spins from the coupling of and , i.e. could take values in . The final partition function is obtained by summing over , , and with some measure factors (see [12] for details).
In the previous three paragraphs, we briefly revisited the main strategies for imposing the simplicity constraint in the Barrett-Crane, EPRL and FK Models. We have seen that these in general different spinfoam models come from two different ways of imposing the simplicity constraint: the Barrett-Crane and EPRL Models quantize the simplicity constraint as operators and impose them (strongly or weakly) on the boundary spin networks, while the FK Model imposes the constraint on the coherent state labels. However, as we have reviewed, none of the three models is derived from the original path integral formula Eq.(1.2) of the Plebanski action (or the discretized version of the path integral) without using some non standard methods. Therefore a natural question arises: Is any of those three spinfoam models consistent with the path integral formula Eq.(1.2) and its discretized version? This question is non trivial because in all three types of models one deals with non commutative B fields and simplicity constraints as operators on some Hilbert space, while the original path integral is in terms of commutative c-number variables, so that anomalies cannot arise. Because of this issue, it is interesting to investigate what kind of spinfoam model we obtain if we start from the (discretization of the) path integral formula Eq.(1.2) with commutative variables. It is also interesting to find possible bridges linking the (discretization of the) path integral formula Eq.(1.2) with commutative variables to the existing spinfoam models with noncommutative variables.
In this article, we consider the discretization of the path integral formula Eq.(1.2), which will be Eq.(2.1). As announced in [36], in contrast to the Barrett-Crane, EPRL, and FK Models, we always treat the variables as commutative c-numbers. The simplicity constraint (and closure constraint) is (are) imposed by c-number delta functions inserted into the path integral formula, which one obtains by integrating over the Lagrange multiplier and which constrain the path integral measure. In our concrete analysis in Section 4, the most important difference between our derivation and the derivation in any of the Barrett-Crane, EPRL, and FK Models is the following: in any of the Barrett-Crane, EPRL, and FK Models, one always imposes the respective version of the simplicity constraint on the BF spinfoam partition function Eq.(1.4) or (1.17) after the integration over . This feature is essentially the reason why it is difficult to find a relation between the simplicity constraint imposed in any of the Barrett-Crane, EPRL, and FK Models and the simplicity constraint in the path integral formula Eq.(1.2). By contrast, our derivation in Section 4 will not start from the spinfoam partition function of BF theory; instead we impose the delta function of the simplicity constraint (and closure constraint) before the integration over , and we will see that solving these constraints gives rise to a nontrivial modification of the path integral measure. There were early works analyzing the simplicity constraint in this direction, see e.g. [26].
As also announced in [36], regarding the variables as commutative c-numbers also makes the treatment of the closure constraint different. We know that the closure constraint Eq.(1.7) is necessary in order that the full set of simplicity constraints Eq.(1.5) and (1.6) is satisfied. In the Barrett-Crane Model the closure constraint is argued to be automatically satisfied by the SO(4) gauge invariance of the vertex amplitude. However, as shown in [36], this is only true after performing the Haar measure integrals, which essentially project everything onto the gauge invariant sector. It is clear that the closure constraint must be imposed before performing the integral over the connections. In the EPRL Model, the argument is improved in that both the simplicity constraint and the closure constraint vanish weakly on the EPRL boundary Hilbert space [20]. Moreover, in [24] it is shown that in both the EPRL and FK Models, the closure constraint can be implemented in terms of geometric quantization and by the commutativity of quantization and phase space reduction [25]. As defined, an additional closure constraint would be redundant for both the EPRL and FK Models, since they are already on the constraint surface of the closure constraint (if one interprets the coherent state labels to be the variables), although the original definitions of both models did not impose the closure constraint explicitly. We feel that this is again due to the fact that the Haar integrals have already been performed. In our analysis we find that the implementation of the closure constraint gives nontrivial restrictions on the measure.
In order to understand what happens when one ignores the closure constraint, and to follow more closely the procedure of the existing spin foam models, in Section 4 we first consider a simplified partition function in which the delta functions of the closure constraint are dropped (as discussed in [26]), and derive an expression of as a sum over all possible spinfoam amplitudes (constrained only by the simplicity constraints). Then we also compute the true partition function with the closure constraint implemented. When we compare with the true partition function , we find that the closure constraint nontrivially affects the spinfoam expression of the partition function. But all the spinfoams (transition channels) admitted in the simplified partition function still contribute to the full partition function (with some changes in the triangle/face amplitude and tetrahedron/edge amplitude).
Another key feature of our derivation is a different discretization of the BF action. Here we first break the faces dual to the triangles into wedges (see FIG.1) and then write the discretized BF action in terms of the holonomies along the boundaries of the wedges. Here, as usual, a wedge in the dual face is determined by a dual vertex or original 4-simplex and is thus denoted by . Its boundary consists of four segments defined as follows: The original (piecewise linear) 4-simplex has a barycentre, which is the dual vertex. The dual edges connect these barycentres. A pair of dual edges adjacent to the same dual vertex defines a face. Conversely, given a face and a dual vertex which is one of the corners of the face, we obtain two dual edges. These are dual to two tetrahedra of the original complex. The boundary of the wedge is now given by , where the hat denotes the respective barycentres. In an unfortunate abuse of notation which exploits the duality, one also writes this as . Using this notation we have (cf. FIG.1)
(1.21)  
where is the curvature 2-form integrated over the wedge determined by , and respectively are the aforementioned unique tetrahedra (or dual edges). This starting point results in the following structures in the resulting spinfoam model (these structures turn out to be similar to those proposed in [26]):

In contrast to the existing spinfoam models, where the SO(4) representations label the faces , the new spinfoam model derived in Section 4 has SO(4) representations labeling the wedges, i.e. a dual face having vertices (corners) in general carries different pairs , one for each wedge determined by the vertex dual to . However, in the large limit, the triangle/face amplitude is concentrated on SO(4) representations for any vertices of the same face .

Two neighboring wedges and of a face share a segment (cf. FIG.1) whose end points are the center of the face and the center of the edge dual to the tetrahedron . For each segment there is an SU(2) representation “mediating” the SO(4) representations on the two neighboring wedges, and , in the sense that has to lie in the range of the joint Clebsch-Gordan decomposition of and (cf. FIG.4), thus
(1.22)
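The allowed range of a mediating spin is just the usual Clebsch-Gordan range of the two SU(2) labels; a minimal sketch (the function name and the sample values are ours, for illustration):

```python
from fractions import Fraction

def allowed_mediating_spins(j_plus, j_minus):
    """Spins j in the Clebsch-Gordan decomposition of j_plus x j_minus:
    j = |j_plus - j_minus|, ..., j_plus + j_minus in integer steps."""
    lo, hi = abs(j_plus - j_minus), j_plus + j_minus
    return [lo + k for k in range(int(hi - lo) + 1)]

# Half-integer spins are handled exactly via Fraction:
jp, jm = Fraction(1), Fraction(1, 2)
print(allowed_mediating_spins(jp, jm))   # [Fraction(1, 2), Fraction(3, 2)]
print(allowed_mediating_spins(2, 1))     # [1, 2, 3]
```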
Note that the idea of implementing the c-number simplicity constraint strongly in the spinfoam model is not new, and has been employed in [26]. Some of the calculations, e.g. solving the simplicity constraint, are similar to the derivation in [26] (especially in the first reference of [26]). However, the discrete action Eq.(1.21) used here is different from the one used in [26]. The action here turns out to be important for understanding the noncommutative deformation and the relation to the Barrett-Crane Model in Appendix A, which is one of the key points of this paper.
An interesting result of the analysis here is the set of relations between the new spinfoam model derived here and the existing spinfoam models, e.g. the Barrett-Crane, EPRL, and FK Models. From the analysis in Section 4 we find, firstly, that in the large and large-area limit the spinfoams in our new model reduce to the spinfoams in the FK Model (with identical 4-simplex/vertex amplitude but different tetrahedron/edge and triangle/face amplitudes), at least for . Secondly, in Appendix A we study the noncommutative deformation of the partition function Eq.(2.1), in order to study how the noncommutative nature of the variables in the existing spinfoam models emerges from our commutative context. The noncommutative deformation we employ comes from a generalized Fourier transformation on the compact group [29] (the deformed partition function will be denoted by ). With this deformation, we find that the closure constraint really becomes redundant when we set the deformation parameter , while the redundancy is hard to show for a general deformation parameter. With the deformation parameter set to , we show that the noncommutative deformation of our new spinfoam model leads to the Barrett-Crane model when the Barbero-Immirzi parameter . This result explains how the noncommutative nature of the variables in the Barrett-Crane model relates to the commutative context of our new spin foam model in Section 4, and also explains to some extent why in the Barrett-Crane model the closure constraint is redundant (such an explanation also appears in the first reference of [30] from the group field theory perspective). On the other hand, the relation with the EPRL Model and the FK Model () is still veiled. What we know is that the allowed spinfoams (transition channels) in the EPRL Model form a subset of those allowed in our new spinfoam model (with the same 4-simplex/vertex amplitude but different tetrahedron/edge and triangle/face amplitudes), and this fact also holds for the FK Model for any .
All the above relations between the various spinfoam models are summarized in the following diagram, where the sets are the collections of spinfoams (transition channels) which contribute to the respective partition functions :
inclusion  
where means the inclusion in terms of contributing spinfoam amplitudes. We will discuss the details in Section 4.2.
2 Starting Point of the New Model
2.1 The Partition Function
In the last section we reviewed the treatment of the simplicity constraint and closure constraint in the existing spinfoam models, and summarized the approach and main results of the present article. In this section, we present the detailed construction and analysis of our new spinfoam model. We take a simplicial complex of the 4-dimensional manifold (in most of the discussion of the present paper, the manifold is assumed to be without boundary, so that the partition function is a number associated with the triangulation; the discussion can easily be generalized to the case with a boundary), where we denote the simplices by , the tetrahedra by and the triangles by . We take the following discretized partition function as the starting point for constructing the spinfoam model (such a spinfoam partition function can be understood as a sum over histories of SO(4) spin networks, as we will see in the following discussion):
(2.1)  
We explain the meaning of the variables appearing in the above definition:

are respectively the selfdual and antiselfdual part of the flux variable , which is the valued 2form field smeared on the triangle dual to while
(2.2) So given two tetrahedra sharing a face , the relation between and is thus
(2.3) where and . Such a “parallel transport condition” for means that to each triangle there is associated a unique pair , which ensures the right number of degrees of freedom for a discretization of Plebanski-Holst gravity. The are auxiliary variables which will be useful in the following derivation.

is the Haar measure on SU(2). are the selfdual and antiselfdual parts of the SO(4) holonomy along the half edge outgoing from the vertex , while are respectively the selfdual and antiselfdual parts of the SO(4) holonomy along the segments (see FIG.1).

The delta function imposes the simplicity constraint for each tetrahedron:
(2.4) while the delta function imposes the selfdual closure constraint for each tetrahedron. Note that there is no closure constraint for because the closure of is implied by the selfdual closure constraint and the simplicity constraint, as we will demonstrate shortly. So including it would be equivalent to multiplying the partition function by a divergent constant which drops out in expectation values. In addition, the closure constraint and the simplicity constraint Eq.(1.5) imply the 4-simplex constraints ():
(2.5) Here and . In the continuum limit of Eqs.(2.4) and (2.5), in which the holonomies can be replaced by the group unit, we recover the Plebanski simplicity constraints (20 equations):
(2.6) where is the 4-dimensional volume element. Note that there are essentially 20 constraint equations, since the trace part of Eq.(2.6) is an identity. The solutions of the simplicity constraints are well known: given a nondegenerate cotetrad , there are five sectors of solutions of the simplicity constraints [3]
(2.7) where are the selfdual and antiselfdual parts of .
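For the reader's convenience we recall the form in which these five sectors usually appear in the Plebanski literature (conventions for signs and duality factors vary; this is a reminder, not a substitute for Eq.(2.7)):

```latex
% topological sectors (two signs):
B^{IJ} \;=\; \pm\, e^{I} \wedge e^{J},
\qquad
% gravitational sectors (two signs), yielding the Einstein equations:
B^{IJ} \;=\; \pm\,\tfrac{1}{2}\,\epsilon^{IJ}{}_{KL}\, e^{K} \wedge e^{L},
% plus the degenerate sector, in which the 4-volume vanishes.
```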

The exponentials in come from the exponential of the BF action, discretized in terms of wedge holonomies . In more detail,
(2.8) where is the curvature 2form integrated on the wedge determined by

Finally we note that under the SO(4) gauge transformations:
(2.9) where denotes a gauge transformation and with the barycenter of etc.
Hence the traces of the exponentials
(2.10) and the simplicity constraint
(2.11) are invariant quantities while the closure constraint transforms covariantly
(2.12) The desire to maintain gauge (co)invariance of the action and constraints in the discretisation motivated us to introduce the quantities and , which in the continuum limit reduce to to leading order in the discretisation regulator.

One may wonder why we do not include functions enforcing the closure constraint for the “minus” sector. As we will see, the measure is supported on configurations satisfying for some SU(2). Thus
(2.13) is already implied by the “plus” sector. So we could include it, but that would result in an infinite constant which drops out in correlators. We assume that this has already been done.
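To spell this out, write the support condition schematically as a tetrahedron-wise conjugation of the selfdual sector by a single group element (notation ours; the precise form and sign are fixed by the solution of the simplicity constraint above). Then the antiselfdual closure follows at once from the selfdual one:

```latex
% schematic support condition:  B^{-}_{tf} = \pm\, g_t\, B^{+}_{tf}\, g_t^{-1}
% with one g_t \in SU(2) per tetrahedron t, so that
\sum_{f \subset t} B^{-}_{tf}
  \;=\; \pm\, g_t \Big( \sum_{f \subset t} B^{+}_{tf} \Big) g_t^{-1}
  \;=\; 0\,.
```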
Remark:
It may appear awkward that there are more holonomies than B fields here, suggesting a mismatch in the number of and degrees of freedom in contrast to the classical theory. We remark that the natural definition of the dual of a triangle really is the gluing of wedges (see e.g. the second reference of [1] in the notation used here and references therein). The boundary is naturally a composition of the half edges , where the hat denotes the barycentre of tetrahedron and 4-simplex respectively. Thus, if we discretized the action using the holonomy around the rather than around the wedges, the discretized action would depend only on the edges , and the properties of the Haar measure would ensure that the integrals over reduce to the integrals over . Thus, what we are doing here is to approximate by , where is the corresponding wedge holonomy, after having introduced the redundant variables . We are aware that this presents a further modification of the model, but it should be a mild one because both discretised actions have the same continuum limit. In fact, we will see that in the semiclassical (large) limit the representations on the wedges essentially coincide, so that effectively only the face holonomies are of relevance. It is certainly possible to define the commutative B field model without this step; however, it is very helpful to do so, as it facilitates the solution of otherwise cumbersome bookkeeping problems. We leave the definition of the model without a priori introduction of wedges for future work.
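Schematically, the approximation discussed in this remark can be written as follows, in illustrative notation (the symbols here are ours, not necessarily those used elsewhere in the paper): the face holonomy is replaced by the ordered product of wedge holonomies, each of which is itself the ordered product of the four segment holonomies along the wedge boundary,

```latex
g_{\partial f} \;\approx\; \prod_{v \in f} g_{w(v,f)},
\qquad
g_{w(v,f)} \;=\; g_{\hat v \hat t}\; g_{\hat t \hat f}\; g_{\hat f \hat t'}\; g_{\hat t' \hat v}\,,
```

where t and t' are the two tetrahedra sharing the triangle dual to f at the 4-simplex dual to v, and hats denote the respective barycentres, as above.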
2.2 Expansion of The Exponentials
To prepare for the integration over the holonomies and , we would like to expand the factors in terms of the SU(2) unitary irreducible representation matrix elements . So we define the matrix , , such that
(2.14) 
while the expression of can be obtained by
(2.15) 
Since