
# A Second Order Non-Smooth Variational Model for Restoring Manifold-Valued Images

Miroslav Bačák (Max Planck Institute for Mathematics in the Sciences, Inselstr. 22, 04103 Leipzig, Germany, bacak@mis.mpg.de), Ronny Bergmann and Gabriele Steidl (Department of Mathematics, Technische Universität Kaiserslautern, Paul-Ehrlich-Str. 31, 67663 Kaiserslautern, Germany, {bergmann, steidl}@mathematik.uni-kl.de), Andreas Weinmann (Department of Mathematics, Technische Universität München and Fast Algorithms for Biomedical Imaging Group, Helmholtz-Zentrum München, Ingolstädter Landstr. 1, 85764 Neuherberg, Germany, andreas.weinmann@tum.de)
October 26, 2016
###### Abstract

We introduce a new non-smooth variational model for the restoration of manifold-valued data which includes second order differences in the regularization term. While such models have been successfully applied to real-valued images, second order differences and the corresponding variational models have so far only been available for cyclic data; here we introduce them for data with values in a general manifold. The approach requires a combination of techniques from numerical analysis, convex optimization and differential geometry. First, we establish a suitable definition of absolute second order differences for signals and images with values in a manifold. Employing this definition, we introduce a variational denoising model based on first and second order differences in the manifold setup. In order to minimize the corresponding functional, we develop an algorithm using an inexact cyclic proximal point algorithm. We propose an efficient strategy for the computation of the corresponding proximal mappings in symmetric spaces utilizing the machinery of Jacobi fields. For the sphere $\mathbb{S}^2$ and the manifold of symmetric positive definite matrices, we demonstrate the performance of our algorithm in practice. We prove the convergence of the proposed exact and inexact variants of the cyclic proximal point algorithm in Hadamard spaces. These results, which are of interest in their own right, cover, e.g., the manifold of symmetric positive definite matrices.


#### Keywords.

manifold-valued data, second order differences, TV-like methods on manifolds, non-smooth variational methods, Jacobi fields, Hadamard spaces, proximal mappings, DT-MRI

## 1 Introduction

In this paper, we introduce a non-smooth variational model for the restoration of manifold-valued images using first and second order differences. The model can be seen as a second order generalization of the Rudin-Osher-Fatemi (ROF) functional [60] for images taking their values in a Riemannian manifold. For scalar-valued images, the ROF functional in its discrete, anisotropic penalized form is given by

$\frac12 \|f-u\|_2^2 + \lambda \|\nabla u\|_1, \qquad \lambda > 0,$

where $f$ is a given noisy image and $\nabla$ denotes the discrete first order difference operator, which usually contains the forward differences in vertical and horizontal direction. The frequently used ROF denoising model preserves important image structures such as edges, but tends to produce staircasing: instead of reconstructing smooth areas as such, the reconstruction consists of constant plateaus with small jumps. An approach for avoiding this effect incorporates higher order differences, respectively derivatives in a continuous setting. The pioneering work [15] couples the TV term with higher order terms by infimal convolution. Since then, various techniques with higher order differences/derivatives were proposed in the literature, among them [13, 17, 21, 22, 36, 43, 45, 46, 50, 61, 62, 63]. We further note that the second-order total generalized variation was extended to tensor fields in [72].

In various applications in image processing and computer vision the functions of interest take values in a Riemannian manifold. One example is diffusion tensor imaging where the data lives in the Riemannian manifold of positive definite matrices; see, e.g., [7, 14, 53, 65, 75, 78]. Other examples are color images based on non-flat color models [16, 40, 41, 73] where the data lives on spheres. Motion group-valued data play a role in tracking, robotics and (scene) motion analysis and were considered, e.g., in [28, 52, 55, 58, 70]. Because of the natural appearance of such nonlinear data spaces, processing manifold-valued data has gained a lot of interest in applied mathematics in recent years. As examples, we mention wavelet-type multiscale transforms [35, 54, 76], robust principal component pursuit on manifolds [37], and partial differential equations [19, 32, 69] for manifold-valued functions. Although statistics on Riemannian manifolds is not the focus of this work, we want to mention that, in recent years, many papers have appeared on this topic.

In [30, 31], the notion of total variation of functions having their values on a manifold was investigated based on the theory of Cartesian currents. These papers extend the previous work [29] where circle-valued functions were considered. The first work which applies a TV approach to circle-valued data for image processing tasks is [66, 67]. An algorithm for TV regularized minimization problems on Riemannian manifolds was proposed in [44]. There, the problem is reformulated as a multilabel optimization problem which is approached using convex relaxation techniques. Another approach to TV minimization for manifold-valued data which employs cyclic and parallel proximal point algorithms and does not require labeling and relaxation techniques was given in [77]. In a recent approach [34] the restoration of manifold-valued images was done using a smoothed TV model and an iteratively reweighted least squares technique. A method which circumvents the direct work with manifold-valued data by embedding the matrix manifold in the appropriate Euclidean space and applying a back projection to the manifold was suggested in [59]. This can also be extended to higher order derivatives since the derivatives (or differences) are computed in the Euclidean space.

Our paper is the next step in a program already consisting of a considerable body of work of the authors: in [77], variational models using first order differences for general manifold-valued data were developed. In [9], variational models using first and second order differences for circle-valued data were introduced. Using a suitable definition of second order differences on the circle the authors incorporate higher order differences into the energy functionals to improve the denoising results for circle-valued data. Furthermore, convergence for locally nearby data is shown. Our paper [11] extends this approach to product spaces of arbitrarily many circles and a vector space, and [10] to inpainting problems. Product spaces are important for example when dealing with nonlinear color spaces such as HSV.

This paper continues our recent work considerably by generalizing the combined first and second order variational models to general symmetric Riemannian manifolds. Besides cyclic data this includes general $n$-spheres, hyperbolic spaces, symmetric positive definite matrices as well as compact Lie groups and Grassmannians. First we provide a novel definition of absolute second order differences for data with values in a manifold. The definition is geometric and particularly appealing since it avoids using the tangent bundle. As a result, it is computationally accessible by the machinery of Jacobi fields which, in particular in symmetric spaces, yields rather explicit descriptions – even in this generality. Employing this definition, we introduce a variational model for denoising based on first and second order differences in the Riemannian manifold setup. In order to minimize the corresponding functional, we follow [9, 11, 77] and use a cyclic proximal point algorithm (PPA). In contrast to the aforementioned references, in our general setup, no closed form expressions are available for some of the proximal mappings involved. Therefore, we use, as an approximate strategy, a subgradient descent to compute them. For this purpose, we derive an efficient scheme. We show the convergence of the proposed exact and inexact variants of the cyclic PPA in a Hadamard space. This extends a result from [4], where the exact cyclic PPA in Hadamard spaces was proved to converge under more restrictive assumptions. Note that the basic (batch) version of the PPA in Hadamard spaces was introduced in [3]. Another related result is due to S. Banert [6], who developed both exact and inexact PPA for a regularized sum of two functions on a product of Hadamard spaces. In the context of Hadamard manifolds, the convergence of an inexact proximal point method for multivalued vector fields was studied in [74].

In this paper we prove the convergence of the (inexact) cyclic PPA under the general assumptions required by our model, which differ from those of the cited papers.

Our convergence statements apply in particular to the manifold of symmetric positive definite matrices. Finally, we demonstrate the performance of our algorithm in numerical experiments for denoising of images with values in spheres as well as in the space of symmetric positive definite matrices.

Our main application examples, namely $n$-spheres and manifolds of symmetric positive definite matrices, are, with respect to the sectional curvature, two extreme instances of symmetric spaces. The spheres have positive constant curvature, whereas the symmetric positive definite matrices are non-positively curved. Their geometry is totally different: e.g., in the manifolds of symmetric positive definite matrices the triangles are slim and there is no cut locus, which means that geodesics are always shortest paths. On $n$-spheres, however, every geodesic meets a cut point and triangles are always fat, meaning that the sum of the interior angles is always bigger than $\pi$. In our setup, however, it turns out that the sign of the sectional curvature is not important; what matters is the structure provided by symmetric spaces.

The outline of the paper is as follows: We start by introducing our variational restoration model in Section 2. In Section 3 we show how the (sub)gradients of the second order difference operators can be computed. Interestingly, this can be done by solving appropriate Jacobi equations. We describe the computation for general symmetric spaces. Then we focus on $n$-spheres and the space of symmetric positive definite matrices. The (sub)gradients are needed within our inexact cyclic PPA which is proposed in Section 4. A convergence analysis of the exact and inexact cyclic PPA is given for Hadamard manifolds. In Section 5 we validate our model and illustrate the good performance of our algorithms by numerical examples. The appendix provides some useful formulas for the computations. Further, Appendix C gives a brief introduction to the concept of parallel transport on manifolds in order to make our results better accessible for non-experts in differential geometry.

## 2 Variational Model

Let $\mathcal{M}$ be a complete $n$-dimensional Riemannian manifold with Riemannian metric $\langle \cdot,\cdot \rangle_x$, $x \in \mathcal{M}$, induced norm $\|\cdot\|_x$, and geodesic distance $d_{\mathcal{M}}$. Let $\nabla_{\mathcal{M}} F$ denote the Riemannian gradient of $F \colon \mathcal{M} \to \mathbb{R}$, which is characterized for all $\xi \in T_x\mathcal{M}$ by

$\langle \nabla_{\mathcal{M}} F(x), \xi \rangle_x = D_x F[\xi],$  (1)

where $D_x F$ denotes the differential of $F$ at $x$, see Appendix C.

Let $\gamma_{x,\xi}(t)$, $t \in \mathbb{R}$, be the unique geodesic starting from $x$ with $\dot\gamma_{x,\xi}(0) = \xi$. Further let $\tilde\gamma_{x,z}$ denote a unit speed geodesic connecting $x, z \in \mathcal{M}$. Then it fulfills $\tilde\gamma_{x,z}(0) = x$, $\tilde\gamma_{x,z}(L(\tilde\gamma_{x,z})) = z$, where $L(\tilde\gamma_{x,z})$ denotes the length of the geodesic. We further denote by $\gamma_{x,z}$ a minimizing geodesic, i.e., a geodesic having minimal length $L(\gamma_{x,z}) = d_{\mathcal{M}}(x,z)$. If it is clear from the context, we write geodesic instead of minimizing geodesic, but keep the notation $\tilde\gamma$ when referring to all geodesics including the non-minimizing ones. We use the exponential map $\exp_x \colon T_x\mathcal{M} \to \mathcal{M}$ given by $\exp_x \xi = \gamma_{x,\xi}(1)$ and the inverse exponential map denoted by $\log_x \colon \mathcal{M} \to T_x\mathcal{M}$.

The core of our restoration model are absolute second order differences of points lying on a manifold. In the following we define such differences in a sound way. The basic idea is to rewrite the Euclidean norm of the componentwise second order difference $x - 2y + z$ in $\mathbb{R}^n$ as $2\,\bigl\| \frac{x+z}{2} - y \bigr\|$, i.e., as twice the distance of $y$ to the midpoint of $x$ and $z$, see Fig. 1 (a). We define the set of midpoints of geodesics joining $x, z \in \mathcal{M}$ as

$\mathcal{C}_{x,z} \coloneqq \bigl\{ c \in \mathcal{M} : c = \tilde\gamma_{x,z}\bigl(\tfrac12 L(\tilde\gamma_{x,z})\bigr) \text{ for any geodesic } \tilde\gamma_{x,z} \bigr\}$

and the absolute second order difference operator by

$d_2(x,y,z) \coloneqq \min_{c \in \mathcal{C}_{x,z}} d_{\mathcal{M}}(c,y), \qquad x,y,z \in \mathcal{M}.$  (2)

The definition is illustrated for $\mathcal{M} = \mathbb{S}^2$ in Fig. 1 (b). For the manifold $\mathcal{M} = \mathbb{S}^1$, definition (2) coincides, up to a constant factor, with that of the absolute second order differences in [9]. Similarly, we define the second order mixed differences as

$d_{1,1}(w,x,y,z) \coloneqq \min_{c \in \mathcal{C}_{w,y},\, \tilde c \in \mathcal{C}_{x,z}} d_{\mathcal{M}}(c,\tilde c), \qquad w,x,y,z \in \mathcal{M}.$  (3)

Let $f \in \mathcal{M}^{N \times M}$ be a given (noisy) image. We want to denoise manifold-valued images by minimizing functionals of the form

$E(u) = E(u;f) \coloneqq F(u;f) + \alpha\,\mathrm{TV}_1(u) + \beta\,\mathrm{TV}_2(u),$  (4)

where

$F(u;f) \coloneqq \frac12 \sum_{i,j=1}^{N,M} d_{\mathcal{M}}(f_{i,j}, u_{i,j})^2,$  (5)

$\alpha\,\mathrm{TV}_1(u) \coloneqq \alpha_1 \sum_{i,j=1}^{N-1,M} d_{\mathcal{M}}(u_{i,j}, u_{i+1,j}) + \alpha_2 \sum_{i,j=1}^{N,M-1} d_{\mathcal{M}}(u_{i,j}, u_{i,j+1}),$  (6)

$\beta\,\mathrm{TV}_2(u) \coloneqq \beta_1 \sum_{i=2,j=1}^{N-1,M} d_2(u_{i-1,j}, u_{i,j}, u_{i+1,j}) + \beta_2 \sum_{i=1,j=2}^{N,M-1} d_2(u_{i,j-1}, u_{i,j}, u_{i,j+1})$  (7)

$\qquad\qquad + \beta_3 \sum_{i,j=1}^{N-1,M-1} d_{1,1}(u_{i,j}, u_{i,j+1}, u_{i+1,j}, u_{i+1,j+1}).$  (8)

For minimizing the functional we want to apply a cyclic PPA [4, 12]. This algorithm sequentially computes the proximal mappings of the summands involved in the functional. While the proximal mappings of the summands in the data term (5) and in the first order regularization term (6) are known analytically, see [27] and [77], respectively, the proximal mappings of $d_2$ and $d_{1,1}$ are only known analytically in the special case $\mathcal{M} = \mathbb{S}^1$, see [9]. In the following section we deal with the computation of the proximal mapping of $d_2$. The mixed difference $d_{1,1}$ can be treated in a similar way.
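For the squared-distance data term, the proximal mapping has the well-known closed form $\mathrm{prox}_{\lambda\, d_{\mathcal{M}}(f,\cdot)^2/2}(u) = \exp_u\bigl(\frac{\lambda}{1+\lambda}\log_u f\bigr)$, i.e., $u$ moves toward $f$ along the geodesic by the fraction $\lambda/(1+\lambda)$. The sketch below instantiates this on $\mathbb{S}^2$ (helper names are ours, points assumed non-antipodal):

```python
import numpy as np

def exp_map(x, v):
    """Exponential map on the unit sphere: follow the geodesic from x in direction v."""
    t = np.linalg.norm(v)
    if t < 1e-16:
        return x
    return np.cos(t) * x + np.sin(t) * (v / t)

def log_map(x, y):
    """Inverse exponential map on the unit sphere (x, y not antipodal)."""
    p = y - np.dot(x, y) * x              # project y onto the tangent space at x
    n = np.linalg.norm(p)
    if n < 1e-16:
        return np.zeros_like(x)
    return np.arccos(np.clip(np.dot(x, y), -1.0, 1.0)) * p / n

def prox_data(u, f, lam):
    """Prox of lam * d(f, .)^2 / 2: move u toward f by the fraction lam/(1+lam)."""
    return exp_map(u, (lam / (1.0 + lam)) * log_map(u, f))
```

For $\lambda = 1$ the result is exactly the geodesic midpoint of $u$ and $f$.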

## 3 Subgradients of Second Order Differences on $\mathcal{M}$

Since we work in a Riemannian manifold, it is necessary to impose an assumption that guarantees that the involved points do not take pathological (practically irrelevant) constellations to make the following derivations meaningful. In particular, we assume in this section that there is exactly one shortest geodesic joining $x$ and $z$, i.e., $z$ is not a cut point of $x$; cf. [23]. We note that this is no severe restriction since such point constellations form a set of measure zero. Moreover, we restrict our attention to the case where the minimizer in (2) is attained at the corresponding geodesic midpoint, which we denote by $c(x,z)$.

We want to compute the proximal mapping of $d_2$ by a (sub)gradient descent algorithm. This requires the computation of the (sub)gradient of $d_2$, which is done in the following subsections. For $y \neq c(x,z)$ the subgradient of $d_2$ coincides with its gradient

$\nabla_{\mathcal{M}^3} d_2 = \bigl( \nabla_{\mathcal{M}} d_2(\cdot,y,z),\, \nabla_{\mathcal{M}} d_2(x,\cdot,z),\, \nabla_{\mathcal{M}} d_2(x,y,\cdot) \bigr)^{\mathrm{T}}.$  (9)

If $y = c(x,z)$, then $d_2$ is not differentiable. However, we will characterize the subgradients in Remark 3.4. In particular, the zero vector is a subgradient, which is used in our subgradient descent algorithm.

### 3.1 Gradients of the Components of Second Order Differences

We start with the computation of the second component of the gradient (9). In general we have for $x, y \in \mathcal{M}$, see [68], that

$\nabla_{\mathcal{M}} d_{\mathcal{M}}^2(x,y) = -2 \log_x y, \qquad \nabla_{\mathcal{M}} d_{\mathcal{M}}(x,y) = -\frac{\log_x y}{\|\log_x y\|_x}, \quad x \neq y.$  (10)
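The first formula in (10) can be checked numerically on $\mathbb{S}^2$ by comparing a finite-difference directional derivative of $d_{\mathcal{M}}(\cdot,y)^2$ with the inner product against $-2\log_x y$ (sphere helpers repeated here to keep the sketch self-contained; names are ours):

```python
import numpy as np

def dist(p, q):
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))

def exp_map(x, v):
    t = np.linalg.norm(v)
    return np.cos(t) * x + np.sin(t) * (v / t) if t > 1e-16 else x

def log_map(x, y):
    p = y - np.dot(x, y) * x
    n = np.linalg.norm(p)
    return dist(x, y) * p / n if n > 1e-16 else np.zeros_like(x)

def grad_dist2(x, y):
    """Riemannian gradient of d(., y)^2 at x, first formula in (10)."""
    return -2.0 * log_map(x, y)
```

The directional derivative along any tangent vector $\xi$ should agree with $\langle -2\log_x y, \xi \rangle$ up to finite-difference error.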
###### Lemma 3.1.

The second component of $\nabla_{\mathcal{M}^3} d_2$ in (9) is given for $y \neq c(x,z)$ by

$\nabla_{\mathcal{M}} d_2(x,\cdot,z)(y) = -\frac{\log_y c(x,z)}{\|\log_y c(x,z)\|_y}.$  (11)
###### Proof.

Applying (10) to $d_{\mathcal{M}}(c(x,z), \cdot)$ we obtain the assertion. ∎

By the symmetry of $d_2$ in $x$ and $z$, both gradients $\nabla_{\mathcal{M}} d_2(\cdot,y,z)$ and $\nabla_{\mathcal{M}} d_2(x,y,\cdot)$ can be realized in the same way, so that we can restrict our attention to the first one. For fixed $y, z$ we will use the notation $c(x)$ instead of $c(x,z)$. Let $\gamma_{x,\upsilon}$ denote the unit speed geodesic joining $x$ and $z$, i.e.,

$\gamma_{x,\upsilon}(0) = x, \qquad \dot\gamma_{x,\upsilon}(0) = \upsilon = \frac{\log_x z}{\|\log_x z\|_x} \qquad \text{and} \qquad \gamma_{x,\upsilon}(T) = z, \quad T \coloneqq d_{\mathcal{M}}(x,z).$

We will further need the notion of parallel transport. For readers who are not familiar with this concept we give a brief introduction in Appendix C. We denote a parallel transported orthonormal frame along $\gamma_{x,\upsilon}$ by

$\{\Xi_1 = \Xi_1(t), \dots, \Xi_n = \Xi_n(t)\}.$  (12)

For $t = 0$ we use the special notation $\xi_k \coloneqq \Xi_k(0)$, $k = 1, \dots, n$.

###### Lemma 3.2.

The first component of $\nabla_{\mathcal{M}^3} d_2$ in (9) is given for $y \neq c(x,z)$ by

$\nabla_{\mathcal{M}} d_2(\cdot,y,z)(x) = -\sum_{k=1}^n \Bigl\langle \frac{\log_{c(x)} y}{\|\log_{c(x)} y\|_{c(x)}},\, D_x c[\xi_k] \Bigr\rangle_{c(x)} \xi_k.$  (13)
###### Proof.

For $F \colon \mathcal{M} \to \mathbb{R}$ defined by $F(x) \coloneqq d_2(x,y,z) = d_{\mathcal{M}}(c(x), y)$ we are looking for the coefficients $a_k$ in

$\nabla_{\mathcal{M}} F(x) = \sum_{k=1}^n a_k \xi_k.$  (14)

For any tangential vector $\eta = \sum_{k=1}^n \eta_k \xi_k \in T_x\mathcal{M}$ we have

$\langle \nabla_{\mathcal{M}} F(x), \eta \rangle_x = D_x F[\eta] = \sum_{k=1}^n \eta_k a_k.$  (15)

Since $F = f \circ c$ with $f \coloneqq d_{\mathcal{M}}(\cdot, y)$, we obtain by the chain rule

$D_x F[\eta] = (D_{c(x)} f \circ D_x c)[\eta].$

Now the differential of $c$ is determined by

$D_x c[\eta] = \sum_{k=1}^n \eta_k\, D_x c[\xi_k] \in T_{c(x)}\mathcal{M}.$

Then it follows by (1) and (10) that

$D_x F[\eta] = D_{c(x)} f[D_x c[\eta]] = \langle \nabla_{\mathcal{M}} f(c(x)), D_x c[\eta] \rangle_{c(x)} = \Bigl\langle -\frac{\log_{c(x)} y}{\|\log_{c(x)} y\|_{c(x)}},\, \sum_{k=1}^n \eta_k D_x c[\xi_k] \Bigr\rangle_{c(x)},$

and consequently

$a_k = -\Bigl\langle \frac{\log_{c(x)} y}{\|\log_{c(x)} y\|_{c(x)}},\, D_x c[\xi_k] \Bigr\rangle_{c(x)}, \qquad k = 1, \dots, n.$  (16)

By (15) and (14) we obtain the assertion (13). ∎

For the computation of $D_x c[\xi_k]$ we can exploit Jacobi fields, which are defined as follows: for $k = 1, \dots, n$, let $\sigma_{x,\xi_k}(s)$, $s \in (-\varepsilon, \varepsilon)$, denote the unit speed geodesic with $\sigma_{x,\xi_k}(0) = x$ and $\dot\sigma_{x,\xi_k}(0) = \xi_k$. Let $\zeta_k(s)$ denote the tangential vector in $\sigma_{x,\xi_k}(s)$ of the geodesic joining $\sigma_{x,\xi_k}(s)$ and $z$. Then

$\Gamma_k(s,t) \coloneqq \exp_{\sigma_{x,\xi_k}(s)}\bigl(t\,\zeta_k(s)\bigr), \qquad s \in (-\varepsilon, \varepsilon),\ t \in [0,T],$

with small $\varepsilon > 0$ are variations of the geodesic $\gamma_{x,\upsilon}$ and

$J_k(t) \coloneqq \frac{\partial}{\partial s}\Gamma_k(s,t)\Bigr|_{s=0}, \qquad k = 1, \dots, n,$

are the corresponding Jacobi fields along $\gamma_{x,\upsilon}$. For an illustration see Fig. 2. Since $\Gamma_k(s,0) = \sigma_{x,\xi_k}(s)$ and $\Gamma_k(s,T) = z$ for all $s$ we have

$J_k(0) = \xi_k, \qquad J_k(T) = 0, \qquad k = 1, \dots, n.$  (17)

Since $c(x) = \gamma_{x,\upsilon}\bigl(\frac{T}{2}\bigr)$, we conclude by definition (68) of the differential that

$D_x c[\xi_k] = \frac{d}{ds}(c \circ \sigma_{x,\xi_k})(s)\Bigr|_{s=0} = \frac{d}{ds}\Gamma_k\bigl(s, \tfrac{T}{2}\bigr)\Bigr|_{s=0} = J_k\bigl(\tfrac{T}{2}\bigr), \qquad k = 1, \dots, n.$

Any Jacobi field $J$ of a variation through $\gamma_{x,\upsilon}$ fulfills the linear system of ordinary differential equations (ODE) [42, Theorem 10.2]

$\frac{D^2}{dt^2}J + R(J, \dot\gamma_{x,\upsilon})\dot\gamma_{x,\upsilon} = 0,$  (18)

where $R$ denotes the Riemannian curvature tensor defined by $R(X,Y)Z = \nabla_X \nabla_Y Z - \nabla_Y \nabla_X Z - \nabla_{[X,Y]} Z$. Here $[X,Y]$ is the Lie bracket, see Appendix C. Our special Jacobi fields have to meet the boundary conditions (17). We summarize:

###### Lemma 3.3.

The vectors $D_x c[\xi_k]$, $k = 1, \dots, n$, in (13) are given by

$D_x c[\xi_k] = J_k\bigl(\tfrac{T}{2}\bigr), \qquad k = 1, \dots, n,$  (19)

where $J_k$ are the Jacobi fields determined by

$\frac{D^2}{dt^2}J_k + R(J_k, \dot\gamma_{x,\upsilon})\dot\gamma_{x,\upsilon} = 0, \qquad J_k(0) = \xi_k, \quad J_k(T) = 0.$

Finally, we give a representation of the subgradients of $d_2$ in the case $y = c(x,z)$.

###### Remark 3.4.

Let $x, y, z \in \mathcal{M}$ with $y = c(x,z)$ and

$V \coloneqq \bigl\{ \xi \coloneqq (\xi_x, \xi_y, \xi_z) : \xi_x \in T_x\mathcal{M},\ \xi_y = J_{\xi_x,\xi_z}\bigl(\tfrac{T}{2}\bigr),\ \xi_z \in T_z\mathcal{M} \bigr\},$  (20)

where $J_{\xi_x,\xi_z}$ is the Jacobi field along the unit speed geodesic $\gamma_{x,\upsilon}$ determined by $J_{\xi_x,\xi_z}(0) = \xi_x$ and $J_{\xi_x,\xi_z}(T) = \xi_z$. Then the subdifferential of $d_2$ at $(x,y,z)$ reads

$\partial d_2(x,y,z) = \{\alpha\eta : \eta \in V^\perp,\ \alpha \in [l_\eta, u_\eta]\},$  (21)

where $V^\perp$ denotes the set of normalized vectors $\eta = (\eta_x, \eta_y, \eta_z)$ fulfilling

$\langle \xi_x, \eta_x \rangle_x + \langle \xi_y, \eta_y \rangle_y + \langle \xi_z, \eta_z \rangle_z = 0$

for all $\xi \in V$, and the interval endpoints $l_\eta$, $u_\eta$ are given by

$l_\eta = \lim_{\tau \uparrow 0} \frac{d_2\bigl(\exp_x(\tau\eta_x), \exp_y(\tau\eta_y), \exp_z(\tau\eta_z)\bigr)}{\tau},$  (22)

$u_\eta = \lim_{\tau \downarrow 0} \frac{d_2\bigl(\exp_x(\tau\eta_x), \exp_y(\tau\eta_y), \exp_z(\tau\eta_z)\bigr)}{\tau}.$  (23)

This can be seen as follows: For arbitrary tangent vectors $\xi_x$, $\xi_z$ sitting in $x$, $z$, respectively, we consider the uniquely determined geodesic variation $\Gamma(s,t)$ given by the side conditions $\Gamma(s,0) = \exp_x(s\xi_x)$ as well as $\Gamma(s,T) = \exp_z(s\xi_z)$. We note that $\Gamma\bigl(s,\frac{T}{2}\bigr)$ is the midpoint of $\Gamma(s,0)$ and $\Gamma(s,T)$, which implies for all $s$ that

$d_2\bigl(\Gamma(s,0), \Gamma(s, \tfrac{T}{2}), \Gamma(s,T)\bigr) = 0.$  (24)

In view of the definition of a subgradient, see [26, 33], it is required, for a candidate $\eta = (\eta_x, \eta_y, \eta_z)$, that

$d_2\bigl(\exp_x(h_x), \exp_y(h_y), \exp_z(h_z)\bigr) - d_2(x,y,z) \geq \langle h, \eta \rangle + o(h)$  (25)

for any sufficiently small $h = (h_x, h_y, h_z)$. Setting $h = s\xi$ with $\xi \in V$, equation (24) tells us that the left hand side above vanishes up to $o(s)$, and thus, for a candidate $\eta$,

$\langle \xi_x, \eta_x \rangle_x + \bigl\langle J_{\xi_x,\xi_z}\bigl(\tfrac{T}{2}\bigr), \eta_y \bigr\rangle_y + \langle \xi_z, \eta_z \rangle_z = o(h)$

for all $\xi$ of small magnitude. Since these are actually linear equations for $\xi$ in the tangent space, we get

$\langle \xi_x, \eta_x \rangle_x + \bigl\langle J_{\xi_x,\xi_z}\bigl(\tfrac{T}{2}\bigr), \eta_y \bigr\rangle_y + \langle \xi_z, \eta_z \rangle_z = 0.$

Then, if $\eta \in V^\perp$, the numerators in (22) are nonzero and the limits exist. Finally, we conclude (21) from (25).

In the following subsection we recall how Jacobi fields can be computed for general symmetric spaces and take a look at two special examples, namely $n$-spheres and the manifold of symmetric positive definite matrices.

### 3.2 Jacobi Equation for Symmetric Spaces

Due to their rich structure symmetric spaces have been the object of differential geometric studies for a long time, and we refer to [24, 25] or the books [8, 18] for more information.

A Riemannian manifold is called locally symmetric if the geodesic reflection at each point $p$, given by mapping $\gamma(t) \mapsto \gamma(-t)$ for all geodesics $\gamma$ through $p$ with $\gamma(0) = p$, is a local isometry, i.e., an isometry at least locally near $p$. If this property holds globally, the manifold is called a (Riemannian globally) symmetric space. More formally, $\mathcal{M}$ is a symmetric space if for any $p \in \mathcal{M}$ there is an isometry $\sigma_p$ on $\mathcal{M}$ such that $\sigma_p(p) = p$ and $D_p\sigma_p = -\mathrm{id}$ on $T_p\mathcal{M}$. A Riemannian manifold is locally symmetric if and only if there exists a symmetric space which is locally isometric to it. As a consequence of the Cartan–Ambrose–Hicks theorem [18, Theorem 1.36], every simply connected, complete, locally symmetric space is symmetric. Symmetric spaces are precisely the homogeneous spaces with a symmetry at some point $p$. Beyond $n$-spheres and the manifold of symmetric positive definite matrices, hyperbolic spaces, Grassmannians as well as compact Lie groups are examples of symmetric spaces. The crucial property we need is that a Riemannian manifold is locally symmetric if and only if the covariant derivative of the Riemannian curvature tensor along curves is zero, i.e.,

 ∇R=0. (26)
###### Proposition 3.5.

Let $\mathcal{M}$ be a symmetric space. Let $\gamma$ be a unit speed geodesic and $\{\Theta_1(t), \dots, \Theta_n(t)\}$ a parallel transported orthonormal frame along $\gamma$. Let $J$ be a Jacobi field of a variation through $\gamma$. Set $J(t) = \sum_{i=1}^n a_i(t)\,\Theta_i(t)$. Then the following relations hold true:

1. The Jacobi equation (18) can be written as

$a''(t) + G\,a(t) = 0,$  (27)

with the constant coefficient matrix $G \coloneqq \bigl( \langle R(\Theta_i, \dot\gamma)\dot\gamma, \Theta_j \rangle \bigr)_{i,j=1}^n$.

2. Let the initial orthonormal basis $\{\Theta_1(0), \dots, \Theta_n(0)\}$ be chosen such that it diagonalizes the operator

 Θ↦R(Θ,˙γ)˙γ (28)

at $\gamma(0)$ with corresponding eigenvalues $\kappa_i$, $i = 1, \dots, n$, and let $\{\Theta_1(t), \dots, \Theta_n(t)\}$ be the corresponding parallel transported frame along $\gamma$. Then the matrix $G$ becomes diagonal and (27) decomposes into the ordinary linear differential equations

$a_i''(t) + \kappa_i a_i(t) = 0, \qquad i = 1, \dots, n.$  (29)
3. The Jacobi fields

$J_k(t) \coloneqq \begin{cases} \sinh(\sqrt{-\kappa_k}\,t)\,\Theta_k(t), & \text{if } \kappa_k < 0,\\ \sin(\sqrt{\kappa_k}\,t)\,\Theta_k(t), & \text{if } \kappa_k > 0,\\ t\,\Theta_k(t), & \text{if } \kappa_k = 0, \end{cases}$  (30)

form a basis of the $n$-dimensional linear space of Jacobi fields of a variation through $\gamma$ fulfilling the initial condition $J(0) = 0$.

Part i) of Proposition 3.5 is also stated as Property A in Rauch’s paper [56] in the particularly nice form “The curvature of a 2-section propagated parallel along a geodesic is constant.” For convenience we add the proof.

###### Proof.

i) Using the frame representation of $J$, (69) and the linearity of $R$ in the first argument, the Jacobi equation (18) becomes

$0 = \sum_{i=1}^n a_i''(t)\,\Theta_i(t) + a_i(t)\,R(\Theta_i, \dot\gamma)\dot\gamma,$  (31)

and by taking inner products with $\Theta_j$ further

$0 = a_j''(t) + \sum_{i=1}^n \langle R(\Theta_i, \dot\gamma)\dot\gamma, \Theta_j \rangle\, a_i(t), \qquad j = 1, \dots, n.$  (32)

Now (26) implies for $R(\Theta_i, \dot\gamma)\dot\gamma = \sum_{k=1}^n r_{ik}(t)\,\Theta_k(t)$ that

$0 = \nabla_{\dot\gamma} R(\Theta_i, \dot\gamma)\dot\gamma = \sum_{k=1}^n r_{ik}'(t)\,\Theta_k(t),$

which is, by the linear independence of the $\Theta_k(t)$, $k = 1, \dots, n$, only possible if all $r_{ik}$ are constant, and we get $G = (r_{ik})_{i,k=1}^n$.

Parts ii) and iii) follow directly from i). For iii) we also refer to [18, p. 77]. ∎

With respect to our special Jacobi fields in Lemma 3.3 we obtain the following corollary.

###### Corollary 3.6.

The Jacobi fields $J_k$, $k = 1, \dots, n$, of a variation through $\gamma_{x,\upsilon}$ with boundary conditions $J_k(0) = \xi_k$ and $J_k(T) = 0$ fulfill

$J_k\bigl(\tfrac{T}{2}\bigr) = \begin{cases} \dfrac{\sinh(\sqrt{-\kappa_k}\,\frac{T}{2})}{\sinh(\sqrt{-\kappa_k}\,T)}\,\Xi_k\bigl(\tfrac{T}{2}\bigr), & \text{if } \kappa_k < 0,\\[2ex] \dfrac{\sin(\sqrt{\kappa_k}\,\frac{T}{2})}{\sin(\sqrt{\kappa_k}\,T)}\,\Xi_k\bigl(\tfrac{T}{2}\bigr), & \text{if } \kappa_k > 0,\\[2ex] \tfrac12\,\Xi_k\bigl(\tfrac{T}{2}\bigr), & \text{if } \kappa_k = 0. \end{cases}$  (33)
###### Proof.

The Jacobi fields $\bar J_k$ of a variation through the reversed geodesic $t \mapsto \gamma_{x,\upsilon}(T-t)$ satisfy $\bar J_k(0) = 0$, and by Proposition 3.5 iii) they are given by

$\bar J_k(t) \coloneqq \begin{cases} \sinh(\sqrt{-\kappa_k}\,t)\,\Xi_k(T-t), & \text{if } \kappa_k < 0,\\ \sin(\sqrt{\kappa_k}\,t)\,\Xi_k(T-t), & \text{if } \kappa_k > 0,\\ t\,\Xi_k(T-t), & \text{if } \kappa_k = 0. \end{cases}$

In particular we have

$\bar J_k(T) = \begin{cases} \sinh(\sqrt{-\kappa_k}\,T)\,\xi_k, & \text{if } \kappa_k < 0,\\ \sin(\sqrt{\kappa_k}\,T)\,\xi_k, & \text{if } \kappa_k > 0,\\ T\,\xi_k, & \text{if } \kappa_k = 0. \end{cases}$

Now $\bar J_k(T) = \alpha_k\,\xi_k$ determines $\alpha_k$ as

$\alpha_k = \begin{cases} \sinh(\sqrt{-\kappa_k}\,T), & \text{if } \kappa_k < 0,\\ \sin(\sqrt{\kappa_k}\,T), & \text{if } \kappa_k > 0,\\ T, & \text{if } \kappa_k = 0, \end{cases}$

and $J_k(t) = \frac{1}{\alpha_k}\,\bar J_k(T-t)$. We notice that the denominators of the appearing fractions are nonzero since $x$ and $z$ were assumed to be non-conjugate points. Finally, evaluating at $t = \frac{T}{2}$ we get (33). ∎
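Relation (33) can be checked numerically: the coefficient $a(t)$ of such a Jacobi field solves the scalar two-point boundary value problem (29) with $a(0) = 1$, $a(T) = 0$, which a central finite difference discretization approximates (the grid size is our own choice):

```python
import numpy as np

def jacobi_midpoint_fd(kappa, T, n=1001):
    """Solve a'' + kappa * a = 0 with a(0) = 1, a(T) = 0 by central finite
    differences on a uniform grid and return the approximation of a(T/2)."""
    h = T / (n - 1)
    m = n - 2                              # number of interior grid points
    A = np.zeros((m, m))
    rhs = np.zeros(m)
    for i in range(m):
        A[i, i] = -2.0 / h**2 + kappa
        if i > 0:
            A[i, i - 1] = 1.0 / h**2
        if i < m - 1:
            A[i, i + 1] = 1.0 / h**2
    rhs[0] = -1.0 / h**2                   # boundary value a(0) = 1 moved to the rhs
    a = np.linalg.solve(A, rhs)
    full = np.concatenate(([1.0], a, [0.0]))
    return full[(n - 1) // 2]              # grid node at t = T/2
```

The result should match the corresponding closed-form coefficient in (33), e.g., $\sin(\sqrt{\kappa}\,\frac{T}{2})/\sin(\sqrt{\kappa}\,T)$ for $\kappa > 0$, up to discretization error.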

Let us apply our findings to the sphere $\mathbb{S}^n$ and the manifold of symmetric positive definite matrices.

#### The Sphere $\mathbb{S}^n$.

We consider the $n$-sphere $\mathbb{S}^n \subset \mathbb{R}^{n+1}$. Then the Riemannian metric is just the Euclidean inner product of $\mathbb{R}^{n+1}$ restricted to the tangent spaces. For the definitions of the geodesic distance, the exponential map and parallel transport see Appendix A. Let $x, z \in \mathbb{S}^n$. We choose $\xi_1 \coloneqq \upsilon = \dot\gamma_{x,\upsilon}(0)$ and complete this to an orthonormal basis $\{\xi_1, \dots, \xi_n\}$ of $T_x\mathbb{S}^n$ with corresponding parallel frame along $\gamma_{x,\upsilon}$. Then diagonalizing the operator (28) is especially simple. Since $\mathbb{S}^n$ has constant curvature $1$, the Riemannian curvature tensor fulfills [42, Lemma 8.10] $R(X,Y)Z = \langle Y, Z \rangle X - \langle X, Z \rangle Y$. Consequently,

$R(\Theta, \dot\gamma)\dot\gamma = \langle \dot\gamma, \dot\gamma \rangle\,\Theta - \langle \Theta, \dot\gamma \rangle\,\dot\gamma = \Theta - \langle \Theta, \dot\gamma \rangle\,\dot\gamma,$

so that at $t = 0$ the vector $\xi_1 = \dot\gamma_{x,\upsilon}(0)$ is an eigenvector with eigenvalue $\kappa_1 = 0$ and $\xi_k$, $k = 2, \dots, n$, are eigenvectors with eigenvalues $\kappa_k = 1$. Consequently, we obtain by Lemma 3.3 and Corollary 3.6 the following corollary.

###### Corollary 3.7.

For the sphere $\mathbb{S}^n$ and the above choice of the orthonormal frame system, the following relations hold true:

$D_x c[\xi_1] = \tfrac12\,\Xi_1\bigl(\tfrac{T}{2}\bigr), \qquad D_x c[\xi_k] = \frac{\sin\frac{T}{2}}{\sin T}\,\Xi_k\bigl(\tfrac{T}{2}\bigr), \quad k = 2, \dots, n.$
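The factor $\sin\frac{T}{2}/\sin T$ can be verified by numerically differentiating the midpoint map on $\mathbb{S}^2$. In the setup below (our own choice) $x$ and $z$ lie on the equator, so the parallel transported normal direction $e_3$ is constant along the geodesic:

```python
import numpy as np

def midpoint(p, q):
    """Geodesic midpoint on the unit sphere (p, q not antipodal)."""
    m = p + q
    return m / np.linalg.norm(m)

def exp_map(x, v):
    """Exponential map on the unit sphere."""
    t = np.linalg.norm(v)
    return np.cos(t) * x + np.sin(t) * (v / t) if t > 0 else x

T = 1.2                                  # geodesic distance between x and z
x = np.array([1.0, 0.0, 0.0])
z = np.array([np.cos(T), np.sin(T), 0.0])
xi = np.array([0.0, 0.0, 1.0])           # tangent at x, orthogonal to the geodesic

# central finite difference of the midpoint map x -> c(x) = mid(x, z)
h = 1e-6
dc = (midpoint(exp_map(x, h * xi), z) - midpoint(exp_map(x, -h * xi), z)) / (2 * h)
factor = np.dot(dc, xi)                  # Xi_k(T/2) equals e_3 along the equator
```

Corollary 3.7 predicts `factor` to be $\sin\frac{T}{2}/\sin T$, which the finite difference reproduces up to discretization error.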

#### Symmetric Positive Definite Matrices.

Let $\mathrm{Sym}(r)$ denote the space of symmetric $r \times r$ matrices with (Frobenius) inner product and norm

$\langle A, B \rangle \coloneqq \sum_{i,j=1}^r a_{ij} b_{ij}, \qquad \|A\| \coloneqq \Bigl( \sum_{i,j=1}^r a_{ij}^2 \Bigr)^{\frac12}.$  (34)

Let $\mathcal{P}(r)$ be the manifold of symmetric positive definite $r \times r$ matrices. It has dimension $n = \frac{r(r+1)}{2}$. The tangent space of $\mathcal{P}(r)$ at $x$ is given by $T_x\mathcal{P}(r) = \mathrm{Sym}(r)$, in particular $T_I\mathcal{P}(r) = \mathrm{Sym}(r)$, where $I$ denotes the identity matrix. The Riemannian metric on $T_x\mathcal{P}(r)$ reads

$\langle \eta_1, \eta_2 \rangle_x \coloneqq \mathrm{tr}\bigl(\eta_1 x^{-1} \eta_2 x^{-1}\bigr) = \bigl\langle x^{-\frac12}\eta_1 x^{-\frac12},\, x^{-\frac12}\eta_2 x^{-\frac12} \bigr\rangle,$  (35)

where $\langle \cdot, \cdot \rangle$ denotes the matrix inner product (34). For the definitions of the geodesic distance, exponential map and parallel transport see Appendix B.
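Under the metric (35), the geodesic distance and midpoint on $\mathcal{P}(r)$ have the well-known closed forms $d(x,z) = \|\log(x^{-\frac12} z x^{-\frac12})\|$ and $x^{\frac12}(x^{-\frac12} z x^{-\frac12})^{\frac12} x^{\frac12}$; since $\mathcal{P}(r)$ is a Hadamard manifold, the midpoint in (2) is unique. A sketch (helper names are ours):

```python
import numpy as np

def spd_fun(x, f):
    """Apply a scalar function to an SPD matrix via its eigendecomposition."""
    lam, V = np.linalg.eigh(x)
    return (V * f(lam)) @ V.T

def spd_dist(x, z):
    """Affine-invariant geodesic distance on P(r)."""
    xi = spd_fun(x, lambda l: 1.0 / np.sqrt(l))     # x^(-1/2)
    w = spd_fun(xi @ z @ xi, np.log)                # log(x^(-1/2) z x^(-1/2))
    return np.linalg.norm(w)

def spd_mid(x, z):
    """Geodesic midpoint of x and z on P(r)."""
    xs = spd_fun(x, np.sqrt)
    xi = spd_fun(x, lambda l: 1.0 / np.sqrt(l))
    return xs @ spd_fun(xi @ z @ xi, np.sqrt) @ xs

def spd_d2(x, y, z):
    """Second order difference (2) on P(r): distance of y to the unique midpoint."""
    return spd_dist(spd_mid(x, z), y)
```

As a sanity check, the midpoint of commuting matrices reduces to the entrywise geometric mean of the eigenvalues.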

Let $x, z \in \mathcal{P}(r)$ and let the matrix $v$ have the eigenvalues $\lambda_i$ with a corresponding orthonormal basis of eigenvectors $v_i$ in $\mathbb{R}^r$, i.e.,

$v = \sum_{i=1}^r \lambda_i v_i v_i^{\mathrm{T}}.$  (36)

We will use a more appropriate index system for the frame (12), namely

$\mathcal{I} \coloneqq \{(i,j) : i = 1, \dots, r;\ j = i, \dots, r\}.$

Then the matrices

$\xi_{ij} \coloneqq \begin{cases} \frac12\bigl(v_i v_j^{\mathrm{T}} + v_j v_i^{\mathrm{T}}\bigr), & \text{if } i = j,\\[1ex] \frac{1}{\sqrt2}\bigl(v_i v_j^{\mathrm{T}} + v_j v_i^{\mathrm{T}}\bigr), & \text{if } i \neq j, \end{cases} \qquad (i,j) \in \mathcal{I},$  (37)

form an orthonormal basis of $\mathrm{Sym}(r)$ with respect to the inner product (34).

In other words, we will deal with the parallel transported frame $\Xi_{ij}(t)$, $(i,j) \in \mathcal{I}$, of (37) instead of $\Xi_k(t)$, $k = 1, \dots, n$. To diagonalize the operator (28) at $x$ we use that the Riemannian curvature tensor for $\mathcal{P}(r)$ has the form

$R(\Theta, \Xi)\Upsilon = -\tfrac14\, x^{\frac12} \bigl[ \bigl[ x^{-\frac12}\Theta x^{-\frac12},\, x^{-\frac12}\Xi x^{-\frac12} \bigr],\, x^{-\frac12}\Upsilon x^{-\frac12} \bigr]\, x^{\frac12}$  (38)

with the Lie bracket $[A,B] \coloneqq AB - BA$ of matrices. Then

$R(\Theta, \dot\gamma)\dot\gamma = -\tfrac14\, x^{\frac12} \bigl[ \bigl[ x^{-\frac12}\Theta x^{-\frac12},\, x^{-\frac12}\dot\gamma x^{-\frac12} \bigr],\, x^{-\frac12}\dot\gamma x^{-\frac12} \bigr]\, x^{\frac12}$

and for $\Theta = \theta \in T_x\mathcal{P}(r)$ with $\dot\gamma(0) = \upsilon$ the right-hand side becomes

$T(\theta) = -\tfrac14\, x^{\frac12} \bigl[ \bigl[ x^{-\frac12}\theta x^{-\frac12},\, x^{-\frac12}\upsilon x^{-\frac12} \bigr],\, x^{-\frac12}\upsilon x^{-\frac12} \bigr]\, x^{\frac12}$  (39)

$\phantom{T(\theta)} = -\tfrac14\, x^{\frac12} \bigl( w b^2 - 2\, b w b + b^2 w \bigr)\, x^{\frac12},$  (40)

where $w \coloneqq x^{-\frac12}\theta x^{-\frac12}$ and $b \coloneqq x^{-\frac12}\upsilon x^{-\frac12}$. Expanding