
# Quantum Information Geometry in the Space of Measurements

Warner A. Miller, Department of Physics, Florida Atlantic University, Boca Raton, FL 33431
###### Abstract.

We introduce a new approach to evaluating entangled quantum networks using information geometry. Quantum computing is powerful because of the enhanced correlations from quantum entanglement. For example, larger entangled networks can enhance quantum key distribution (QKD). Each network we examine is an $n$-photon quantum state with a degree of entanglement. We analyze such a state within the space of measured data from repeated experiments made by observers over a set of identically-prepared quantum states -- a quantum state interrogation in the space of measurements. Each observer records a "1" if their detector triggers; otherwise they record a "0". This generates a string of 1's and 0's at each detector, and each observer can define a binary random variable from this sequence. We use a well-known information geometry-based measure of distance that applies to these binary strings of measurement outcomes [Rokhlin:1959, Rajski:1960, Zurek:1989], and we introduce a generalization of this length to area, volume and higher-dimensional volumes [QUIG:1990]. These geometric quantities are defined using the familiar Shannon expressions for joint and conditional entropy [Shannon:1948]. We apply our approach to three distinct tripartite quantum states: the $|GHZ\rangle$ state, the $|W\rangle$ state, and a separable state. We generalize a well-known information geometry analysis of a bipartite state to a tripartite state. This approach provides a novel way to characterize quantum states, and it may scale favorably with an increased number of photons.

## 1. Introduction

“No elementary quantum phenomenon is a phenomenon until it is brought to a close by an irreversible act of amplification.” This Niels Bohr-inspired quantum adage of John Archibald Wheeler, together with the Principle of Complementarity, is at the very heart of Wheeler’s It-from-Bit framework [Wheeler:1990]. In this manuscript, we explore entanglement networks within this information-centric approach. The quantum network we consider is a quantum state of $n$ photons with varying degrees of entanglement. Observers examine the space of measured data from repeated experiments on a set of identically-prepared quantum states. Each observer records a "1" if their detector triggers; otherwise a "0" is recorded. This generates a string of 1's and 0's at each detector, as illustrated in Fig. 1. The string of numbers can be represented by a binary random variable. The observers may have more than one detector, and therefore each observer may acquire more than one binary random variable. Once these random variables are formed, we can apply an information geometry measure of distance, area, volume and $n$-volumes to the network of observers [Rokhlin:1959, Rajski:1960, QUIG:1990, Zurek:1989]. These measures are defined using the familiar Shannon expressions for joint and conditional entropy [Shannon:1948]. This will be discussed in Sec. 2. In Sec. 4, we apply our approach to three distinct tripartite quantum states: the $|GHZ\rangle$ state, the $|W\rangle$ state, and a separable state. This novel approach provides us with a natural generalization of an information geometry model of Bell’s inequality for a bipartite singlet state to a similar analysis of tripartite states [Schumacher:1991]. We provide a brief review of this 2-photon geometry in Sec. 3 and its generalization in Sec. 5.

The power of quantum computing stems from a network of quantum entanglement, and larger networks can enhance quantum key distribution (QKD) [Jozsa:2003, Piani:2009, Regula:2016]. Consequently, a primary focus of this field is to find a scalable method to characterize a quantum state, e.g. a measure of its entanglement and of the quality of that entanglement. Such measures have been elusive [Regula:2016, Horodecki:2009]. Quantum state tomography is impractical, as it involves analyzing an exponentially large matrix. This difficulty is not so surprising, as the very power of quantum computing lies with this exponential scaling. We seek a scalable information geometry entanglement measure that is grounded solely upon the space of measured data from repeated experiments — a quantum state interrogation in the space of measurements. This It-from-Bit approach is based on a projection of the quantum world by measurement (i.e. an irreversible act of amplification) onto a classical world of bit strings of data (bits). After all, any quantum information processing system must “meet” the classical world of information to communicate its information content. It is this network of detectors that we analyze within the “It-from-Bit” framework. The usual conceptual ambiguities that may accompany the quantum measurement process are minimized within this approach; nevertheless, the non-classical and non-intuitive features of the quantum remain. The uniqueness of quantum phenomena is now encoded in the correlations of our observers' bits. The two questions we ask are: “Can information geometry provide us with a better understanding of the entanglement structure and function in quantum networks?”; and “Can this information geometry approach scale favorably with larger multipartite systems?”.

Scalability is the single salient sign signaling a superior quantum strategy. Quantum information processing is driven by exponential scaling, and this must be a concern for any approach to measure or characterize entanglement. For this reason, our approach will try to emulate the exponential scaling suggested by Vedral [Vedral:1997]. The Quantum Sanov's Theorem [Vedral:1997] shows that the fidelity of distinguishing two quantum density matrices, $\rho$ and $\sigma$, improves exponentially with the number of measurements, $N$,

 (1) $P_N \sim e^{-N S(\rho\|\sigma)}.$

Here, $S(\rho\|\sigma)$ is a relative entropy. We are animated by this “information thermodynamic” structure, although we are far from making progress on this front. We outline here an approach to answering our two questions that is based on quantum state interrogation in a space of measurements. We introduce potential measures that utilize quantum information geometry, and we plan to verify in future work whether our “large $N$” thermodynamic-like collection of measurement bits shares the structure of the exponential scaling suggested by Vedral [Vedral:1997].

## 2. Outline of Information Geometry: From Distances to Area and Volumes

Information geometry is defined through the entropy of a network of random variables. For example, if we examine the binary outcomes of Alice’s detector in Fig. 1, we can obtain the probability distribution for Alice’s detector, labeled $A$. In particular, we separately sum the number of times the detector fires and gives a "1", and the number of non-detections that she measured. We then divide each of these by the total number of measurements. In this way we can assign a binary random variable, $A$, to Alice’s 19 measurements (Fig. 1).

 (2) $A = \begin{cases} 0 & \text{with probability } 9/19 \\ 1 & \text{with probability } 10/19. \end{cases}$
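This tally can be sketched in a few lines of code. The 19-shot record below is hypothetical, chosen only to reproduce the 9/19 and 10/19 frequencies of Eq. 2:

```python
from collections import Counter

def empirical_distribution(bits):
    """Estimate p(0) and p(1) from a detector's binary record."""
    counts = Counter(bits)
    n = len(bits)
    return {outcome: counts[outcome] / n for outcome in (0, 1)}

# Hypothetical 19-shot record with 9 non-detections and 10 detections,
# matching the 9/19 and 10/19 frequencies of Eq. 2.
alice_bits = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0]
p_A = empirical_distribution(alice_bits)
print(p_A[0], p_A[1])  # 9/19 and 10/19
```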

Whereas probability measures uncertainty about the occurrence of a single event, entropy provides a measure of the uncertainty of a collection of events. If $X_i$ is an $s$-state random variable, then

 (3) $(\text{Entropy of } X_i) = H_{X_i} := -\sum_{x_i=1}^{s} p(x_i)\log p(x_i).$

Here $p(x_i)$ is the probability that the random variable $X_i$ has the value $x_i$. In this manuscript, we use only binary random variables ($s=2$) and logarithms of base 2. The entropy is largest when our uncertainty of the value of the random variable is complete (e.g. a uniform distribution of probabilities), and the entropy is zero if the random variable always takes on the same value,

 (4) $0 \le H_{X_i} \le \log(s).$
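A direct implementation of Eq. 3 (base-2 logarithms, with the usual convention $0\log 0 = 0$) makes the bounds of Eq. 4 easy to check numerically:

```python
import math

def shannon_entropy(probs):
    """Eq. 3 in bits (log base 2), with the convention 0*log(0) = 0."""
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    return h + 0.0  # normalize -0.0 to 0.0 for the deterministic case

print(shannon_entropy([0.5, 0.5]))    # 1.0: maximal for a binary variable
print(shannon_entropy([1.0, 0.0]))    # 0.0: no uncertainty
print(shannon_entropy([0.25] * 4))    # 2.0 = log2(4), the bound of Eq. 4
```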

In this sense, entropy is a measure of our ignorance. We will make use of the joint entropy and conditional entropy over an ensemble of random variables. The joint entropy is defined over the joint probability distribution,

 (5) $H_{ABC} = -\sum_{i,j,k} p(a_i,b_j,c_k)\log p(a_i,b_j,c_k),$

and the conditional entropy is defined over conditional probability distributions,

 (6) $H_{A|B} = -\sum_{i,j} p(a_i,b_j)\log p(a_i|b_j),$
 (7) $H_{A|BC} = -\sum_{i,j,k} p(a_i,b_j,c_k)\log p(a_i|b_j,c_k).$

Here the probability that $A=a_i$, $B=b_j$ and $C=c_k$ is the joint probability $p(a_i,b_j,c_k)$; the probability that $A=a_i$ given that we know a priori that $B=b_j$ and $C=c_k$ is the conditional probability $p(a_i|b_j,c_k)$; and these are related by

 (8) $p(a_i|b_j,c_k) = \frac{p(a_i,b_j,c_k)}{p(b_j,c_k)}.$

We use an extension of the Shannon-based information distance defined by Rokhlin [Rokhlin:1959] and Rajski [Rajski:1960],

 (9) $(\text{Length of Edge } AB) = D_{AB} := H_{A|B} + H_{B|A} = 2H_{AB} - H_A - H_B,$

to construct a geometric triangle from these measurement outcomes. Here in Eq. 9, $A$ and $B$ are binary random variables derived from a joint probability distribution $p(a_i,b_j)$. This information distance has desirable properties.

1. It is constructed to be symmetric, $D_{AB} = D_{BA}$.

2. It obeys the triangle inequality, $D_{AC} \le D_{AB} + D_{BC}$.

3. It is nonnegative, $D_{AB} \ge 0$, and equal to $0$ when $A = B$.

Furthermore, if $A$ and $B$ are uncorrelated with each other, then

 (10) $D_{AB} = 2(H_A + H_B) - H_A - H_B = H_A + H_B,$

and is bounded,

 (11) $0 \le D_{AB} \le H_A + H_B \le 2\log s.$
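The distance in Eq. 9 and its limiting cases in Eqs. 10-11 can be verified with a short sketch that takes a pairwise joint distribution as input:

```python
import math

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def info_distance(joint):
    """Eq. 9: D_AB = 2 H_AB - H_A - H_B, for a 2x2 joint table joint[a][b]."""
    h_ab = H([p for row in joint for p in row])
    h_a = H([sum(row) for row in joint])          # marginal over B
    h_b = H([sum(col) for col in zip(*joint)])    # marginal over A
    return 2 * h_ab - h_a - h_b

# Completely correlated fair bits: the minimum, D = 0.
print(info_distance([[0.5, 0.0], [0.0, 0.5]]))      # 0.0
# Independent fair bits: D = H_A + H_B = 2 (Eqs. 10-11).
print(info_distance([[0.25, 0.25], [0.25, 0.25]]))  # 2.0
```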

In addition to the three edge lengths of the triangle in Fig. 1, we can assign an information area that we developed earlier with Caves, Kheyfets, Lloyd, Miller, Schumacher and Wootters [QUIG:1990],

 (12) $A_{ABC} := H_{A|BC}H_{B|CA} + H_{B|CA}H_{C|AB} + H_{C|AB}H_{A|BC} = 3H_{ABC}^2 - 2(H_{AB}+H_{BC}+H_{AC})H_{ABC} + (H_{AC}H_{BC} + H_{AB}H_{AC} + H_{AB}H_{BC}).$

This can be generalized to higher-dimensional simplexes, e.g. the information volume for a tetrahedron can be defined as,

 (13) $V_{ABCD} := H_{A|BCD}H_{B|CDA}H_{C|DAB} + H_{B|CDA}H_{C|DAB}H_{D|ABC} + H_{C|DAB}H_{D|ABC}H_{A|BCD} + H_{D|ABC}H_{A|BCD}H_{B|CDA}.$

For classical probability distributions these formulas are well defined and have all of the requisite symmetries, positivity, bounds and structure usually required for such formulae. In particular,

 (14) $0 \le D_{AB} \le H_A + H_B \le 2\log s \quad (s\text{-state random variables}),$

and

 (15) $0 \le A_{ABC} \le 3(\log s)^2,$

with their minimum values taken when the random variables are completely correlated, and their maximum values obtained when the random variables are completely uncorrelated. We have shown that for classical probability distributions these formulas are well defined and have all requisite symmetries, positivity, bounds and structure required for such formulae [QUIG:1990]. However, for probability distributions based on measurements of quantum systems these assumptions must be weakened, and triangle inequalities may be violated. We will outline such an example in Sec. 3.
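The area of Eq. 12 and the bound of Eq. 15 can likewise be checked with a minimal sketch (entropies in bits, tripartite distribution given as a dictionary):

```python
import math

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def info_area(joint3):
    """Eq. 12 for a tripartite distribution {(a, b, c): p}, in bits."""
    h_abc = H(joint3.values())
    def pair_entropy(keep):
        acc = {}
        for outcome, p in joint3.items():
            key = tuple(outcome[i] for i in keep)
            acc[key] = acc.get(key, 0.0) + p
        return H(acc.values())
    h_ab, h_bc, h_ac = pair_entropy((0, 1)), pair_entropy((1, 2)), pair_entropy((0, 2))
    # Conditional entropies via the chain rule: H_A|BC = H_ABC - H_BC, etc.
    a, b, c = h_abc - h_bc, h_abc - h_ac, h_abc - h_ab
    return a * b + b * c + c * a

# Three copies of a single fair bit (completely correlated): area 0.
corr = {(0, 0, 0): 0.5, (1, 1, 1): 0.5}
# Three independent fair bits: area 3*(log2 2)^2 = 3, the bound of Eq. 15.
indep = {(i, j, k): 0.125 for i in (0, 1) for j in (0, 1) for k in (0, 1)}
print(info_area(corr), info_area(indep))  # 0.0 3.0
```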

We are confident that these novel formulae, Eqs. 12-13, can provide a new characterization of quantum states and their degree of entanglement. We may need far fewer measurements than one would think to distinguish these states. In particular, the Quantum Sanov's Theorem [Vedral:1997] shows that the fidelity of distinguishing two quantum states $\rho$ and $\sigma$ improves exponentially with the number of measurements, as seen in Eq. 1. This “information thermodynamic” feature may lend credence to our scalability assumption; however, this requires further investigation.

## 3. Quantum State Interrogation of the Singlet State: A Review

For a bipartite quantum system we can explore the information geometry through the distance formula given in Eq. 9. One can look for a relationship between the entanglement and its geometry. Based on this approach, Schumacher examined the relationship between the violation of the Bell inequality for a singlet state and the triangle inequality in information geometry [Schumacher:1991]. This is illustrated in Fig. 2. Here we review his results in detail, as this is the simplest non-trivial application of this formalism. We provide many identical copies of a singlet state,

 (16) $|S\rangle = \frac{1}{\sqrt{2}}(|\updownarrow\updownarrow\rangle + |\leftrightarrow\leftrightarrow\rangle),$

and two observers, Alice and Bob, as shown in Fig. 2. Alice receives the photon propagating to the left, and Bob receives the photon traveling to the right. Alice chooses randomly one of two detectors. Alice's first detector, $A_1$, is a linear polarizer rotated clockwise from the vertical state by an angle $\alpha_1$, and her second detector, $A_2$, is rotated by an angle $\alpha_2$. Similarly, Bob's first and second detectors, $B_1$ and $B_2$, are rotated by $\beta_1$ and $\beta_2$, respectively.

We perform this calculation for arbitrary angles $\alpha_1$, $\alpha_2$, $\beta_1$ and $\beta_2$ for a symmetric photon singlet state. Schumacher considered an anti-symmetric spin singlet state and used a particular set of angles. When we project the state in Eq. 16 on these four detectors, we find the eight probabilities for the measurement outcomes of $A_1$, $A_2$, $B_1$ and $B_2$, and they are all equally likely.

 (17) $p(A_1{=}0)=p(A_1{=}1)=p(A_2{=}0)=p(A_2{=}1)=p(B_1{=}0)=p(B_1{=}1)=p(B_2{=}0)=p(B_2{=}1)=\frac{1}{2}.$

We then calculate two consecutive local measurements on each pair of detectors in order to determine the four sets of conditional probabilities ($A_1$-$B_1$, $A_1$-$B_2$, $A_2$-$B_1$ and $A_2$-$B_2$). In particular, for the two detectors $A_1$ and $B_1$,

 (18) $p(A_1{=}0|B_1{=}0)=\cos^2(\beta_1-\alpha_1)$, $p(A_1{=}1|B_1{=}0)=\sin^2(\beta_1-\alpha_1)$, $p(A_1{=}0|B_1{=}1)=\sin^2(\beta_1-\alpha_1)$, $p(A_1{=}1|B_1{=}1)=\cos^2(\beta_1-\alpha_1)$, $p(B_1{=}0|A_1{=}0)=\cos^2(\beta_1-\alpha_1)$, $p(B_1{=}1|A_1{=}0)=\sin^2(\beta_1-\alpha_1)$, $p(B_1{=}0|A_1{=}1)=\sin^2(\beta_1-\alpha_1)$, $p(B_1{=}1|A_1{=}1)=\cos^2(\beta_1-\alpha_1)$.

The conditional probability expressions for $A_1$ and $B_2$ are the same as those in Eq. 18 except we must substitute $\beta_1 \to \beta_2$. For $A_2$ and $B_1$ we modify Eq. 18 with $\alpha_1 \to \alpha_2$, and for $A_2$ and $B_2$ we substitute both angles, i.e. $\alpha_1 \to \alpha_2$ and $\beta_1 \to \beta_2$. The joint probabilities can be recovered from Eqs. 17-18 using Eq. 37. In this example, each joint probability is just half the corresponding conditional probability.

We are now in a position to use Eq. 3 and Eq. 6 to calculate the entropy and the conditional entropy, as well as the information distance in Eq. 9. We find that the entropies are maximal, consistent with complete uncertainty in the outcome of each measurement by Alice or Bob. This is reflected in Eq. 17, where

 (19) $H_A = H_B = -\frac{1}{2}\log\frac{1}{2} - \frac{1}{2}\log\frac{1}{2} = 1.$

The joint entropies are more interesting, and can be obtained from Eq. 18 and Eq. 37,

 (20) $H_{A_1B_1} = 1 - \sin^2(\beta_1-\alpha_1)\log(\sin^2(\beta_1-\alpha_1)) - \cos^2(\beta_1-\alpha_1)\log(\cos^2(\beta_1-\alpha_1)),$
 (21) $H_{A_1B_2} = 1 - \sin^2(\beta_2-\alpha_1)\log(\sin^2(\beta_2-\alpha_1)) - \cos^2(\beta_2-\alpha_1)\log(\cos^2(\beta_2-\alpha_1)),$
 (22) $H_{A_2B_1} = 1 - \sin^2(\beta_1-\alpha_2)\log(\sin^2(\beta_1-\alpha_2)) - \cos^2(\beta_1-\alpha_2)\log(\cos^2(\beta_1-\alpha_2)),$
 (23) $H_{A_2B_2} = 1 - \sin^2(\beta_2-\alpha_2)\log(\sin^2(\beta_2-\alpha_2)) - \cos^2(\beta_2-\alpha_2)\log(\cos^2(\beta_2-\alpha_2)).$

We find the four lengths of the quadrilateral in the lower part of Fig. 2 using Eqs. 9, 19 and 20-23,

 $D_{A_1B_1} = -2\sin^2(\beta_1-\alpha_1)\log(\sin^2(\beta_1-\alpha_1)) - 2\cos^2(\beta_1-\alpha_1)\log(\cos^2(\beta_1-\alpha_1)),$
 $D_{A_1B_2} = -2\sin^2(\beta_2-\alpha_1)\log(\sin^2(\beta_2-\alpha_1)) - 2\cos^2(\beta_2-\alpha_1)\log(\cos^2(\beta_2-\alpha_1)),$
 $D_{A_2B_1} = -2\sin^2(\beta_1-\alpha_2)\log(\sin^2(\beta_1-\alpha_2)) - 2\cos^2(\beta_1-\alpha_2)\log(\cos^2(\beta_1-\alpha_2)),$
 $D_{A_2B_2} = -2\sin^2(\beta_2-\alpha_2)\log(\sin^2(\beta_2-\alpha_2)) - 2\cos^2(\beta_2-\alpha_2)\log(\cos^2(\beta_2-\alpha_2)).$

If the quadrilateral formed by the four detectors as illustrated in Fig. 2 were embedded in a Euclidean surface, then the direct route $D_{A_1B_2}$ should always be less than or equal to the indirect route $D_{A_1B_1} + D_{A_2B_1} + D_{A_2B_2}$,

 (24) $D_{A_1B_2} \le D_{A_1B_1} + D_{A_2B_1} + D_{A_2B_2}.$

However, Schumacher showed that this triangle inequality is violated for certain angles [Schumacher:1991]. For our symmetric 2-photon singlet state we obtain the same violation. In particular, we find a maximal violation within a symmetric sub-space where three of the detector pairs have the same difference in their relative angular settings, whilst the relative angular setting between the directly connected pair $A_1$ and $B_2$ is three times larger,

 (25) $\beta_1 - \alpha_1 = \alpha_2 - \beta_1 = \beta_2 - \alpha_2 \approx 0.15234,$

and therefore,

 (26) $\beta_2 - \alpha_1 = 3(\beta_1 - \alpha_1).$

This yields a violation of the triangle inequality, which Schumacher suggests is an information geometry realization of the violation of the Bell inequality for the maximally entangled singlet state $|S\rangle$. Here,

 (27) $D_{A_1B_2} \approx 1.42 > 0.95 \approx D_{A_1B_1} + D_{A_2B_1} + D_{A_2B_2}.$
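The violation is easy to check numerically. The sketch below uses the singlet conditional probabilities of Eq. 18, for which each edge length depends only on the relative polarizer angle, evaluated at the angles of Eqs. 25-26 (base-2 logarithms assumed):

```python
import math

def edge(delta):
    """Edge length D for a detector pair with relative angle delta
    (Eq. 9 combined with the joint entropies of Eqs. 20-23)."""
    s2, c2 = math.sin(delta) ** 2, math.cos(delta) ** 2
    return -sum(2 * p * math.log2(p) for p in (s2, c2) if p > 0)

theta = 0.15234              # Eq. 25
direct = edge(3 * theta)     # D_{A1 B2}: relative angle 3*theta (Eq. 26)
indirect = 3 * edge(theta)   # D_{A1 B1} + D_{A2 B1} + D_{A2 B2}
print(direct, indirect)      # roughly 1.42 vs 0.95
print(direct > indirect)     # True: the triangle inequality is violated
```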

We will explore this bipartite example further in future work. Nevertheless, this bipartite example of information geometry motivates us to begin to explore tripartite states. We will outline an analysis of two entangled tripartite states and a separable state in the next section.

## 4. Extending from Bipartite to Tripartite Quantum Networks

In this section we examine the information geometry of a tripartite state function — a generalization of the bipartite work of Schumacher that was discussed in the previous section [Schumacher:1991]. We will focus on the information geometry of three distinct states: one separable quantum state and two well-studied entangled states. In particular, we examine the following three states:

1. the $|GHZ\rangle$ state;

2. the $|W\rangle$ state; and

3. a separable state.

In the next three subsections, Secs. 4.1-4.3, we examine the geometry of Fig. 1 for each of these states. Once again, we will restrict ourselves to only linear polarization measurements on the equator of the Bloch sphere. In Sec. 5, we describe an octagonal network for our tripartite system that is the analogue of the quadrilateral in Fig. 2 for the bipartite system of Schumacher [Schumacher:1991].

### 4.1. Quantum State Interrogation: the |GHZ⟩ State

We analyze the information geometry of the triangle shown in Fig. 1 for the Greenberger, Horne & Zeilinger (GHZ) tripartite state,

 (28) $|\Psi\rangle \Longrightarrow |GHZ\rangle = \frac{1}{\sqrt{2}}(|\updownarrow\updownarrow\updownarrow\rangle + |\leftrightarrow\leftrightarrow\leftrightarrow\rangle).$

We will calculate the three edge lengths using the techniques introduced in Sec. 2 and applied in Sec. 3. We will also calculate the information area, Eq. 12, for this triangle. This is something we could not do with the bipartite system in Sec. 3.

We consider three observers: Alice ($A$), Bob ($B$) and Charlie ($C$), as shown in Fig. 3. $A$, $B$ and $C$ measure the GHZ state using their choice of detectors,

 (29) $M_A = (\cos(\alpha)\sigma_z + \sin(\alpha)\sigma_x) \otimes I \otimes I,$
 (30) $M_B = I \otimes (\cos(\beta)\sigma_z + \sin(\beta)\sigma_x) \otimes I,$
 (31) $M_C = I \otimes I \otimes (\cos(\gamma)\sigma_z + \sin(\gamma)\sigma_x),$

respectively. Here the $\sigma$'s are the usual Pauli matrices. The probability of measuring a photon at detector $A$ is

 (32) $p(A) = \mathrm{tr}(M_A^\dagger M_A\,\rho_{GHZ}),$

where $\rho_{GHZ}$ is the density matrix for the GHZ state. If the initial state was $|GHZ\rangle$, then after $A$'s measurement the state would be left in

 (33) $|GHZ_A\rangle = \frac{M_A|GHZ\rangle}{\sqrt{\langle GHZ|M_A^\dagger M_A|GHZ\rangle}}.$

For the remainder of this section we will set $\alpha = 0$.

The eight joint probabilities from the three measurements on this entangled state are:

 (34) $p(A{=}1,B{=}1,C{=}1)=\frac{1}{2}\cos^2(\beta)\cos^2(\gamma)$, $p(A{=}0,B{=}1,C{=}1)=\frac{1}{2}\sin^2(\beta)\sin^2(\gamma)$, $p(A{=}1,B{=}1,C{=}0)=\frac{1}{2}\cos^2(\beta)\sin^2(\gamma)$, $p(A{=}0,B{=}1,C{=}0)=\frac{1}{2}\sin^2(\beta)\cos^2(\gamma)$, $p(A{=}1,B{=}0,C{=}1)=\frac{1}{2}\sin^2(\beta)\cos^2(\gamma)$, $p(A{=}0,B{=}0,C{=}1)=\frac{1}{2}\cos^2(\beta)\sin^2(\gamma)$, $p(A{=}1,B{=}0,C{=}0)=\frac{1}{2}\sin^2(\beta)\sin^2(\gamma)$, $p(A{=}0,B{=}0,C{=}0)=\frac{1}{2}\cos^2(\beta)\cos^2(\gamma)$.
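As a sanity check, these probabilities can be reproduced numerically from the state vector. The sketch below is illustrative: it assumes $|\!\updownarrow\rangle \mapsto (1,0)$, $|\!\leftrightarrow\rangle \mapsto (0,1)$, $\alpha = 0$, models each detector as a linear polarizer whose "click" (outcome 1) projects onto $\cos\theta\,|\!\updownarrow\rangle + \sin\theta\,|\!\leftrightarrow\rangle$, and uses arbitrary test angles for $\beta$ and $\gamma$:

```python
import numpy as np

beta, gamma = 0.3, 0.7   # arbitrary test angles (illustrative choice)

ket_v = np.array([1.0, 0.0])   # vertical polarization
ket_h = np.array([0.0, 1.0])   # horizontal polarization
ghz = (np.kron(np.kron(ket_v, ket_v), ket_v)
       + np.kron(np.kron(ket_h, ket_h), ket_h)) / np.sqrt(2)

def outcome_state(theta, click):
    """Polarizer at angle theta: outcome 1 passes cos|v> + sin|h>,
    outcome 0 the orthogonal state."""
    if click == 1:
        return np.array([np.cos(theta), np.sin(theta)])
    return np.array([-np.sin(theta), np.cos(theta)])

def prob(a, b, c):
    bra = np.kron(np.kron(outcome_state(0.0, a),   # alpha = 0
                          outcome_state(beta, b)),
                  outcome_state(gamma, c))
    return float(np.dot(bra, ghz) ** 2)

sb, cb = np.sin(beta) ** 2, np.cos(beta) ** 2
sg, cg = np.sin(gamma) ** 2, np.cos(gamma) ** 2
assert np.isclose(prob(1, 1, 1), cb * cg / 2)   # matches Eq. 34
assert np.isclose(prob(0, 1, 1), sb * sg / 2)
assert np.isclose(prob(1, 0, 0), sb * sg / 2)
```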

Tracing these joint probabilities over each observer yields the pairwise joint probabilities,

 (35) $p(A{=}1,B{=}1)=\frac{1}{2}\cos^2(\beta)$, $p(A{=}1,B{=}0)=\frac{1}{2}\sin^2(\beta)$, $p(A{=}1,C{=}1)=\frac{1}{2}\cos^2(\gamma)$, $p(A{=}1,C{=}0)=\frac{1}{2}\sin^2(\gamma)$, $p(A{=}0,B{=}1)=\frac{1}{2}\sin^2(\beta)$, $p(A{=}0,B{=}0)=\frac{1}{2}\cos^2(\beta)$, $p(A{=}0,C{=}1)=\frac{1}{2}\sin^2(\gamma)$, $p(A{=}0,C{=}0)=\frac{1}{2}\cos^2(\gamma)$, $p(B{=}1,C{=}1)=\frac{1}{2}(\cos^2(\beta)\cos^2(\gamma)+\sin^2(\beta)\sin^2(\gamma))$, $p(B{=}1,C{=}0)=\frac{1}{2}(\cos^2(\beta)\sin^2(\gamma)+\sin^2(\beta)\cos^2(\gamma))$, $p(B{=}0,C{=}1)=\frac{1}{2}(\sin^2(\beta)\cos^2(\gamma)+\cos^2(\beta)\sin^2(\gamma))$, $p(B{=}0,C{=}0)=\frac{1}{2}(\cos^2(\beta)\cos^2(\gamma)+\sin^2(\beta)\sin^2(\gamma))$.

Finally, tracing the joint probabilities over all pairs of observers gives us the six probabilities for the measurement outcomes of $A$, $B$ and $C$:

 (36) $p(A{=}0)=1/2$, $p(A{=}1)=1/2$, $p(B{=}0)=1/2$, $p(B{=}1)=1/2$, $p(C{=}0)=1/2$, $p(C{=}1)=1/2$.

The pairwise conditional probabilities can be recovered from these pairwise joint probabilities since

 (37) $p(A{=}i|B{=}j) = \frac{p(A{=}i,B{=}j)}{p(B{=}j)}.$

However, since each single-observer probability is $1/2$, the joint probabilities for the $|GHZ\rangle$ state are just half the conditional probabilities.

We are now in a position to use Eqs. 3-6 to calculate the entropy and the conditional entropy, as well as the information distance in Eq. 9. The entropies of our observers are maximal,

 (38) $H_A = 1,$ (39) $H_B = 1,$ (40) $H_C = 1,$

and the joint entropies between pairs of our observers are,

 (41) $H_{AB} = 1 - \sin^2(\beta)\log(\sin^2(\beta)) - \cos^2(\beta)\log(\cos^2(\beta)),$
 $H_{AC} = 1 - \sin^2(\gamma)\log(\sin^2(\gamma)) - \cos^2(\gamma)\log(\cos^2(\gamma)),$
 $H_{BC} = 1 - (\cos^2(\beta)\cos^2(\gamma)+\sin^2(\beta)\sin^2(\gamma))\log(\cos^2(\beta)\cos^2(\gamma)+\sin^2(\beta)\sin^2(\gamma)) - (\sin^2(\beta)\cos^2(\gamma)+\cos^2(\beta)\sin^2(\gamma))\log(\sin^2(\beta)\cos^2(\gamma)+\cos^2(\beta)\sin^2(\gamma)).$

Finally, we use Eq. 37 to find the joint entropy of $A$, $B$ and $C$,

 (42) $H_{ABC} = 1 - \cos^2(\beta)\cos^2(\gamma)\log(\cos^2(\beta)\cos^2(\gamma)) - \cos^2(\beta)\sin^2(\gamma)\log(\cos^2(\beta)\sin^2(\gamma)) - \sin^2(\beta)\cos^2(\gamma)\log(\sin^2(\beta)\cos^2(\gamma)) - \sin^2(\beta)\sin^2(\gamma)\log(\sin^2(\beta)\sin^2(\gamma)).$

The three lengths of the edges of the triangle for this GHZ state are derived from Eq. 9, and are

 (43) $D_{AB} = -2\sin^2(\beta)\log(\sin^2(\beta)) - 2\cos^2(\beta)\log(\cos^2(\beta)),$
 (44) $D_{AC} = -2\sin^2(\gamma)\log(\sin^2(\gamma)) - 2\cos^2(\gamma)\log(\cos^2(\gamma)),$
 (45) $D_{BC} = -2(\cos^2(\beta)\cos^2(\gamma)+\sin^2(\beta)\sin^2(\gamma))\log(\cos^2(\beta)\cos^2(\gamma)+\sin^2(\beta)\sin^2(\gamma)) - 2(\sin^2(\beta)\cos^2(\gamma)+\cos^2(\beta)\sin^2(\gamma))\log(\sin^2(\beta)\cos^2(\gamma)+\cos^2(\beta)\sin^2(\gamma)).$

To determine the area of the triangle formed by $A$, $B$ and $C$, we use the definition in Eq. 12. We obtain all the conditional entropies we need for this calculation by using the chain rule for multiple random variables,

 (47) $H_{ABC} = H_A + H_{B|A} + H_{C|AB},$

to solve for the conditional entropies $H_{A|BC}$, $H_{B|CA}$ and $H_{C|AB}$. The information triangle area can then be obtained from Eq. 12 using our expressions for the joint entropies. Since the information area is an involved function of the two detector angles $\beta$ and $\gamma$, we will not display it explicitly. However, we evaluate it numerically and show our results in Fig. 3. It is interesting to us that, for this entangled state, there is a relatively large region where the Euclidean area is close to the information area. The information area is well behaved, with a local maximum at $\beta = \gamma = \pi/4$. At this particular maximum, the triangle is equilateral and is embeddable in the Euclidean plane. Its information area ($A_{ABC} = 3$) achieves the upper bound of the area formula in Eq. 15 and is different from the Euclidean area ($A_E = \sqrt{3}$). At this local maximum,

 (48) $D_{AB} = D_{AC} = D_{BC} = 2,$ (49) $A^{|GHZ\rangle}_{ABC} = 3,$

as illustrated in Fig. 4.
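A short numerical check of Eqs. 48-49 (a sketch assuming $\alpha = 0$ and entropies in bits, built directly from the joint distribution of Eq. 34):

```python
import math

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

beta = gamma = math.pi / 4   # the symmetric local maximum

sb, cb = math.sin(beta) ** 2, math.cos(beta) ** 2
sg, cg = math.sin(gamma) ** 2, math.cos(gamma) ** 2
# Joint distribution of Eq. 34 (alpha = 0).
joint = {
    (1, 1, 1): cb * cg / 2, (0, 1, 1): sb * sg / 2,
    (1, 1, 0): cb * sg / 2, (0, 1, 0): sb * cg / 2,
    (1, 0, 1): sb * cg / 2, (0, 0, 1): cb * sg / 2,
    (1, 0, 0): sb * sg / 2, (0, 0, 0): cb * cg / 2,
}

def marginal_entropy(keep):
    acc = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in keep)
        acc[key] = acc.get(key, 0.0) + p
    return H(acc.values())

h_abc = H(joint.values())
h_ab, h_bc, h_ac = (marginal_entropy(k) for k in ((0, 1), (1, 2), (0, 2)))
h_a, h_b, h_c = (marginal_entropy((i,)) for i in (0, 1, 2))

d_ab = 2 * h_ab - h_a - h_b
d_ac = 2 * h_ac - h_a - h_c
d_bc = 2 * h_bc - h_b - h_c
x, y, z = h_abc - h_bc, h_abc - h_ac, h_abc - h_ab  # H_A|BC, H_B|CA, H_C|AB
area = x * y + y * z + z * x
print(d_ab, d_ac, d_bc, area)   # edges ~2 and area ~3, as in Eqs. 48-49
```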

The Euclidean area plotted in the middle box of Fig. 4 is defined using Heron's formula,

 (50) $A_E = \frac{1}{4}\sqrt{(D_{AB}+D_{AC}-D_{BC})(D_{AB}-D_{AC}+D_{BC})(-D_{AB}+D_{AC}+D_{BC})(D_{AB}+D_{AC}+D_{BC})}.$

The missing sections of the domain are where the triangle inequality is violated. Eq. 50 is ideally suited to detect triangle inequality violations, as its radicand becomes negative there. Perhaps this is not so surprising, since the three-tangle obtains its maximum permitted value of unity for the $|GHZ\rangle$ state [Coffman:2000]. The tangle is the square of the concurrence. In the next section we will look at the $|W\rangle$ state, whose three-tangle vanishes but whose pairwise tangle is maximal and greater than the pairwise tangle for the $|GHZ\rangle$ state [Briegel:2001].

### 4.2. Quantum State Interrogation: the |W⟩ State

Following the previous sections, we briefly outline the information geometry of the triangle shown in Fig. 1 for the $|W\rangle$ state,

 (51) $|\Psi\rangle \Longrightarrow |W\rangle = \frac{1}{\sqrt{3}}(|\updownarrow\leftrightarrow\leftrightarrow\rangle + |\leftrightarrow\updownarrow\leftrightarrow\rangle + |\leftrightarrow\leftrightarrow\updownarrow\rangle).$

We will calculate the three edge lengths using the techniques introduced in Sec. 2 and applied in Sec. 3. We will also calculate the information area, Eq. 12, for this triangle.

Again we consider the same three observers: Alice ($A$), Bob ($B$) and Charlie ($C$). $A$, $B$ and $C$ measure the $|W\rangle$ state with the measurement operators given in Eqs. 29-31. We again set $\alpha = 0$ for the remainder of the section.

The eight joint probabilities from the three measurements on this entangled state are:

 (52) $p(A{=}1,B{=}1,C{=}1)=\frac{1}{3}\sin^2(\beta)\sin^2(\gamma)$, $p(A{=}0,B{=}1,C{=}1)=\frac{1}{3}\sin^2(\beta+\gamma)$, $p(A{=}1,B{=}1,C{=}0)=\frac{1}{3}\sin^2(\beta)\cos^2(\gamma)$, $p(A{=}0,B{=}1,C{=}0)=\frac{1}{3}\cos^2(\beta+\gamma)$, $p(A{=}1,B{=}0,C{=}1)=\frac{1}{3}\cos^2(\beta)\sin^2(\gamma)$, $p(A{=}0,B{=}0,C{=}1)=\frac{1}{3}\cos^2(\beta+\gamma)$, $p(A{=}1,B{=}0,C{=}0)=\frac{1}{3}\cos^2(\beta)\cos^2(\gamma)$, $p(A{=}0,B{=}0,C{=}0)=\frac{1}{3}\sin^2(\beta+\gamma)$.
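These probabilities can also be reproduced from the state vector, in the same hedged setup used for the GHZ check: $|\!\updownarrow\rangle \mapsto (1,0)$, $|\!\leftrightarrow\rangle \mapsto (0,1)$, $\alpha = 0$, a "click" projecting onto $\cos\theta\,|\!\updownarrow\rangle + \sin\theta\,|\!\leftrightarrow\rangle$, and arbitrary test angles:

```python
import numpy as np

beta, gamma = 0.4, 1.1   # arbitrary test angles (illustrative choice)

ket_v = np.array([1.0, 0.0])   # vertical polarization
ket_h = np.array([0.0, 1.0])   # horizontal polarization

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

w = (kron3(ket_v, ket_h, ket_h) + kron3(ket_h, ket_v, ket_h)
     + kron3(ket_h, ket_h, ket_v)) / np.sqrt(3)

def outcome_state(theta, click):
    """Polarizer at angle theta: outcome 1 passes cos|v> + sin|h>."""
    if click == 1:
        return np.array([np.cos(theta), np.sin(theta)])
    return np.array([-np.sin(theta), np.cos(theta)])

def prob(a, b, c):
    bra = kron3(outcome_state(0.0, a),   # alpha = 0
                outcome_state(beta, b), outcome_state(gamma, c))
    return float(np.dot(bra, w) ** 2)

sb, sg = np.sin(beta) ** 2, np.sin(gamma) ** 2
assert np.isclose(prob(1, 1, 1), sb * sg / 3)                    # Eq. 52
assert np.isclose(prob(0, 1, 1), np.sin(beta + gamma) ** 2 / 3)  # Eq. 52
```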

Tracing these joint probabilities over each observer yields the pairwise joint probabilities,

 (53) $p(A{=}1,B{=}1)=\frac{1}{3}\sin^2(\beta)$, $p(A{=}1,B{=}0)=\frac{1}{3}\cos^2(\beta)$, $p(A{=}1,C{=}1)=\frac{1}{3}\sin^2(\gamma)$, $p(A{=}1,C{=}0)=\frac{1}{3}\cos^2(\gamma)$, $p(A{=}0,B{=}1)=\frac{1}{3}$, $p(A{=}0,B{=}0)=\frac{1}{3}$, $p(A{=}0,C{=}1)=\frac{1}{3}$, $p(A{=}0,C{=}0)=\frac{1}{3}$, $p(B{=}1,C{=}1)=\frac{1}{3}(\sin^2(\beta)\sin^2(\gamma)+\sin^2(\beta+\gamma))$, $p(B{=}1,C{=}0)=\frac{1}{3}(\sin^2(\beta)\cos^2(\gamma)+\cos^2(\beta+\gamma))$, $p(B{=}0,C{=}1)=\frac{1}{3}(\cos^2(\beta)\sin^2(\gamma)+\cos^2(\beta+\gamma))$, $p(B{=}0,C{=}0)=\frac{1}{3}(\cos^2(\beta)\cos^2(\gamma)+\sin^2(\beta+\gamma))$.

Finally, tracing the joint probability over all pairs of observers gives us the six probabilities for the measurement outcomes of , and ,

 (54) $p(A{=}0)=2/3$, $p(A{=}1)=1/3$, $p(B{=}0)=2/3$, $p(B{=}1)=1/3$, $p(C{=}0)=2/3$, $p(C{=}1)=1/3$.

The pairwise conditional probabilities can be recovered from these pairwise joint probabilities using Eq. 37.

We are now in a position to use Eqs. 3-6 to calculate the entropy and the conditional entropy, as well as the information distance in Eq. 9. The entropies of our observers are all equal,

 (55) $H_A = H_B = H_C = \log(3) - \frac{2}{3}.$

The joint entropies between pairs of our observers are,

 (56) $H_{AB} = \log(3) - \frac{1}{3}\sin^2(\beta)\log(\sin^2(\beta)) - \frac{1}{3}\cos^2(\beta)\log(\cos^2(\beta)),$
 $H_{AC} = \log(3) - \frac{1}{3}\sin^2(\gamma)\log(\sin^2(\gamma)) - \frac{1}{3}\cos^2(\gamma)\log(\cos^2(\gamma)),$
 $H_{BC} = \log(3) - \frac{1}{3}(\sin^2(\beta)\sin^2(\gamma)+\sin^2(\beta+\gamma))\log(\sin^2(\beta)\sin^2(\gamma)+\sin^2(\beta+\gamma)) - \frac{1}{3}(\sin^2(\beta)\cos^2(\gamma)+\cos^2(\beta+\gamma))\log(\sin^2(\beta)\cos^2(\gamma)+\cos^2(\beta+\gamma)) - \frac{1}{3}(\cos^2(\beta)\sin^2(\gamma)+\cos^2(\beta+\gamma))\log(\cos^2(\beta)\sin^2(\gamma)+\cos^2(\beta+\gamma)) - \frac{1}{3}(\cos^2(\beta)\cos^2(\gamma)+\sin^2(\beta+\gamma))\log(\cos^2(\beta)\cos^2(\gamma)+\sin^2(\beta+\gamma)).$

Finally, we use Eq. 37 to find the joint entropy of $A$, $B$ and $C$,

 (57) $H_{ABC} = \log(3) - \frac{1}{3}\sin^2(\beta)\sin^2(\gamma)\log(\sin^2(\beta)\sin^2(\gamma)) - \frac{1}{3}\sin^2(\beta)\cos^2(\gamma)\log(\sin^2(\beta)\cos^2(\gamma)) - \frac{1}{3}\cos^2(\beta)\sin^2(\gamma)\log(\cos^2(\beta)\sin^2(\gamma)) - \frac{1}{3}\cos^2(\beta)\cos^2(\gamma)\log(\cos^2(\beta)\cos^2(\gamma)) - \frac{2}{3}\sin^2(\beta+\gamma)\log(\sin^2(\beta+\gamma)) - \frac{2}{3}\cos^2(\beta+\gamma)\log(\cos^2(\beta+\gamma)).$