Riemannian geometry

# Riemannian geometry

Richard L. Bishop
July 17, 2019

Preface

These lecture notes are based on the course in Riemannian geometry at the University of Illinois over a period of many years. The material derives from the course at MIT developed by Professors Warren Ambrose and I. M. Singer and then reformulated in the book by Richard J. Crittenden and me, “Geometry of Manifolds”, Academic Press, 1964. That book was reprinted in 2000 in the AMS Chelsea series. These notes omit the parts on differentiable manifolds, Lie groups, and isometric imbeddings. The notes are not intended for individual self-study; instead they depend heavily on an instructor’s guidance and the use of numerous problems with a wide range of difficulty.

The geometric structure in this treatment emphasizes the use of the bundles of bases and frames and avoids the arbitrary coordinate expressions as much as possible. However, I have added some material of historical interest, the Taylor expansion of the metric in normal coordinates which goes back to Riemann. The curvature tensor was probably discovered by its appearance in this expansion.

There are some differences in names which I believe are a substantial improvement over the fashionable ones. What is usually called a “distribution” is called a “tangent subbundle”, or “subbundle” for short. The name “solder form” never made much sense to me and has been replaced by the descriptive term “universal cobasis”. The terms “first Bianchi identity” and “second Bianchi identity” are historically inaccurate and are replaced by “cyclic curvature identity” and “Bianchi identity” – Bianchi was too young to have anything to do with the first, and even labeling the second with his name is questionable since it appears in a book by him but was discovered by someone else (Ricci?).

The original proof of my Volume Theorem used Jacobi field comparisons and is not reproduced. Another informative approach is to use comparison theory for matrix Riccati equations and a discussion of how that works is included and used to prove the Rauch Comparison Theorem.

In July, 2013, I went through the whole file, correcting many typos, making minor additions, and, most importantly, redoing the index using the Latex option for that purpose.

Richard L. Bishop

University of Illinois at Urbana-Champaign

July, 2013.

## 1. Riemannian metrics

Riemannian geometry considers manifolds with the additional structure of a Riemannian metric, a positive definite symmetric tensor field. To a first order approximation this means that a Riemannian manifold is a Euclidean space: we can measure lengths of vectors and angles between them. Immediately we can define the length of curves by the usual integral, and then the distance between points comes from the greatest lower bound of lengths of curves. Thus, Riemannian manifolds become metric spaces in the topological sense.

Riemannian geometry is a subject of current mathematical research in itself. We will try to give some of the flavor of the questions being considered now and then in these notes. A Riemannian structure is also frequently used as a tool for the study of other properties of manifolds. For example, if we are given a second order linear elliptic partial differential operator, then the second-order coefficients of that operator form a Riemannian metric and it is natural to ask what the general properties of that metric tell us about the partial differential equations involving that operator. Conversely, on a Riemannian manifold there is singled out naturally such an operator (the Laplace-Beltrami operator), so that it makes sense, for example, to talk about solving the heat equation on a Riemannian manifold. The Riemannian metrics have nice properties not shared by just any topological metrics, so that in topological studies they are also used as a tool for the study of manifolds.

The generalization of Riemannian geometry to the case where the metric is not assumed to be positive definite, but merely nondegenerate, forms the basis for general relativity theory. We will not go very far in that direction, but some of the major theorems and concepts are identical in the generalization. We will be careful to point out which theorems we can prove in this more general setting. For a deeper study there is a fine book: O’Neill, Semi-Riemannian geometry, Academic Press, 1983. I recommend this book also for its concise summary of the theory of manifolds, tensors, and Riemannian geometry itself.

The first substantial question we take up is the existence of Riemannian metrics. It is interesting that we can immediately use Riemannian metrics as a tool to shed some light on the existence of semi-Riemannian metrics of nontrivial index.

###### Theorem 1.1 (Existence of Riemannian metrics).

On any smooth manifold there exists a Riemannian metric.

The key idea of the proof is that locally we always have Riemannian metrics carried over from the standard one on Cartesian space by coordinate mappings, and we can glue them together smoothly with a partition of unity. In the gluing process the property of being positive definite is preserved due to the convexity of the set of positive definite symmetric matrices. What happens for indefinite metrics? The set of nonsingular symmetric matrices of a given index is not convex, so that the existence proof breaks down. In fact, there is a condition on the manifold, which can be reduced to topological invariants, in order that a semi-Riemannian metric of index ν exist: there must be a subbundle of the tangent bundle of rank ν. When ν = 1 the structure is called a Lorentz structure; that is the case of interest in general relativity theory; the topological condition for a compact manifold to have a Lorentz structure is easily understood: the Euler characteristic must be 0.

The proof in the Lorentz case can be done by using the fact that for any simple curve there is a diffeomorphism which is the identity outside any given neighborhood of the curve and which moves one end of the curve to the other. In the compact case start with a vector field having discrete singularities. Then by choosing disjoint simple curves from these singularities to inside a fixed ball, we can obtain a diffeomorphism which moves all of them inside that ball. If the Euler characteristic is 0, then by the Hopf index theorem, the index of the vector field on the boundary of the ball is 0, so the vector field can be extended to a nonsingular vector field inside the ball.

Conversely, by the following Theorem 1.2, a Lorentz metric would give a nonsingular rank 1 subbundle (a line field). If that field is nonorientable, pass to the double covering for which the lift of it is orientable. Then there is a nonsingular vector field which is a global basis of the line field, so the Euler characteristic is 0.

In the noncompact case, take a countable exhaustion of the manifold by an increasing family of compact sets. Then the singularities of a vector field can be pushed outside each of the compact sets sequentially, leaving a nonsingular vector field on the whole in the limit. Thus, every noncompact (separable) manifold has a Lorentz structure.

###### Theorem 1.2 (Existence of semi-Riemannian metrics).

A smooth manifold M has a semi-Riemannian metric of index ν if and only if there is a subbundle of the tangent bundle of rank ν.

The idea of the proof is: the subbundle will consist of the directions in which the semi-Riemannian metric is negative definite. If we change the sign of the metric on the subbundle and leave it unchanged on the orthogonal complement, we will get a Riemannian metric. The construction goes both ways.

Although the idea for the proof of Theorem 1.2 is correct, there are some nontrivial technical difficulties to entertain us. One direction is relatively easy.

Proof of “if” part. If M has a smooth tangent subbundle V of rank ν, then M has a semi-Riemannian metric of index ν.

###### Definition 1.3.

A frame at a point of a semi-Riemannian manifold is a basis of the tangent space with respect to which the component matrix of the metric tensor is diagonal with −1’s followed by +1’s on the diagonal. A local frame field is a local basis of vector fields which is a frame at each point of its domain.

###### Lemma 1.4 (Technical Lemma 1).

Local frames exist in a neighborhood of every point.

TL 1 is important for other purposes than the proof of the theorem at hand. For the proof of TL 1 one modifies the Gram-Schmidt procedure, starting with a smooth local basis and shrinking the domain at each step if necessary in order to divide by the length for normalization.

###### Remark 1.5.

If we write the metric in terms of a local coframe ϵ^1, …, ϵ^n, then a local frame is exactly one for which the coframe of 1-forms satisfies

 g = −(ϵ^1)^2 − ⋯ − (ϵ^ν)^2 + (ϵ^{ν+1})^2 + ⋯ + (ϵ^n)^2.

The Gram-Schmidt procedure amounts to iterated completion of squares, viewing g as a homogeneous quadratic polynomial in the ϵ^i. The modifications needed to handle the negative signs are probably easier in this form.
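The modified Gram-Schmidt procedure can be sketched numerically. This is a minimal sketch under my own naming, assuming the metric is given by its Gram matrix G in some local basis and that no normalization step hits a null vector:

```python
import numpy as np

def indefinite_gram_schmidt(G, basis):
    """Orthonormalize the columns of `basis` with respect to the symmetric
    bilinear form with matrix G, dividing by sqrt(|<v,v>|) at each step.
    Returns (frame, signs); assumes no step produces a null vector."""
    inner = lambda u, v: u @ G @ v
    frame, signs = [], []
    for v in basis.T.copy():
        for e, s in zip(frame, signs):
            v = v - s * inner(v, e) * e        # subtract the signed projection
        q = inner(v, v)
        assert abs(q) > 1e-12, "null vector encountered; shrink the domain"
        frame.append(v / np.sqrt(abs(q)))
        signs.append(int(np.sign(q)))
    # reorder so the -1's come first, as in Definition 1.3
    order = np.argsort(signs)
    return np.array([frame[i] for i in order]).T, [signs[i] for i in order]

# Minkowski-like metric on R^3, index 1, and a generic starting basis
G = np.diag([-1.0, 1.0, 1.0])
E, signs = indefinite_gram_schmidt(G, np.eye(3) + 0.1*np.ones((3, 3)))
print(signs)                        # [-1, 1, 1]
print(np.round(E.T @ G @ E, 10))    # diagonal with -1, 1, 1
```

The `signs` list records the ±1 diagonal; sorting puts the −1’s first, matching the definition of a frame.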

###### Lemma 1.6 (Technical Lemma 2).

If V is a smooth tangent subbundle of rank ν and g is a Riemannian metric, then the tensor field A and the semi-Riemannian metric h given as follows are smooth:

 (1) Av = −v for v ∈ V,  Av = v for v ∈ V^⊥;
 h(v, w) = g(Av, w).

(Their expressions in terms of smooth local frames for g adapted to V are constant, hence smooth.)

Now the converse.

Proof of “only if” part. If there is a semi-Riemannian metric h of index ν, then there is a tangent subbundle of rank ν.

The outline of the proof goes as follows. Take a Riemannian metric g. Then g and h are related by a tensor field A as above. We know that A is symmetric with respect to g-frames, and has ν negative eigenvalues (counting multiplicities) at each point. Thus, the subspace spanned by the eigenvectors of these negative eigenvalues at each point is ν-dimensional. The claim is that those subspaces form a smooth subbundle, even though it may be impossible to choose the individual eigenvectors to form smooth vector fields.

###### Lemma 1.7 (Technical Lemma 3).

If A is a symmetric linear operator, smoothly dependent on coordinates x, and λ_0 is a simple eigenvalue of A at the origin, then λ_0 extends to a smooth simple eigenvalue function λ in a neighborhood of the origin having a smooth eigenvector field.

Let P(λ, x) = det(A(x) − λI). Use the implicit function theorem to solve P(λ, x) = 0, getting a smooth function λ(x). Then we can write (A − λI)·adj(A − λI) = P(λ, x)·I = 0, and any nonzero column of adj(A − λI) is an eigenvector, by the Cayley-Hamilton theorem.
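The adjugate-column trick in this proof can be checked numerically; the sketch below (function names are mine) takes a symmetric matrix with a simple smallest eigenvalue and verifies that a nonzero column of adj(A − λI) is indeed an eigenvector:

```python
import numpy as np

def adjugate(M):
    """Classical adjugate via cofactors (fine for small matrices)."""
    n = M.shape[0]
    C = np.zeros_like(M)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(M, i, 0), j, 1)
            C[i, j] = (-1)**(i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lam = np.linalg.eigvalsh(A)[0]       # smallest eigenvalue, simple here
B = adjugate(A - lam*np.eye(3))      # (A - lam I) B = det(A - lam I) I = 0
v = B[:, np.argmax(np.linalg.norm(B, axis=0))]   # a nonzero column
print(np.linalg.norm(A @ v - lam * v))   # ~0: v is an eigenvector
```

Since A − λI has rank n − 1 at a simple eigenvalue, the adjugate has rank 1 and every nonzero column works.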

###### Lemma 1.8 (Technical Lemma 4).

Suppose that w is a smooth field of p-vectors with decomposable, nonzero values. Then locally there are smooth vector fields X_1, …, X_p having wedge product equal to w.

For a (p − 1)-form θ, the interior product θ⌟w is always in the subspace carried by w. Choose p of these θ’s which give linearly independent interior products with w at one point; then they do so locally.

###### Lemma 1.9 (Technical Lemma 5).

If A is a symmetric linear operator of index ν, smoothly dependent on coordinates x, then the extension of A to a derivation of the Grassmann algebra has a unique minimum simple eigenvalue on the ν-vectors Λ^ν. The (smooth!) eigenvectors are decomposable.

###### Problem 1.10.

Generalize the result of TL’s 3, 4, 5: If there is a group of eigenvalues of A which always satisfy a < λ < b, while the remaining eigenvalues stay outside [a, b], then the subspace spanned by their eigenvectors is smooth.

(Consider the extension of A to a derivation of the Grassmann algebra, as in TL 5. Can a and b be continuous functions of x too?)

###### Problem 1.11.

Now order all of the eigenvalues of a symmetric smooth A(x), defining uniquely functions λ_1(x) ≤ ⋯ ≤ λ_n(x). Prove that the λ_i are continuous; on the subset where λ_i is a simple eigenvalue, λ_i is smooth and has locally smooth eigenvector fields.

###### Problem 1.12.

Construct an example

 A = ( f(x,y)  g(x,y)
       g(x,y)  h(x,y) )

for which λ_1 = λ_2 at the origin and neither the λ_i nor their eigenvector fields are smooth in a neighborhood of the origin.

###### Remark 1.13.

I have had to referee and reject two papers because the proofs were based on the assumption that eigenvector fields could be chosen smoothly. Take care!

Hard Problem. In Problem 1.12, can it be arranged so that there is no smooth eigenvector field on the set where the eigenvalues are distinct? If that set is simply connected, then there is a smooth eigenvector field; and in any case there is always a smooth subbundle of rank 1. However, in the non-simply-connected case, the subbundle may be disoriented in passing around some loop.
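A standard 2 × 2 family illustrating the disorientation phenomenon takes f = x, g = y, h = −x in the matrix of Problem 1.12: the eigenvalues ±(x² + y²)^{1/2} coincide at the origin and are not smooth there. The sketch below (names are mine) tracks the eigenvector of the larger eigenvalue continuously around the unit circle and finds it reversed on return:

```python
import numpy as np

def top_eigvec(x, y):
    """Unit eigenvector for the larger eigenvalue of [[x, y], [y, -x]]."""
    vals, vecs = np.linalg.eigh(np.array([[x, y], [y, -x]]))
    return vecs[:, 1]                   # eigenvalue +sqrt(x^2 + y^2)

# Follow the eigenvector around the unit circle, keeping it continuous.
ts = np.linspace(0.0, 2*np.pi, 400)
v = top_eigvec(1.0, 0.0)
start = v.copy()
for t in ts[1:]:
    w = top_eigvec(np.cos(t), np.sin(t))
    if np.dot(w, v) < 0:                # fix the sign to stay continuous
        w = -w
    v = w
print(np.dot(v, start))   # close to -1: the eigenvector comes back reversed
```

The eigenvector direction at angle t is (cos(t/2), sin(t/2)), so one loop rotates it by only half a turn.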

### 1.1. Finsler Metrics

Let L : TM → ℝ be a continuous function which is positive on nonzero vectors and is positive homogeneous of degree 1:

 for α ∈ ℝ and v ∈ TM we have L(αv) = |α| L(v).

Eventually we should also assume that the unit balls, that is, the subsets of the tangent spaces on which L ≤ 1, are convex, but that property is not required to give the initial facts we want to look at here. In fact, it is usually assumed that L^2 is smooth and that its restriction to each tangent space has positive definite second derivative matrix (the Hessian of L^2) with respect to linear coordinates on that tangent space. This implies that the unit balls are strictly convex and smooth.

A function L satisfying these conditions is called a Finsler metric on M. If g is a Riemannian metric on M, then there is a corresponding Finsler metric, given by the norm with respect to g: L(v) = g(v, v)^{1/2}. We use the letter L because it is treated like a Lagrangian in mechanics.

Finsler metrics were systematically studied by P. Finsler starting about 1918.

### 1.2. Length of Curves

For a piecewise smooth curve γ : [a, b] → M we define the length of γ to be

 |γ| = ∫_a^b L(γ′(t)) dt.

The reason for assuming that L is positive homogeneous is that this makes the length of a curve independent of its parametrization. This follows easily from the change of variable formula for integrals.
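Parametrization independence can be seen numerically. The sketch below (names are mine) uses the Euclidean norm as the Finsler metric L and approximates the length integral by a midpoint rule; the unit circle and a reparametrization of it get the same length:

```python
import numpy as np

def length(curve, a, b, n=20000):
    """Midpoint-rule approximation of |gamma| = integral of L(gamma'(t)) dt,
    with L the Euclidean norm (the Finsler metric of the flat metric)."""
    ts = np.linspace(a, b, n + 1)
    mids = 0.5 * (ts[:-1] + ts[1:])
    h = (b - a) / n
    dgamma = (curve(mids + h/2) - curve(mids - h/2)) / h   # central difference
    return np.sum(np.linalg.norm(dgamma, axis=0)) * h

gamma = lambda t: np.array([np.cos(t), np.sin(t)])   # unit circle, unit speed
sigma = lambda s: gamma(s**2)                        # same circle, reparametrized
print(length(gamma, 0.0, 2*np.pi))           # ~ 6.2832 = 2*pi
print(length(sigma, 0.0, np.sqrt(2*np.pi)))  # same value
```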

Length is clearly additive with respect to chaining of curves end-to-end. Not every Finsler metric comes from a Riemannian metric. The condition for that to be true is the parallelogram law, well-known to functional analysts:

###### Theorem 1.14 (Characterization of Riemannian Finsler Metrics).

A Finsler metric is the Finsler metric of a Riemannian metric g if and only if it satisfies the parallelogram law:

 L^2(v + w) + L^2(v − w) = 2L^2(v) + 2L^2(w),

for all pairs of tangent vectors at all points of M. When this law is satisfied, the Riemannian metric g can be recovered from L by the polarization identity:

 2g(v, w) = L^2(v + w) − L^2(v) − L^2(w).
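A quick numerical check (the vectors are chosen for illustration): the Euclidean norm satisfies the parallelogram law and polarization recovers the inner product, while the l^1 norm fails the law:

```python
import numpy as np

L2 = lambda v: np.dot(v, v)              # squared Euclidean norm
L1 = lambda v: np.sum(np.abs(v))**2      # squared l^1 (taxicab) norm

v, w = np.array([2.0, 0.0]), np.array([1.0, 1.0])

# Parallelogram law: holds for the Euclidean norm, fails for l^1.
print(L2(v + w) + L2(v - w) - 2*L2(v) - 2*L2(w))   # 0.0
print(L1(v + w) + L1(v - w) - 2*L1(v) - 2*L1(w))   # 4.0, not zero

# Polarization: 2 g(v, w) = L^2(v + w) - L^2(v) - L^2(w)
print(0.5*(L2(v + w) - L2(v) - L2(w)), np.dot(v, w))   # 2.0 2.0
```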

### 1.3. Distance

When we have a notion of lengths of curves satisfying the additivity with respect to chaining curves end-to-end, as we do when we have a Finsler metric, then we can define the intrinsic metric (the word metric is used here as it is in topology, a distance function) by specifying the distance from p to q to be

 d(p, q) = inf{ |γ| : γ is a curve from p to q }.

Generally this function is only a semi-metric, in that we could have d(p, q) = 0 for p ≠ q. The symmetry and the triangle inequality are rather easy consequences of the definition, but the nondegeneracy in the case of Finsler lengths of curves is nontrivial.

###### Theorem 1.15 (Topological Equivalence Theorem).

If d is the intrinsic metric coming from a Finsler metric on M, then d is a topological metric on M and the topology given by d is the same as the manifold topology.

If M is not connected, then the definition gives d(p, q) = ∞ when p and q are not in the same connected component. We simply allow such a value, for it is a reasonable extension of the notion of a topological metric.

###### Lemma 1.16 (Nondegeneracy Lemma).

If φ : U → ℝ^n is a coordinate map and K is a compact subset of U, then there are positive constants A and B such that for every curve γ in K,

 |γ| ≥ A |φ∘γ|,  |φ∘γ| ≥ B |γ|.

Here |φ∘γ| is the standard Euclidean length of φ∘γ.

The key step in the proof is to observe that the union of the unit spheres (with respect to the Euclidean coordinate metric) at points of K forms a compact subset of the tangent bundle TM. Since L is positive on nonzero vectors and continuous, we have a positive minimum and a finite maximum for L on that compact set.

The result breaks down on infinite-dimensional manifolds modeled on a Banach space, because there the set of unit vectors will not be compact. So in that case the inequalities of the lemma must be taken as a local hypothesis on L.

The nondegeneracy of d uses the lemma in an obvious way, although there is a subtlety that could be overlooked: the nondegeneracy of the intrinsic metric on ℝ^n defined by the standard Euclidean metric must be proved independently.

Aside from the Hausdorff separation axiom, the topology of a manifold is determined as a local property by the coordinate maps on compact subsets K. Since the nondegeneracy lemma tells us that there is a nesting of d-balls and coordinate-balls, the topologies must coincide.

###### Problem 1.17.

Prove that the intrinsic metric on ℝ^n defined by the standard Euclidean metric is nondegenerate, and, in fact, coincides with the usual distance formula.

Hint: For a curve from p to q, split the tangent vector into components parallel to and perpendicular to q − p. The integral of the parallel component is always at least the usual distance.

### 1.4. Length of a curve in a metric space

If we have a topological metric space X with distance function d and a curve γ : [a, b] → X, then the length of γ is

 |γ| = sup ∑_i d(γ(t_i), γ(t_{i+1})),

where the supremum is taken over all partitions a = t_0 < t_1 < ⋯ < t_k = b of the interval [a, b]. Due to the triangle inequality, the insertion of another point into the partition does not decrease the sum, so that in particular the length of a curve from p to q is at least d(p, q). If the length is finite, we call the curve rectifiable. It is obvious from the definition that the length of a curve is independent of its parametrization and satisfies the additivity property with respect to chaining curves. Following the definition of the length of a curve, we can then define the intrinsic metric generated by d, as we defined the intrinsic metric for a Finsler space. Clearly the intrinsic metric is at least as great as the metric we started with.
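The supremum over partitions can be approximated by refining uniform partitions; by the triangle-inequality remark the sums only increase. A sketch (names are mine) for the unit circle with the Euclidean distance:

```python
import numpy as np

def polygonal_length(curve, a, b, n):
    """Partition sum  sum_i d(gamma(t_i), gamma(t_{i+1}))  over a uniform
    partition with n subintervals; refining the partition never decreases it."""
    ts = np.linspace(a, b, n + 1)
    pts = np.array([curve(t) for t in ts])
    return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

gamma = lambda t: np.array([np.cos(t), np.sin(t)])   # unit circle
sums = [polygonal_length(gamma, 0.0, 2*np.pi, n) for n in (4, 16, 64, 256)]
print(sums)   # increasing toward the length 2*pi = 6.28318...
```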

## 2. Minimizers

A curve whose length equals the intrinsic distance between its endpoints is called a minimizer or a shortest path. In Riemannian manifolds they are often called geodesics, but we will avoid that term for a while because there is another definition for geodesics. One of our major tasks will be to establish the relation between minimizers and geodesics. They will turn out to be not quite the same: geodesics turn out to be only locally minimizing, and furthermore, there are technical reasons why we should require that geodesics have a special parametrization, a constant-speed parametrization.

### 2.1. Existence of Minimizers

We can establish the existence of minimizers for a Finsler metric within a connected compact set by using convergence techniques developed by nineteenth century mathematicians to show the existence of solutions of ordinary differential equations by the Euler method. The main tool is the Arzelà-Ascoli Theorem. Cesare Arzelà (1847-1912) was a professor at Bologna, who established the case where the domain is a closed interval, and Giulio Ascoli (1843-1896) was a professor at Milan, who formulated the definition of uniform equicontinuity.

###### Definition 2.1.

A collection F of maps from a metric space X to a metric space Y is uniformly equicontinuous if for every ε > 0, there is δ > 0 such that for every f ∈ F and every x_1, x_2 ∈ X such that d(x_1, x_2) < δ we have d(f(x_1), f(x_2)) < ε.

The word “uniform” refers to the quantification over all pairs of points, just as in “uniform continuity”, while “equicontinuous” refers to the quantification over all members f of the family. The definition is only significant for infinite families of functions, since a finite family of uniformly continuous functions is always also equicontinuous. Moreover, the application is usually to the case where X is compact, so that uniform continuity, but not uniform equicontinuity, is automatic. If X and Y are subsets of Euclidean spaces and the members of the family have a uniform bound on their gradient vector lengths, then the family is uniformly equicontinuous. Even if those gradients exist only piecewise, this still works, which explains why the theorem below can be used to get existence of solutions by the Euler method.

###### Theorem 2.2 (Arzelà-Ascoli Theorem).

Let X be a compact metric space (so that X has a countable dense subset) and let Y be a compact metric space. Let F be a collection of continuous functions f : X → Y. Then the following properties are equivalent:

(a) F is uniformly equicontinuous.

(b) Every sequence in F has a subsequence which is uniformly convergent on X.

There is a proof for the case where and are subsets of Euclidean spaces given in R. G. Bartle, “The Elements of Real Analysis”, 2nd Edition, Wiley, page 189. No essential changes are needed for this abstraction to metric spaces.

###### Theorem 2.3 (Local Existence of minimizers in Finsler Spaces).

If M is a Finsler manifold, each point p ∈ M has nested neighborhoods U ⊂ V such that every pair of points in U can be joined by a minimizer which is contained in V.

To start the proof one takes V to be a compact coordinate ball about p, and U a smaller coordinate ball, so that, using the curve-length estimates from the Nondegeneracy Lemma, any curve which starts and ends in U cannot go outside V unless it has greater length than the longest coordinate straight line in U. Now for two points q and r in U we define a family F of curves parametrized on the unit interval (so that X of the theorem will be [0, 1] with the usual distance on the line). We require the length of each curve to be no more than the Finsler length of the coordinate straight line from q to r. We parametrize each curve so that it has constant “speed”, which together with the uniform bound on length, gives the uniform equicontinuity of F. We take Y to be the outer compact ball V.

By the definition of intrinsic distance there exists a sequence of curves from q to r for which the lengths converge to the infimum of lengths. If the coordinate straight line is already minimizing, then we have our desired minimizer. Otherwise the lengths will be eventually less than that of the straight line, and from there on the sequence will be in F. By the AA Theorem there must be a uniformly convergent subsequence. It is fairly easy to prove that the limit is a minimizer.

###### Remark 2.4.

The minimizers do not have to be unique, even locally. For an example consider the l^1 norm (the “taxicab metric”) or the l^∞ norm on ℝ^2 to define the Finsler metric. Generally the local uniqueness of minimizers can only be obtained by assuming that the unit tangent balls of the Finsler metric are strictly convex. We won’t do that part of the theory in such great generality, but we will obtain the local uniqueness in Riemannian manifolds by using differentiability and the calculus of variations.
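For the taxicab example, non-uniqueness is easy to check: every coordinate-monotone path between two points has the same l^1 length. A small sketch (names are mine):

```python
import numpy as np

def l1_length(pts):
    """Taxicab (l^1) length of the polygonal path through the given vertices."""
    return sum(float(np.sum(np.abs(np.array(q) - np.array(p))))
               for p, q in zip(pts, pts[1:]))

# Three different monotone paths from (0,0) to (1,1):
diagonal  = [(0, 0), (1, 1)]
corner    = [(0, 0), (1, 0), (1, 1)]
staircase = [(0, 0), (0.5, 0), (0.5, 0.5), (1, 0.5), (1, 1)]
lengths = [l1_length(p) for p in (diagonal, corner, staircase)]
print(lengths)   # [2.0, 2.0, 2.0] -- all three are minimizers
```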

The result on existence of minimizers locally can be abstracted a little more. Instead of using coordinate balls we can just assume that the space is locally compact. Thus, in a locally compact space with intrinsic metric there are always minimizers locally. The proof is essentially the same.

### 2.2. Products

If we have two metric spaces M and N, then the Cartesian product M × N has a metric d whose square is d^2 = d_M^2 + d_N^2. We use the same idea to get a Finsler metric or a Riemannian metric on the product when we are given those structures on the factors. When we pass to the intrinsic distance function of a Finsler product there is no surprise: the result is the product distance function.

###### Problem 2.5.

Prove that the projection into each factor of a minimizer is a minimizer. Conversely, if we handle the parametrizations correctly, then the product of two minimizers is again a minimizer.

## 3. Connections

We define an additional structure to a manifold structure, called a connection. It can be given either in terms of covariant derivative operators or in terms of a horizontal tangent subbundle on the bundle of bases. Both ways are important, so we will give both and establish the way of going back and forth. Eventually our goal is to show that on a semi-Riemannian manifold (including the Riemannian case) there is a canonical connection called the Levi-Civita connection.

### 3.1. Covariant Derivatives

For a connection in terms of covariant derivatives we give axioms for the operation (X, Y) → D_X Y, associating a vector field D_X Y to a pair of vector fields X, Y.

1. If X and Y are C^∞, then D_X Y is C^∞.

2. D_X Y is F-linear in X, where F denotes the ring of real-valued C^∞ functions on M. That is, D_{fX} Y = f D_X Y and D_{X+X′} Y = D_X Y + D_{X′} Y.

3. D_X is a derivation with respect to multiplication by elements of F. That is, D_X(fY) = (Xf) Y + f D_X Y and D_X(Y + Z) = D_X Y + D_X Z.

We say that D_X Y is the covariant derivative of Y with respect to X. It should be thought of as an extension of the defining operation of vector fields on functions, f → Xf, so as to operate on vector fields as well as functions.

### 3.2. Pointwise in X Property

Axiom 2 conveys the information that for a single tangent vector x ∈ M_p, we can define D_x Y. We extend x to a vector field X and prove, using 2, that (D_X Y)(p) is the same for all such extensions X.

### 3.3. Localization

If Y and Z coincide on an open set, then D_X Y and D_X Z coincide on that open set.

### 3.4. Basis Calculations

If B = (X_1, …, X_n) is a local basis of vector fields, we define 1-forms φ_{ij} locally by

 D_X X_j = ∑_{i=1}^n φ_{ij}(X) X_i.

These are called the connection 1-forms with respect to the local basis B. If we let (ω^i) be the dual basis of 1-forms to B, arrange them in a column ω, and let φ = (φ_{ij}), then we can specify the connection locally in terms of φ by

 D_X Y = B( X(ω(Y)) + φ(X) ω(Y) ).

### 3.5. Parallel Translation

If D_x Y = 0, we say that Y is infinitesimally parallel in the direction x. If γ is a curve and D_{γ′(t)} Y = 0 for all t, we say that Y is parallel along γ and that Y(γ(t)) is the parallel translate of Y(γ(a)) along γ.

###### Theorem 3.1 (Existence, Uniqueness, and Linearity of Parallel Translation).

For a given curve γ : [a, b] → M and a vector y ∈ M_{γ(a)}, there is a unique parallel translate of y along γ to γ(b). This operation of parallel translation along γ, M_{γ(a)} → M_{γ(b)}, is a linear transformation.

If γ lies in a local basis neighborhood of B, then the column of coefficients ω(Y∘γ) of Y along γ, when Y is parallel, satisfies a system of linear homogeneous differential equations:

 d ω(Y∘γ)/dt = −φ(γ′(t)) ω(Y∘γ).

Globally we chain together the local parallel translations, covering all of [a, b] in finitely many steps, using the fact that [a, b] is compact.
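The linear ODE above can be integrated numerically; since each integration step is linear in the unknown, parallel translation comes out linear, as Theorem 3.1 asserts. In the sketch below the matrix function F is a made-up stand-in for −φ(γ′(t)):

```python
import numpy as np

def transport(y0, F, a, b, n=1000):
    """RK4 integration of the linear system y' = F(t) y from t = a to t = b,
    modeling the parallel translation equation with F(t) = -phi(gamma'(t))."""
    y, t, h = np.array(y0, dtype=float), a, (b - a) / n
    for _ in range(n):
        k1 = F(t) @ y
        k2 = F(t + h/2) @ (y + h/2 * k1)
        k3 = F(t + h/2) @ (y + h/2 * k2)
        k4 = F(t + h) @ (y + h * k3)
        y, t = y + h/6 * (k1 + 2*k2 + 2*k3 + k4), t + h
    return y

# A made-up matrix of connection coefficients along a curve:
F = lambda t: np.array([[0.0, -np.cos(t)], [np.cos(t), 0.0]])

u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
lhs = transport(2*u + 3*v, F, 0.0, 1.0)
rhs = 2*transport(u, F, 0.0, 1.0) + 3*transport(v, F, 0.0, 1.0)
print(np.linalg.norm(lhs - rhs))   # ~ 0: parallel translation is linear
```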

### 3.6. Existence of Connections

It is trivial to check that for a local basis B we can set φ = 0 and obtain a connection in the local basis neighborhood. Then if {U_α} covers M, D^α is a connection on U_α, and {f_α} is a subordinate partition of unity (we do not even have to require that the f_α be nonnegative, only that ∑_α f_α = 1 and that the sum be locally finite), then the definition

 D_X Y = ∑_α f_α D^α_X Y

defines a connection globally on M.

### 3.7. An Affine Space

If D and D′ are connections, then for any t ∈ ℝ we have that tD + (1 − t)D′ is again a connection. As t runs through ℝ this gives a straight line in the collection of all connections on M. Moreover, D − D′ is F-linear in both of its arguments, and so defines a (1,2) tensor field B such that D_X Y − D′_X Y = B(X, Y). Conversely, D + B is a connection for any (1,2) tensor field B. We interpret this to say:

The set of all connections on M is an affine space for which the associated vector space can be identified with the space of all (1,2) tensor fields on M.

### 3.8. The Induced Connection on a Curve

If γ is a curve in a manifold M which has a connection D, then we can define what we call the induced connection on the pullback of the tangent bundle to the curve. This is a means of differentiating vector fields along the curve with respect to the velocity of the curve. Thus, for each parameter value t, Y(t) ∈ M_{γ(t)}, and D_{γ′} Y will be another field, like Y, along γ. There are various ways of formulating the definition, and there is even a general theory of pulling back connections along maps (see Bishop & Goldberg, pp. 220–224), but there is a simple method for curves as follows. Take a tangent space basis at some point of γ and parallel translate this basis along γ to get a parallel basis field (E_1, …, E_n) for vector fields along γ. Then we can express Y uniquely in terms of this basis field, Y = ∑_i f^i E_i, where the coefficient functions f^i are smooth functions of t. We define D_{γ′} Y = ∑_i (df^i/dt) E_i. From the viewpoint of the theory of pulling back connections it would be more appropriate to write D_{d/dt} Y instead of D_{γ′} Y. In fact, the Leibnitz rule for this connection on γ is

 D_{γ′}(fY) = (df/dt) Y + f D_{γ′} Y,

and we also have the strange result that even if γ′(t) = 0 it is possible to have (D_{γ′} Y)(t) ≠ 0. Even a field on a constant curve (which is just a curve in the tangent space of the constant value of the curve) can have nonzero covariant derivative along the curve.

### 3.9. Parallelizations

A manifold M is called parallelizable if the tangent bundle is trivial as a vector bundle: TM ≅ M × ℝ^n. For a given trivialization the vector fields X_1, …, X_n which correspond to the standard unit vectors in ℝ^n are called the corresponding parallelization of M. Conversely, a global basis of vector fields gives a trivialization of TM. If we set D_X X_i = 0 for all X and i, we get the connection of the parallelization. For this connection parallel translation depends only on the ends of the curve. The parallel fields are constants.

An even-dimensional sphere is not parallelizable.

Lie groups are parallelized by a basis of the left-invariant vector fields.

We shall see that for any manifold its bundle of bases and various frame bundles are parallelizable.

### 3.10. Torsion

If D is a connection, then

 T(X, Y) = D_X Y − D_Y X − [X, Y]

defines an F-linear, skew-symmetric function of pairs of vector fields, called the torsion of D. Hence T is a tensor field. For a local basis B with dual basis ω and connection form φ, the torsion has B-components

 Ω(X, Y) = dω(X, Y) + (φ ∧ ω)(X, Y).

The ℝ^n-valued 2-form Ω is called the local torsion 2-form and its defining equation

 Ω = dω + φ ∧ ω

is called the first structural equation.

A connection with T = 0 is said to be symmetric.

For any connection D, the connection D′ defined by D′_X Y = D_X Y − (1/2) T(X, Y) is always symmetric.

### 3.11. Curvature

For vector fields X, Y we define the curvature operator R_{XY}, mapping a third vector field to a fourth one, by

 R_{XY} = D_{[X,Y]} − D_X D_Y + D_Y D_X.

(Some authors define this to be the negative, −R_{XY}.) As a function of three vector fields, R_{XY}Z is F-linear in each and so defines a tensor field. However, the interpretation as a 2-form whose values are linear operators on the tangent space is the meaningful viewpoint. For a local basis B we have local curvature 2-forms Φ, with values which are n × n matrices:

 Φ = dφ + φ ∧ φ,

which is the second structural equation. The sign has been switched, so that the matrix of R_{XY} with respect to the basis B is −Φ(X, Y). The wedge product is a combination of matrix multiplication and wedge product of the matrix entries, just as it was in the first structural equation, so that (φ ∧ φ)(X, Y) = φ(X)φ(Y) − φ(Y)φ(X).
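The rule (φ ∧ φ)(X, Y) = φ(X)φ(Y) − φ(Y)φ(X) can be verified by expanding the entrywise wedge products; a numerical sketch (names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim = 3, 4
C = rng.standard_normal((dim, n, n))          # phi(v) = sum_k v_k C[k]
phi = lambda v: np.einsum('k,kij->ij', v, C)  # a matrix of 1-forms

X, Y = rng.standard_normal(dim), rng.standard_normal(dim)

# Entrywise: (phi ^ phi)_{ij} = sum_k phi_{ik} ^ phi_{kj}, where for scalar
# 1-forms (f ^ g)(X, Y) = f(X) g(Y) - f(Y) g(X).
entrywise = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        for k in range(n):
            entrywise[i, j] += (phi(X)[i, k]*phi(Y)[k, j]
                                - phi(Y)[i, k]*phi(X)[k, j])

matrixform = phi(X) @ phi(Y) - phi(Y) @ phi(X)
print(np.linalg.norm(entrywise - matrixform))   # 0 up to round-off
```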

###### Problem 3.2.

Calculate the torsion tensor for the connection of a parallelization, relating it to the brackets of the basis fields.

###### Problem 3.3.

Calculate the law of change of a connection: that is, if we have a local basis B and its connection form φ and another local basis B′ = Bg and its connection form φ′, find the expression for φ′ on the overlapping part of the local basis neighborhoods in terms of φ and g. In the case of coordinate local bases the matrix of functions g is a Jacobian matrix.

###### Problem 3.4.

Check that the axioms for a connection are satisfied for the connection specified by the partition of unity and local connections, in the proof of existence of connections.

###### Problem 3.5.

Verify that the definition of T leads to the local expression for Ω given by the first structural equation; that is, T and Ω are assumed to be related by T(X, Y) = B Ω(X, Y).

### 3.12. The Bundle of Bases

We let

 BM = {(p, x_1, …, x_n) : p ∈ M, (x_1, …, x_n) is a basis of M_p}.

This is called the bundle of bases of M, and we make it into a manifold of dimension n + n^2 as follows. Locally it will be a product manifold of the neighborhood of a local basis with the general linear group GL(n, ℝ) consisting of all nonsingular n × n matrices. Since the condition of nonsingularity is given by requiring the continuous function determinant to be nonzero, GL(n, ℝ) can be viewed as an open set in ℝ^{n²}, so that it gets a manifold structure from the single coordinate map. Then if q is in the local basis neighborhood and g = (g_{ij}) ∈ GL(n, ℝ), we make the element (q, g) of the product correspond to the basis (∑_i g_{i1} X_i(q), …, ∑_i g_{in} X_i(q)). It is routine to prove that if M has a C^∞ structure, then the structure defined on BM by using local bases is a manifold structure on BM.

The projection map π : BM → M given by π(p, x_1, …, x_n) = p is given locally by the product projection, so π is a smooth map. The fiber over p is π^{−1}(p), a submanifold diffeomorphic to GL(n, ℝ).

Each local basis (X_1, …, X_n) on U can be thought of as a smooth map B : U → BM, called a cross-section of BM over U. The composition π ∘ B is the identity map on U.

### 3.13. The Right Action of the General Linear Group

Each matrix g ∈ GL(n, ℝ) can be used as a change of basis matrix on every basis of every tangent space. This simultaneous change of all bases in the same way is a map R_g : BM → BM, given by R_g(p, x_1, …, x_n) = (p, ∑_i g_{i1} x_i, …, ∑_i g_{in} x_i). For b ∈ BM we will also write R_g(b) = bg. It is called the right action of g on BM. For two elements g, h ∈ GL(n, ℝ) we clearly have b(gh) = (bg)h, that is, R_{gh} = R_h ∘ R_g.

### 3.14. The Universal Dual 1-Forms

There is a column of 1-forms $\omega = (\omega^i)$ on $BM$, existing purely due to the nature of $BM$ itself, an embodiment of the idea of a dual basis of the basis of a vector space. These 1-forms are called the universal dual 1-forms, are denoted $\omega^1, \ldots, \omega^n$, and are defined by the equation:

$$\pi_*(v) = \sum_i \omega^i(v)\, x_i,$$

where $v$ is a tangent vector to $BM$ at the point $b = (p, x_1, \ldots, x_n)$. We can also use multiplication of the row of basis vectors by the column of values of the 1-forms to write the definition as $\pi_*(v) = x \cdot \omega(v)$. Thus, the projection of a tangent vector $v$ to $BM$ is referred to the basis at which $v$ lives, and the coefficients are the values of these canonical 1-forms on $v$. (The canonical 1-forms are usually called the solder forms of $BM$.)

It is a simple consequence of the definition of $\omega$ that if $u$ is a local basis on $U$, then the pullback $u^* \omega$ is the local dual basis of 1-forms. This justifies the name universal dual 1-forms for $\omega$.

We clearly have that $\pi \circ R_g = \pi$, so that

$$\pi_* \circ R_{g*}(v) = bg\, \omega(R_{g*}(v)) = b\, \omega(v),$$

where $v$ is a tangent at $b$. Rubbing out the “$b$” and “$v$” on both sides of the equation leaves us with an equation for the action of $R_g$ on $\omega$:

$$g \cdot R_g^* \omega = \omega, \quad \text{or} \quad R_g^* \omega = g^{-1} \omega.$$

### 3.15. The Vertical Subbundle

The tangent vectors to the fibers of $BM$, that is, the tangent vectors in the kernel of $\pi_*$, form a subbundle of $T(BM)$ of rank $n^2$. This is called the vertical subbundle of $T(BM)$, and is denoted by $V$.

### 3.16. Connections

A connection on BM is a specification of a complementary subbundle $H$ to $V$ which is smooth and invariant under all right action maps $R_{g*}$. The idea is that moving in the direction of $H$ on $BM$ represents a motion of a basis along a curve in $M$ which will be defined to be parallel translation of that basis along the curve. We can then parallel translate any tangent vector along the curve in $M$ by requiring that its coefficients with respect to the parallel basis field be constant. The invariance of $H$ under the right action is needed to make parallel translation of vectors be independent of the choice of initial basis.

### 3.17. Horizontal Lifts

Since $H_b$ is complementary to the kernel of $\pi_*$ at each point $b$ of $BM$, the restriction of $\pi_*$ to $H_b$ is a vector space isomorphism onto $M_{\pi(b)}$. Hence we can apply the inverse to vectors and vector fields on $M$ to obtain horizontal lifts of those vectors and vector fields. We usually lift single vectors to single horizontal vectors, but for a smooth vector field $X$ on $M$ we take all the horizontal lifts of all the values of $X$, thus obtaining a smooth vector field $\bar{X}$ on $BM$. These vector field lifts are compatible with $\pi_*$ and all $R_{g*}$, so that if $Y$ is another vector field on $M$, then $[\bar{X}, \bar{Y}]$ is right invariant and can be projected to $[X, Y]$. However, we have not assumed that $H$ is involutive, so that $[\bar{X}, \bar{Y}]$ is not generally the horizontal lift of $[X, Y]$.

The construction of the pull-back bundle $\gamma^* BM$ and its horizontal subbundle is the bundle of bases version of the induced connection on the curve $\gamma$. More generally, the induced connection on a map has a bundle of bases version defined in just the same way.

We can also get horizontal lifts of smooth curves in $M$. This is equivalent to getting the parallel translates of bases along the curve. If the curve is the integral curve of a vector field $X$, then a horizontal lift of the curve is just an integral curve of $\bar{X}$. However, curves can have points where the velocity is $0$, making it impossible to realize the curve even locally as the integral curve of a smooth vector field. Thus, we need to generalize the bundle construction a little to obtain horizontal lifts of arbitrary smooth curves $\gamma : [a, b] \to M$. We define the pull-back bundle $\gamma^* BM$ to be the collection of all bases of all tangent spaces at points $\gamma(t)$, and give it a manifold with boundary structure, diffeomorphic to the product $[a, b] \times Gl(n, \mathbb{R})$, just as we did for $BM$, along with a smooth map into $BM$. The horizontal subbundle can also be pulled back to a horizontal subbundle of rank $1$. Then we have a vector field $d/dt$ on $[a, b]$ whose horizontal lift has integral curves representing the desired horizontal lift of $\gamma$. This structure on $\gamma^* BM$ corresponds to the connection on $BM$ given above.

When we relate all of this to the other version of connections in terms of covariant derivative operators, we see that the differential equations problem for getting parallel translations has turned into the familiar problem of getting integral curves of a vector field on a different space.

### 3.18. The Fundamental Vector Fields

Corresponding to the left-invariant vector fields on $Gl(n, \mathbb{R})$ we have some canonically defined vertical vector fields on $BM$; each fiber is a copy of $Gl(n, \mathbb{R})$ and these canonical fields are carried over as copies of the left invariant fields. Generally on a Lie group the left-invariant vector fields are identified with the tangent space at the identity: in one direction we simply evaluate the vector field at the identity; in the other direction, if we are given a vector at the identity as the velocity of a curve $g(t)$, then we can get the value at any other point $h$ of the group as the velocity of the curve $h \cdot g(t)$ at time $0$. Note that whereas left multiplication multiplies the curve on the left, which is what makes the vector field left invariant, what we see nearby $h$ is the result of multiplying $h$ on the right by $g(t)$. It is this process of multiplying on the right by a curve through the identity that we can imitate in the case of $BM$, since we have the right action of $Gl(n, \mathbb{R})$ on $BM$. It is natural to view the tangent space of $Gl(n, \mathbb{R})$ at the identity matrix as being the set of all $n \times n$ matrices, which we denote by $gl(n, \mathbb{R})$. If we let $E_{ij}$ be the matrix with $1$ in the $ij$ position and $0$’s elsewhere, we get a standard basis of the Lie algebra. For a curve with velocity $E_{ij}$ we can take simply $g(t) = I + t E_{ij}$. The fundamental vector fields on $BM$ are the vector fields $E_{ij}$ defined by

$$E_{ij}(b) = \gamma'(0), \text{ where } \gamma(t) = b \cdot (I + t E_{ij}).$$

We sometimes also call the constant linear combinations of these basis fields fundamental.

### 3.19. The Connection Forms

If we are given a connection $H$ on $BM$, we define a matrix of 1-forms $\varphi = (\varphi_{ij})$ to be the forms which are dual to the fundamental vector fields $E_{ij}$ on the vertical subbundle of $T(BM)$ and are $0$ on the connection subbundle $H$. This means that if $v = E_A(b) + h$, where $A$ is an $n \times n$ matrix, $E_A = \sum_{i,j} A_{ij} E_{ij}$, and $h$ is horizontal, then $\varphi(v) = A$.

Clearly the connection forms completely determine $H$ as the subbundle they annihilate. Thus, in order to give a connection it is adequate to specify the connection form $\varphi$. In order to say what matrices of 1-forms on $BM$ determine a connection, besides the property that the restriction to the vertical gives forms dual to the fundamental fields $E_{ij}$, we have to spell out the condition that $H$ is right invariant in terms of $\varphi$. The name for this condition is equivariance, and things have been arranged so that it is expressed easily in terms of the differential action of $R_g$ and matrix operations:

$$R_g^* \varphi = g^{-1} \varphi g \quad \text{for all } g \in Gl(n, \mathbb{R}).$$

### 3.20. The Basic Vector Fields

The universal dual cobasis $\omega$ is nonzero on any nonvertical vector, so that if we restrict it to the horizontal subspace of a connection it gives an isomorphism: $\omega : H_b \to \mathbb{R}^n$. If we invert this map and vary $b$, we get the basic vector fields of the connection. In particular, using the standard basis $e_1, \ldots, e_n$ of $\mathbb{R}^n$, we get the basic vector fields $E_1, \ldots, E_n$; they are the horizontal vector fields such that $\omega(E_i) = e_i$.

### 3.21. The Parallelizability of BM

Since a connection always exists, we now know that $BM$ is parallelizable; specifically, $\{E_i, E_{jk}\}$ is a parallelization.

###### Theorem 3.6 (Existence of Connections (again)).

There exists a connection on $BM$.

Locally we have that $BM$ is defined to be a product manifold. We can define a connection locally by taking the horizontal subspace to be the summand of the tangent bundle given by the product structure, complementary to the tangent spaces of $Gl(n, \mathbb{R})$. In turn this will give us some local connection forms which satisfy the equivariance condition. Then we combine these local connection forms by using a partition of unity for the covering of $M$ by the projections of their domains. (This is no different than the previous proof of existence of a connection.)

### 3.22. Relation Between the Two Definitions of Connection

If we are given a connection $H$ on $BM$ and a local basis $u : U \to BM$, then we can pull back the connection form $\varphi$ on $BM$ by $u$ to get a matrix of 1-forms on $U$. This pullback will then be the connection form of a connection on $U$ in the sense of covariant derivatives. It requires some routine checking to see that these local connection forms all fit together, as $u$ varies, to make a global connection on $M$ in the sense of covariant derivatives.

Conversely, if we are given a connection on $M$ in the sense of covariant derivatives, then we can define $\varphi$ on the image of a local basis $u$ by identifying it with the local connection form under the diffeomorphism. Then the extension to the rest of $BM$ above the domain of $u$ is forced on us by the equivariance and the fact that $\varphi$ is already specified on the vertical spaces. The geometric meaning of the relation between the local connection form and the form on $BM$ is clear: on the image of $u$ the form $\varphi$ measures the failure of $u$ to be horizontal; by the differential equation for parallel translation the local form measures the failure of $u$ to be parallel. But “parallel on $M$” and “horizontal on $BM$” are synonymous. A form on $BM$ is horizontal if it gives $0$ whenever any vertical vector is taken as one of its arguments. This means that it can be expressed in terms of the $\omega^i$ with real-valued functions as coefficients. A form $\theta$ on $BM$ with values in $\mathbb{R}^n$ is equivariant if $R_g^* \theta = g^{-1} \theta$. A form $\theta$ on $BM$ with values in $gl(n, \mathbb{R})$ is equivariant if $R_g^* \theta = g^{-1} \theta g$. The significance of these definitions is that the horizontal equivariant forms on $BM$ correspond to tensorial forms on $M$: an $\mathbb{R}^n$-valued form corresponds to a tangent-vector valued form on $M$, a $gl(n, \mathbb{R})$-valued form corresponds to a form on $M$ whose values are linear transformations of the tangent space. The rules for making these correspondences are rather obvious: evaluate the form on lifts to a basis $b$ of the vectors on $M$ and use the result as coefficients with respect to that basis for the result we desire on $M$. The horizontal condition makes this independent of the choice of lifts; the equivariance makes it independent of the choice of $b$.

For example, the form $\omega$ corresponds to the 1-form on $M$ with tangent-vector values which assigns a vector to itself.

###### Theorem 3.7 (The Structural Equations).

If $\varphi$ is a connection form on $BM$, then there is a horizontal equivariant $\mathbb{R}^n$-valued 2-form $\Omega$ and a horizontal equivariant $gl(n, \mathbb{R})$-valued 2-form $\Phi$ such that

$$d\omega = -\varphi \wedge \omega + \Omega,$$

and

$$d\varphi = -\varphi \wedge \varphi + \Phi.$$

The structural equations have already been proved in the form of pullbacks of the terms of the equations by a local basis. This shows how the forms $\Omega$ and $\Phi$ are given on the image of a local basis. The fact that these local forms yield the torsion and curvature tensors, which live independently of the local basis, can be interpreted as establishing the equivariance properties of $\Omega$ and $\Phi$, since they are horizontal. It is also not difficult to prove the structural equations directly from the specified equivariance of $\varphi$. If we restrict the structural equations to vertical vectors, or one vertical and one horizontal vector, we get information that has nothing to do with connections. The first one tells how $Gl(n, \mathbb{R})$ operates on $\mathbb{R}^n$. The second one is more interesting: it is the equations of Maurer-Cartan for $Gl(n, \mathbb{R})$, which are essentially its Lie algebra structure in its dual packaging.
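To illustrate the last point, restricted to a single fiber, a copy of $Gl(n, \mathbb{R})$, the connection form becomes the Maurer-Cartan form $g^{-1}\,dg$, and since the horizontal form $\Phi$ vanishes on vertical vectors, the second structural equation reduces there to the Maurer-Cartan equation. A sketch of the check, using $d(g^{-1}) = -g^{-1}(dg)\,g^{-1}$:

```latex
d(g^{-1}dg) = d(g^{-1})\wedge dg
            = -\,g^{-1}dg\,g^{-1}\wedge dg
            = -\,(g^{-1}dg)\wedge(g^{-1}dg),
```

which is the matrix form of $d\varphi = -\varphi \wedge \varphi$ on the fiber.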

### 3.23. The Dual Formulation

The dual to taking exterior derivatives of 1-forms is essentially the operation of bracketing vector fields. If we bracket two fundamental fields or a fundamental and a basic field, we obtain nothing new, only a repeat of the Lie algebra structure and its action on Euclidean space. The brackets that actually convey information about the connection are the brackets of basic vector fields. By using the exterior derivative formula $d\theta(X, Y) = X\theta(Y) - Y\theta(X) - \theta([X, Y])$, we see that the first structural equation tells what the horizontal part of a basic bracket is and the second tells us what the vertical part is. Most of the terms are $0$:

$$\begin{aligned} d\omega(E_i, E_j) &= E_i\, \omega(E_j) - E_j\, \omega(E_i) - \omega([E_i, E_j]) \\ &= -\omega([E_i, E_j]) \\ &= \Omega(E_i, E_j). \end{aligned}$$

Similarly,

$$-\varphi([E_i, E_j]) = \Phi(E_i, E_j).$$

We can immediately get some important geometrical information about a connection. The condition for the subbundle $H$ to be integrable is that the brackets of its vector fields again be within the subbundle. The basic fields are a local basis for $H$, so the condition for $H$ to be integrable is just that curvature be $0$. This means that locally there are horizontal submanifolds, which are local bases with a very special property: whenever we parallel translate around a small loop the result is the identity transformation; or, parallel translation is locally independent of path. The fact that setting curvature to $0$ gives this local independence of path is not very obvious from the covariant derivative viewpoint of connections. If we go one step further and impose both curvature and torsion equal to $0$, then the result is also easy to interpret from basic manifold theory applied to the fields $E_i$ on $BM$. Indeed, when a set of independent vector fields has all brackets vanishing, there are coordinates so that these fields are coordinate vector fields. When these are the $E_i$ of a connection, the coordinates correspond to coordinates on the leaves of $H$, which get transferred down to coordinates on $M$ such that the coordinate vector fields are parallel along every curve. The geometry is exactly the same as the usual geometry of Euclidean space, at least locally.

### 3.24. Geodesics

A geodesic of a connection is a curve for which the velocity field is parallel. Hence, a geodesic is also called an autoparallel curve. Notice that the parametrization of the curve is significant, since a reparametrization of a curve can stretch or shrink the velocity by different factors at different points, which clearly destroys parallelism. (There is a trivial noncase: the constant curves are formally geodesics. But then a reparametrization does nothing.)

If we are given a point $p$ and a vector $v$ at $p$, we can take a basis $b = (p, x_1, \ldots, x_n)$ so that $x_1 = v$. Then the integral curve of $E_1$ starting at $b$ is a horizontal curve, so represents a parallel field of bases along its projection to $M$. Moreover, the velocity field of the projection is the projection of $E_1$ at the points of the integral curve; but the projection of $E_1$ always gives the first entry of the basis. We conclude that the projection has parallel velocity field. The steps of this argument can be reversed, so that the geodesics of $M$ are exactly the projections of integral curves of $E_1$. Any other basic field could be used instead of $E_1$.

Geodesics do not have to go on forever, since the field $E_1$ may not be complete. If $E_1$ is complete, so that all geodesics are extendible to all of $\mathbb{R}$ as geodesics, then we say that $M$ is geodesically complete.

###### Theorem 3.8 (Local Existence and Uniqueness of Geodesics).

For every $p \in M$ and $v \in M_p$ there is a geodesic $\gamma$ such that $\gamma'(0) = v$. Two such geodesics coincide in a neighborhood about $0$. There is a maximal such geodesic, defined on an open interval, so that every other is a restriction of this maximal one.

This theorem is an immediate consequence of the same sort of statements about vector fields.
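In the covariant derivative picture, the statement that the velocity field is parallel becomes the coordinate ODE $\ddot{x}^k + \Gamma^k_{ij}\dot{x}^i\dot{x}^j = 0$, and the existence and uniqueness theorem is exactly the ODE theorem for that system. A minimal numerical sketch, using the Poincaré upper half-plane as an example (the example, the Christoffel symbols quoted, and all names below are ours, not taken from the notes):

```python
import math

# The autoparallel condition reads, in local coordinates,
#   x''^k + Gamma^k_ij x'^i x'^j = 0.
# For the Poincare upper half-plane, metric (dx^2 + dy^2)/y^2, the nonzero
# Christoffel symbols are
#   Gamma^x_xy = Gamma^x_yx = -1/y,  Gamma^y_xx = 1/y,  Gamma^y_yy = -1/y.

def geodesic(p, v, t_end, dt=1e-4):
    """Explicit Euler integration of the geodesic ODE; returns the endpoint."""
    (x, y), (vx, vy) = p, v
    steps = int(round(t_end / dt))
    for _ in range(steps):
        ax = (2.0 / y) * vx * vy           # x'' = (2/y) x' y'
        ay = (vy * vy - vx * vx) / y       # y'' = (y'^2 - x'^2) / y
        x, y = x + dt * vx, y + dt * vy
        vx, vy = vx + dt * ax, vy + dt * ay
    return x, y

# A vertical ray is a geodesic: from (0, 1) with velocity (0, 1) the exact
# solution is x(t) = 0, y(t) = e^t.
x1, y1 = geodesic((0.0, 1.0), (0.0, 1.0), 1.0)
print(x1, y1)
```

The uniqueness statement is visible numerically: the endpoint depends only on the initial point and velocity, and refining `dt` converges to the exact solution.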

### 3.25. The Interpretation of Torsion and Curvature in Terms of Geodesics

Recall the geometric interpretation of brackets: if we move successively along the integral curves of $X, Y, -X, -Y$ by equal parameter amounts $\varepsilon$, we get an endpoint curve which returns to the origin up to first order, but gives the bracket $[X, Y]$ in its second order term. We apply this to the basic vector fields on $BM$. The meaning of the construction of the “small parallelogram” on $M$ is that we follow some geodesics below on $M$, carrying along a second vector by parallel translation to tell us what geodesic we should continue on when we have reached the prescribed parameter distance $\varepsilon$. If we were to do this in Euclidean space, the parallelogram would always close up, but here the amount it fails to close up is of order $\varepsilon^2$ and is measured by the horizontal part of the tangent to the endpoint curve in $BM$ above. But we have seen that the horizontal part of that bracket is given via $\omega([E_i, E_j]) = -\Omega(E_i, E_j)$ relative to the chosen basis. When we eliminate the dependence on the basis we conclude the following:

###### Theorem 3.9 (The Gap of a Geodesic Parallelogram).

A geodesic parallelogram generated by vectors $v, w$ with parameter side-lengths $\varepsilon$ has an endgap equal to $\varepsilon^2\, \tau(v, w)$, where $\tau$ is the torsion tensor, up to terms of order $\varepsilon^3$.

The other part of the gap of the bracket parallelogram on $BM$ is the vertical part. What that represents geometrically is the failure of parallel translation around the geodesic parallelogram below to bring us back to the identity. That failure is what curvature measures, up to terms cubic in $\varepsilon$. We can’t quite make sense of this as it stands, because the parallel translation in question is not quite around a loop; however, if we close off the gap left due to torsion in any non-roundabout way, then the discrepancies among the various ways of closing up, as parallel translation is affected, are of higher order in $\varepsilon$. That is the interpretation we place on the following theorem.

###### Theorem 3.10 (The Holonomy of a Geodesic Parallelogram).

Parallel translation around a geodesic parallelogram generated by vectors $v, w$ with parameter side-lengths $\varepsilon$ has second order approximation $I + \varepsilon^2 R(v, w)$.

It seems to me that a conventional choice of sign of the curvature operators to make the “+” in the above theorem turn into a “$-$” is in poor taste. The only other guides for which sign should be chosen seem to be merely historic. The word “holonomy” is used in connection theory to describe the failure of parallel translation to be trivial around loops. By chaining one loop after another we get the product of their holonomy transformations, so that it makes sense to talk about a holonomy group as a measure of how much the connection structure fails to be like Euclidean geometry.

###### Problem 3.11.

Decomposition of a Connection into Geodesics and Torsion. Prove that if two connections have the same geodesics and torsion they are the same. Furthermore, the geodesics and torsion can be specified independently.

In regard to the meaning of the last statement, we intend that the torsion can be any tangent-vector valued 2-form; the specification of what families of curves on a manifold can be the geodesics of some connection has been spelled out in an article by W. Ambrose, R.S. Palais, and I.M. Singer, Sprays, Anais da Academia Brasileira de Ciencias, vol. 32, 1960. But for the problem you are required only to show that the geodesics of the connection to be specified can be taken to be the same as those of some given connection.

###### Problem 3.12.

Holonomy of a Loop. Prove the following more general and precise version of the Theorem on Holonomy of Geodesic Parallelograms. Let $h : [0, 1] \times [0, \tau] \to M$ be a smooth homotopy of the constant loop at $p$ to a loop with fixed ends, so that $h(0, v) = p$ and $h(u, 0) = h(u, \tau) = p$. Fix a basis $b$ at $p$ and let $g(u)$ be the matrix which gives parallel translation of $b$ around the loop $h(u, \cdot)$. Let $X = h_*(\partial/\partial u)$, $Y = h_*(\partial/\partial v)$, and let $\bar{h}$ be the lift of $h$ given by lifting each loop horizontally with initial point $b$. In particular, $\bar{h}(u, \tau) = b\, g(u)$. Prove that

$$\int_0^1 g(u)^{-1} g'(u)\, du = \int_0^1 \!\! \int_0^\tau \bar{h}(u, v)^{-1} \circ R_{XY} \circ \bar{h}(u, v)\, dv\, du.$$

The meaning of the integrand on the right is as follows: We interpret a basis $b = (p, x_1, \ldots, x_n)$ to be the linear isomorphism $\mathbb{R}^n \to M_p$ given by $(a^i) \mapsto \sum_i a^i x_i$. Thus, for each $(u, v)$, we have a linear map

$$\bar{h}(u, v)^{-1} \circ R_{X(u,v)\, Y(u,v)} \circ \bar{h}(u, v) : \mathbb{R}^n \to M_{h(u,v)} \to M_{h(u,v)} \to \mathbb{R}^n.$$

As a matrix this can be integrated entry-by-entry. Hint: Pull back the second structural equation via $\bar{h}$ and apply Stokes’ theorem on the rectangle.

###### Problem 3.13.

The General Curvature Zero Case. Suppose that $M$ is connected and that we have a connection on $BM$ for which $\Phi = 0$, that is, $H$ is completely integrable. Let $L$ be a leaf of $H$, that is, a maximal connected integrable submanifold. Show that the restriction of $\pi$ to $L$ is a covering map. Moreover, if we choose a base point $b \in L$ with $\pi(b) = p$, then we can get a homomorphism of the fundamental group $\pi_1(M, p) \to Gl(n, \mathbb{R})$ as follows: for a loop based at $p$ we lift the loop into $L$, necessarily horizontally, getting a curve in $L$ from $b$ to some $bg$. Then $g$ depends only on the homotopy class of the loop.

A connection with curvature zero is called flat and the homomorphism of Problem 3.13 is called the holonomy map of that flat connection.

### 3.26. Development of Curves into the Tangent Space

Let $\gamma : [a, b] \to M$ with $\gamma(a) = p$, and let $H$ be a connection on $M$. We let $b(t)$ be a parallel basis field along $\gamma$ starting at $b(a) = b$ and express the velocity of $\gamma$ in terms of this parallel basis, getting a curve of velocity components $f(t) = b(t)^{-1} \gamma'(t)$ in $\mathbb{R}^n$. Then we let $\bar{\gamma}(t) = b \int_a^t f(s)\, ds$. Thus, the velocity field of $\bar{\gamma}$ in the space $M_p$ bears the same relation to Euclidean parallel translates of $b$ as does the velocity field of $\gamma$ to the parallel translates of $b$ given by the connection. We call $\bar{\gamma}$ the development of $\gamma$ into $M_p$.

###### Problem 3.14.

Show that the development is independent of the choice of initial basis $b$.

###### Problem 3.15.

Show that the lift $b(t)$ of $\gamma$ in the definition of the development is the integral curve of the time-dependent basic vector field $E_{f(t)}$ on $BM$ starting at $b$, where $E_a$ denotes the basic field with $\omega(E_a) = a$.

### 3.27. Reverse Developments and Completeness

Starting with a curve $\bar{\gamma}$ in $M_p$ such that $\bar{\gamma}(a) = 0$, we choose a basis $b$ at $p$ and let $f(t) = b^{-1} \bar{\gamma}'(t)$. By Problem 3.15, we can then get a curve $\gamma$ in $M$ whose development is $\bar{\gamma}$, at least locally. We call $\gamma$ the reverse development of $\bar{\gamma}$. Since the vector field $E_{f(t)}$ need not be complete, it is not generally true that every curve in $M_p$ can be reversely developed over its entire domain.

We say that $M$ is development-complete at $p$ if every smooth curve in $M_p$ starting at $0$ has a reverse development over its entire domain.

###### Problem 3.16.

Suppose that $M$ is connected. Show that the condition that $M$ be development-complete at $p$ is independent of the choice of $p$.

###### Problem 3.17.

Show that the development of a geodesic is a ray with linear parametrization, and hence, if $M$ is development-complete, then $M$ is geodesically complete.

###### Problem 3.18.

For the connection of a parallelization $(X_1, \ldots, X_n)$ show that the geodesics are integral curves of constant linear combinations $\sum_i a^i X_i$.

###### Problem 3.19.

For the connection of a parallelization $(X_1, \ldots, X_n)$ show that the reverse development of $\bar{\gamma}$ for which $b = (X_1(p), \ldots, X_n(p))$ is an integral curve of the time-dependent vector field $\sum_i f^i(t) X_i$, where $f(t) = b^{-1} \bar{\gamma}'(t)$.

###### Problem 3.20.

Show that if $f$ grows fast enough along a curve in $M$, then the development of the curve with respect to the parallelization $(f X_1, \ldots, f X_n)$ is bounded. Hence there is no reverse development having unbounded continuation.

###### Problem 3.21.

Show that the geodesics of the parallelization of Problem 3.20 are straight lines except for parametrization, and on regions where $f = 1$ even the parametrization is standard. Then by taking $\gamma$ to be an unbounded curve with a neighborhood $U$ such that any straight line meets $U$ at most in a bounded set, and a function $f$ which is $1$ outside $U$ and grows rapidly along $\gamma$, it is possible to get a connection (of the parallelization) which is geodesically complete but not development-complete.

###### Remark 3.22.

Parallel translation along a curve is independent of parametrization.

###### Remark 3.23.

The only reparametrizations of a nonconstant geodesic $\gamma$ which are again geodesics are the affine reparametrizations: $t \mapsto \gamma(at + c)$.

A curve which can be reparametrized to become a geodesic is called a pregeodesic.

###### Problem 3.24.

Show that a regular curve $\gamma$ is a pregeodesic if and only if $\nabla_{\gamma'} \gamma' = f \gamma'$ for some real-valued function $f$ of the parameter.
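A sketch of the computation behind Remark 3.23 and Problem 3.24 (not reproduced from the notes): if $\sigma = \gamma \circ h$, then $\sigma' = h' \cdot (\gamma' \circ h)$, and

```latex
\nabla_{\sigma'}\sigma'
  = \nabla_{\sigma'}\bigl(h'\,(\gamma'\circ h)\bigr)
  = h''\,(\gamma'\circ h) + (h')^2\,\bigl((\nabla_{\gamma'}\gamma')\circ h\bigr).
```

So if $\gamma$ is a geodesic, $\sigma$ is again a geodesic exactly when $h'' = 0$, that is, when $h$ is affine; and $\nabla_{\sigma'}\sigma'$ is a multiple of $\sigma'$ exactly when $\nabla_{\gamma'}\gamma'$ is a multiple of $\gamma'$.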

### 3.28. The Exponential Map of a Connection

For $x \in M_p$ let $\gamma_{p,x}$ be the geodesic starting at $p$ with initial velocity $x$. The exponential map at p is defined by

$$\exp_p : U \to M \quad \text{by} \quad \exp_p x = \gamma_{p,x}(1),$$

where $U$ is the subset of $M_p$ for which $\gamma_{p,x}(1)$ is defined.

###### Proposition 3.25.

The domain of $\exp_p$ is an open star-shaped subset of $M_p$. Exponential maps are smooth.

It is called the exponential map because it generalizes the matrix exponential map, and also the exponential map of a Lie group. These are obtained when the connection is taken to be the connection of the parallelization by a basis of the Lie algebra (the left-invariant vector fields or the right invariant vector fields; both give the same geodesics through the identity, namely, the one-parameter subgroups). More specifically, the multiplicative group of the complex numbers is a two-dimensional real Lie group for which the ordinary complex exponential map coincides with the one given by the invariant (under multiplication) connection. The geodesics are concentric circles, open rays, and loxodromes (exponential spirals).
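The last example can be played with directly; a minimal sketch using Python's `cmath` (the function name is ours, not from the notes):

```python
import cmath

# One-parameter subgroups of the multiplicative group C* are t -> exp(t*v);
# these are the geodesics through 1 of the invariant connection.
def geodesic_through_1(v, t):
    return cmath.exp(t * v)

# v purely imaginary: |exp(tv)| = 1, a circle through 1.
circle = [geodesic_through_1(1j, t / 10) for t in range(63)]
assert all(abs(abs(z) - 1.0) < 1e-12 for z in circle)

# v real and positive: exp(tv) is a ray along the positive reals.
ray = [geodesic_through_1(1.0, t / 10) for t in range(10)]
assert all(z.imag == 0.0 and z.real > 0 for z in ray)

# generic v: a loxodrome (exponential spiral); the modulus e^{0.1 t}
# grows while the argument turns.
spiral = [abs(geodesic_through_1(0.1 + 1j, t)) for t in range(5)]
assert all(a < b for a, b in zip(spiral, spiral[1:]))
```

The three branches correspond exactly to the circles, rays, and loxodromes named in the text.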

### 3.29. Normal Coordinates

Since $M_p$ is a vector space, the tangent space to $M_p$ at $0$ is canonically identified with $M_p$ itself. Using this identification, it is easily seen that the tangent map of $\exp_p$ at $0$ may be considered to be the identity map. In particular, by the inverse function theorem, there is a neighborhood $V$ of $p$ on which the inverse of $\exp_p$ is a diffeomorphism. Referring to a basis $b$ of $M_p$ gives us an isomorphism $b^{-1} : M_p \to \mathbb{R}^n$, and the composition gives a normal coordinate map at $p$:

$$b^{-1} \circ \exp_p^{-1} : V \to \mathbb{R}^n.$$

For normal coordinates it is clear that the coordinate rays starting at the origin of $\mathbb{R}^n$ correspond to the geodesic rays starting at $p$.

###### Problem 3.26.

Suppose that torsion is $0$ and that $x^1, \ldots, x^n$ are normal coordinates at $p$. For any $i, j$ show that $\nabla_{\partial_i} \partial_j = 0$ at $p$, and hence the operation of covariant differentiation of vector fields with respect to vectors at $p$ reduces to operating on the components of the vector fields by the vectors at $p$.

### 3.30. Parallel Translation and Covariant Derivatives of Other Tensors

We have so far only been concerned with parallel translation and covariant derivatives of vectors and vector fields. For tensors of other types we simply reduce to the vector field case: a tensor field is parallel along a curve $\gamma$ if the components of the tensor field are constant with respect to a choice of parallel basis field along $\gamma$. We calculate $\nabla_v A$, where $A$ is a tensor field and $v \in M_p$, by taking a curve $\gamma$ with $\gamma'(0) = v$, referring $A$ to a parallel basis along $\gamma$, and differentiating components at $0$. These definitions are independent of the choice of basis.

## 4. The Riemannian Connection

### 4.1. Metric Connections

We now return to the study of semi-Riemannian metrics, and in particular, Riemannian metrics. If $g$ is such a metric, then we say that a connection is a metric connection if parallel translation along any curve preserves inner products with respect to $g$. It is easy to see that there are several equivalent ways of expressing that same condition:

Equivalent to a connection being metric are:

1. The parallel translation of a frame is always a frame.

2. The tensor field $g$ is parallel along every curve.

3. $\nabla_v g = 0$ for all tangent vectors $v$.

4. $v\langle X, Y\rangle = \langle \nabla_v X, Y\rangle + \langle X, \nabla_v Y\rangle$ for all tangent vectors $v$ and all vector fields $X$ and $Y$.

5. Let $FM$ be the frame bundle of $M$, consisting of all frames at all points of $M$. It is easily shown that $FM$ is a submanifold of $BM$ of dimension $n + n(n-1)/2$. The condition equivalent to a connection being metric is that at points of $FM$ the horizontal subspaces are contained in $T(FM)$.

### 4.2. Orthogonal Groups

The frame bundle $FM$ is invariant under the action of the orthogonal group $O(n, \nu)$ with the corresponding index $\nu$. This is the group of linear transformations which leaves invariant the standard bilinear form on $\mathbb{R}^n$ of that index:

$$g_\nu(x, y) = \sum_{i=1}^{n-\nu} x_i y_i - \sum_{i=n-\nu+1}^{n} x_i y_i.$$

Thus, $g \in O(n, \nu)$ if and only if $g_\nu(gx, gy) = g_\nu(x, y)$ for all $x, y \in \mathbb{R}^n$. In case $\nu = 0$ this reduces to the familiar condition for orthogonality: $g^t g = I$. For other indices the matrix transpose should be replaced by the adjoint with respect to $g_\nu$, which we will denote by $g'$. The corresponding Lie algebra consists of the matrices which are skew-adjoint:

$$so(n, \nu) = \{A : A' = -A\}.$$

No matter what $\nu$ is, the dimension of $O(n, \nu)$ is $n(n-1)/2$, and since $FM$ is locally a product of open sets (the domains of local frames!) in $M$ times $O(n, \nu)$, $FM$ has dimension $n + n(n-1)/2$.
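The dimension claim can be checked by brute force: impose the skew-adjointness equations entry by entry and compute the dimension of the solution space by Gaussian elimination. A sketch in plain Python (all names are ours):

```python
def so_dim(n, nu):
    """Dimension of {A : A' = -A}, where A' = G A^T G and G = diag(eps)."""
    eps = [1.0] * (n - nu) + [-1.0] * nu
    # One linear equation per entry (i, j):  A[i][j] + eps_i eps_j A[j][i] = 0.
    rows = []
    for i in range(n):
        for j in range(n):
            row = [0.0] * (n * n)
            row[i * n + j] += 1.0
            row[j * n + i] += eps[i] * eps[j]
            rows.append(row)
    # Rank by Gaussian elimination with partial pivoting.
    m = n * n
    rank, col = 0, 0
    while rank < len(rows) and col < m:
        piv = max(range(rank, len(rows)), key=lambda r: abs(rows[r][col]))
        if abs(rows[piv][col]) < 1e-9:
            col += 1
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for r in range(len(rows)):
            if r != rank and abs(rows[r][col]) > 1e-9:
                f = rows[r][col] / rows[rank][col]
                for c in range(col, m):
                    rows[r][c] -= f * rows[rank][c]
        rank += 1
        col += 1
    return m - rank  # dimension of the solution space

print([so_dim(4, nu) for nu in range(5)])  # 6 for every index nu
```

The count comes out to $n(n-1)/2$ for every index, as stated: the diagonal entries are forced to vanish and each off-diagonal entry determines its partner.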

### 4.3. Existence of Metric Connections

In general, the definition and existence of a connection on a principal bundle (this means that the fiber is a Lie group acted on the right by a model fiber) can be carried out by imitating the case of $BM$. However, for $FM$ we can obtain a connection by restricting a connection on $BM$ and then retaining only the skew-adjoint part. Thus, if $\varphi$ is a connection form on $BM$, then for $x$ tangent to $FM$ we let

$$\varphi_a(x) = \tfrac{1}{2}\left(\varphi(x) - \varphi(x)'\right).$$
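In the Riemannian case ($\nu = 0$) the adjoint is the transpose, and the skew-symmetric part commutes with conjugation by orthogonal matrices: the skew part of $g^{-1}Ag$ equals $g^{-1}$ times the skew part of $A$ times $g$. A quick numerical sketch (pure Python, names ours):

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(r) for r in zip(*A)]

def skew_part(A):
    return [[0.5 * (A[i][j] - A[j][i]) for j in range(len(A))]
            for i in range(len(A))]

t = 0.7
g = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]   # a rotation, so orthogonal: g^{-1} = g^T
g_inv = transpose(g)
A = [[1.0, 2.0], [3.0, 4.0]]

lhs = skew_part(mat_mul(mat_mul(g_inv, A), g))        # skew(g^{-1} A g)
rhs = mat_mul(mat_mul(g_inv, skew_part(A)), g)        # g^{-1} skew(A) g
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```

This is the invariance that makes the discarded symmetric part well defined, as the next paragraph explains.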

For the Riemannian case this means that we decompose the matrix $\varphi(x)$ into its symmetric and skew-symmetric parts and discard the symmetric part. Since the decomposition into these parts is invariant under the action of the orthogonal group by conjugation (similarity transform), it follows that the form $\varphi_a$ on $FM$ satisfies the equivariance condition:

 Rg∗