Geometric Shifts and Positroid Varieties

Nicolas Ford

Doctor of Philosophy (Mathematics), 2014

Committee:
Associate Professor David E. Speyer, Chair
Associate Professor Henriette Elvang
Professor Sergey Fomin
Professor Thomas Lam
Professor Karen E. Smith



Chapter 1 Introduction

Consider a point $x$ on the Grassmannian $G(k,n)$ of $k$-planes in $\mathbb{C}^n$. The matroid of $x$ is defined to be the set of Plücker coordinates that are nonzero at $x$, and a matroid variety is the closure of the set of points of $G(k,n)$ with a particular matroid. Many enumerative problems on the Grassmannian can be described in terms of matroid varieties; the Schubert varieties that form the usual basis for the cohomology ring of $G(k,n)$ are an especially well-behaved special case.

So if there were a good way to take a matroid and produce the cohomology class of the corresponding matroid variety, we would have a systematic way of solving any enumerative problem on the Grassmannian that can be described purely in terms of the vanishing or nonvanishing of Plücker coordinates. Unfortunately, in a sense that will be made more precise later, matroid varieties are very poorly behaved, and there is essentially no way to solve the problem just described in a reasonable and efficient manner: there is a strictly easier problem that is known to be NP-hard.

One might expect to have more success with a slightly more modest goal: find a well-behaved class of matroids for which the class of the corresponding matroid variety can be described nicely. There is a class of matroids called positroids which are a natural candidate: the corresponding matroid varieties, called positroid varieties, are geometrically much better behaved, and indeed some work toward the goal of describing their cohomology classes has already been done. In [10] there is a description of the cohomology class of a positroid variety in terms of a symmetric function called an “affine Stanley symmetric function.” When the cohomology ring of the Grassmannian is expressed as a quotient of the ring of symmetric functions, the affine Stanley function maps to the class of the corresponding positroid variety.

It would be nice, though, if there were a way to do the computation directly in the cohomology ring: the symmetric function description involves a mysterious change of basis that introduces several minus signs that all disappear after taking the quotient. What would be preferable would be a more combinatorial description that computes, from the positroid, the coefficient of each Schubert class in its cohomology class. Indeed, for an even more restricted class of matroid varieties called interval rank varieties, there is a description that does exactly that. Given an interval rank variety, there is a sequence of degenerations in the Grassmannian that take it to a union of Schubert varieties, and there is a combinatorial procedure for keeping track of what happens along the way.

This thesis is in two parts. The first is an account of an attempt to extend the procedure that worked for interval rank varieties to positroids. We will see that, with a lot of arm-twisting, it can be made to work in one case (Theorem 5.2), but that in general it fails (Counterexample 5.1). The second part has more success at an even more modest goal: it describes a new way to estimate just the codimension of a matroid variety purely in terms of the combinatorics of the matroid itself. The resulting number is not always the actual codimension, but we prove that in the case of positroids it is. This work, which appears in Chapter 6, also appears in a paper [6], which has been submitted to the Journal of Algebraic Combinatorics.

We write $[n]$ for the set $\{1, 2, \ldots, n\}$, and for any set $S$ we write $2^S$ for the power set of $S$ and $\binom{S}{k}$ for the set of all $k$-element subsets of $S$. $G(k,n)$ will always stand for the Grassmannian of $k$-planes in $\mathbb{C}^n$. For $S \in \binom{[n]}{k}$, we write $p_S$ for the corresponding Plücker coordinate on $G(k,n)$; that is, thinking of elements of $G(k,n)$ as being represented by $k \times n$ matrices, $p_S$ is the determinant of the $k \times k$ minor whose columns correspond to the elements of $S$. All varieties are over $\mathbb{C}$.

Fix a complete flag $0 = V_0 \subset V_1 \subset \cdots \subset V_n = \mathbb{C}^n$. One may define a Schubert variety $X_\lambda$ from a partition $\lambda$ consisting of parts $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_k \ge 0$. We define $X_\lambda$ to be the set of all $W \in G(k,n)$ such that, for each $i$,

$$\dim(W \cap V_{n-k+i-\lambda_i}) \ge i.$$

The corresponding cohomology classes are called Schubert classes, written $\sigma_\lambda$. The Schubert classes corresponding to partitions that fit in a $k \times (n-k)$ box — that is, partitions with at most $k$ parts, each of which is at most $n-k$ — form a basis for the cohomology ring of the Grassmannian. When a basis is chosen for $\mathbb{C}^n$, we often take the flag to be the one corresponding to that basis. In this case, taking the Schubert variety corresponding to the flag given by taking the basis in reverse order produces what we’ll call an opposite Schubert variety.

We write $\Lambda_k$ for the ring of symmetric polynomials in $k$ variables. The well-known Schur polynomials $s_\lambda$, corresponding to partitions $\lambda$ with at most $k$ parts, give a basis for $\Lambda_k$. There is a surjective map $\Lambda_k \to H^*(G(k,n))$ which takes each Schur polynomial $s_\lambda$ to $\sigma_\lambda$; its kernel is spanned by the $s_\lambda$ corresponding to partitions with a part larger than $n-k$. Proofs of every claim in this and the preceding paragraph can be found in [11, Ch. 3].

We will at some points have use for the $GL_k$-equivariant cohomology ring $H^*_{GL_k}(M(k,n))$, where $M(k,n)$ is the variety consisting of all $k \times n$ matrices. This ring is isomorphic to $\Lambda_k$ itself. Restricting to the subvariety $M^\circ(k,n)$ of full-rank matrices causes the action of $GL_k$ to be free, making the corresponding equivariant cohomology ring isomorphic to the cohomology ring of $G(k,n)$. The restriction map

$$H^*_{GL_k}(M(k,n)) \longrightarrow H^*_{GL_k}(M^\circ(k,n)) \cong H^*(G(k,n))$$

is the same as the map described in the previous paragraph.

To see this, we first note that there is a $GL_k$-equivariant contraction of $M(k,n)$ to a point, which allows us to just consider $H^*_{GL_k}(\mathrm{pt})$. This, by Section 1 of [1], is the copy of $\Lambda_k$ sitting inside $\mathbb{Z}[x_1, \ldots, x_k]$, so it’s enough to compute the $GL_k$-equivariant cohomology classes of the matrix Schubert varieties. This is done in Section 3 of [1].

Chapter 2 Matroids and Matroid Varieties

2.1 Matroids

We are going to be investigating subvarieties of Grassmannians defined by combinatorial objects called matroids. There are many equivalent definitions of matroids, all useful in different contexts, and we are only going to mention two of them here. There is a lot of literature on the combinatorial theory of matroids. A good place to start might be [16].

From our perspective, the purpose of a matroid is, given a collection of vectors in a vector space, to combinatorially capture the information about the linear relations among the vectors. We will consider two equivalent ways to do this. Details of these and other axiomatizations of matroids can be found in [16, pp. 298–312].

Definition 2.1.

A matroid may be specified in terms of its bases. According to this definition, a matroid is a finite set $E$ together with a collection $\mathcal{B}$ of subsets of $E$. (An element of $\mathcal{B}$ is called a basis.) We require:

  • $\mathcal{B}$ is not empty.

  • No element of $\mathcal{B}$ contains another.

  • For $B_1, B_2 \in \mathcal{B}$ and $b_1 \in B_1 \setminus B_2$, there is some $b_2 \in B_2 \setminus B_1$ so that $(B_1 \setminus \{b_1\}) \cup \{b_2\} \in \mathcal{B}$.

Note that this is enough to force all bases to have the same number of elements.
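Since everything in Definition 2.1 is finite, the axioms can be checked by brute force. A minimal sketch (ours, not the thesis’s):

```python
from itertools import combinations

def is_matroid_basis_system(E, bases):
    """Brute-force check of the basis axioms in Definition 2.1."""
    bases = [frozenset(B) for B in bases]
    if not bases:                                   # first axiom: nonempty
        return False
    for B1 in bases:                                # second axiom: no containments
        for B2 in bases:
            if B1 < B2:
                return False
    for B1 in bases:                                # third axiom: basis exchange
        for B2 in bases:
            for b1 in B1 - B2:
                if not any((B1 - {b1}) | {b2} in bases for b2 in B2 - B1):
                    return False
    return True

E = {1, 2, 3, 4}
# The uniform matroid U_{2,4}: every 2-element subset is a basis.
print(is_matroid_basis_system(E, combinations(E, 2)))   # True
# {1,2} and {3,4} alone fail the exchange axiom:
print(is_matroid_basis_system(E, [{1, 2}, {3, 4}]))     # False
```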

Definition 2.2.

Suppose we have a finite-dimensional vector space $V$, a finite set $E$, and a function $f : E \to V$ whose image spans $V$. We can put a matroid structure on $E$ by taking $\mathcal{B}$ to be the collection of all subsets of $E$ which map injectively to a basis of $V$. (The reason for this funny definition is that we’d like to be able to take the same element of $V$ more than once; otherwise $E$ could just be a subset of $V$. We will hardly ever be very careful about the difference between an element $e \in E$ and its image $f(e) \in V$.) It’s an easy exercise in linear algebra to show that this definition satisfies the axioms above. Matroids which arise in this way are called realizable.
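This definition can be carried out mechanically: the bases are the subsets whose vectors have full rank. A sketch in exact rational arithmetic, with vectors and labels chosen by us for illustration (two lines in the projective plane meeting at the point labeled 3):

```python
from fractions import Fraction
from itertools import combinations

def rank(vectors):
    """Rank of a list of vectors, by Gaussian elimination over the rationals."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    rk = col = 0
    while rk < len(rows) and col < len(rows[0]):
        pivot = next((r for r in range(rk, len(rows)) if rows[r][col] != 0), None)
        if pivot is None:
            col += 1
            continue
        rows[rk], rows[pivot] = rows[pivot], rows[rk]
        for r in range(rk + 1, len(rows)):
            factor = rows[r][col] / rows[rk][col]
            rows[r] = [a - factor * b for a, b in zip(rows[r], rows[rk])]
        rk += 1
        col += 1
    return rk

def bases_of_realizable_matroid(f):
    """Bases of the matroid of a map f: E -> V, as in Definition 2.2."""
    E, d = sorted(f), rank(f.values())
    return {S for S in combinations(E, d) if rank([f[e] for e in S]) == d}

f = {1: (1, 0, 0), 2: (0, 1, 0), 3: (1, 1, 0), 4: (0, 0, 1), 5: (1, 1, 1)}
all_triples = set(combinations(range(1, 6), 3))
print(sorted(all_triples - bases_of_realizable_matroid(f)))   # [(1, 2, 3), (3, 4, 5)]
```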

It will sometimes be convenient to be able to draw a picture of a realizable matroid by drawing the images of the elements of $E$ in the projective space $\mathbb{P}(V)$ and the linear subspaces they lie on. We will call these projective models. For example, Figure 2.1 shows a projective model of the rank-3 matroid on $[5]$ in which $\{1,2,3\}$ and $\{3,4,5\}$ are the only three-element sets that are not bases.

Figure 2.1: A projective model of the “V” matroid.

The following terminology will be useful to have as we talk about matroids, so we present it here. Most (but not all) of the terminology mirrors the corresponding terminology from linear algebra in the realizable case, which should make those terms easier to remember. Throughout this discussion, unless otherwise specified, $M$ is a matroid on the set $E$.

Definitions 2.3.
  1. A set which is contained in a basis is called independent. Any other set is dependent.

  2. For $S \subseteq E$, the rank of $S$, written $\operatorname{rk}(S)$, is the size of the largest independent set contained in $S$. Note that $\operatorname{rk}(E)$ is the same as the size of any basis. We will sometimes write $\operatorname{rk}(M)$ for $\operatorname{rk}(E)$.

  3. For a set $S$ and an element $x$, we say that $x$ is in the closure of $S$, written $x \in \overline{S}$, if $\operatorname{rk}(S \cup \{x\}) = \operatorname{rk}(S)$. Note that, as the name suggests, closure is idempotent and inclusion-preserving. Sets which are their own closures are called flats.

    In the realizable case, the flats are the preimages under $f$ of the linear subspaces of $V$.

  4. A set which contains a basis is called a spanning set. Equivalently, $S$ spans if $\operatorname{rk}(S) = \operatorname{rk}(E)$, or if $\overline{S} = E$.

  5. If $\operatorname{rk}(\{x\}) = 0$, we say $x$ is a loop. Equivalently, $x$ is not in any basis, or $x \in \overline{\varnothing}$, or $x$ is in every flat.

    In the realizable case, loops are elements of $E$ which map to the zero vector in $V$.

  6. If $\operatorname{rk}(E \setminus \{x\}) = \operatorname{rk}(E) - 1$, we say $x$ is a coloop. Equivalently, $x$ is in every basis.

  7. If $x$ and $y$ are not loops and $\operatorname{rk}(\{x, y\}) = 1$, we say that $x$ and $y$ are parallel. Equivalently, any flat which contains one of $x$ or $y$ also contains the other.

  8. Given a set $S \subseteq E$, we can put a matroid structure on $S$ by saying $B \subseteq S$ is a basis of $S$ if it is maximal among independent sets contained in $S$. Note that this preserves the independence and dependence of subsets of $S$, and therefore also the ranks of subsets of $S$. This operation is called restricting to $S$ or deleting $E \setminus S$, and we call the resulting matroid $M|_S$ or $M \setminus (E \setminus S)$.

    In the realizable case, this corresponds to restricting the function $f$ to $S$ and replacing $V$ with the span of the image of $S$.

  9. There is a second way to put a matroid structure on $S$, called the contraction, and written $M / (E \setminus S)$. The rank function on the contraction is $T \mapsto \operatorname{rk}(T \cup (E \setminus S)) - \operatorname{rk}(E \setminus S)$.

It will also be convenient to note that matroids can be defined just by listing the axioms that have to be satisfied by the rank function defined above:

Definition 2.4.

A matroid may be specified in terms of the ranks of all its subsets. According to this definition, a matroid is a finite set $E$ together with a function $\operatorname{rk} : 2^E \to \mathbb{Z}_{\ge 0}$ satisfying:

  • $\operatorname{rk}(\varnothing) = 0$.

  • $\operatorname{rk}(S \cup \{x\}) - \operatorname{rk}(S)$ is either $0$ or $1$.

  • If $\operatorname{rk}(S \cup \{x\}) = \operatorname{rk}(S \cup \{y\}) = \operatorname{rk}(S)$, then $\operatorname{rk}(S \cup \{x, y\}) = \operatorname{rk}(S)$.

Note that this is enough to force the useful inequality $\operatorname{rk}(S \cup T) + \operatorname{rk}(S \cap T) \le \operatorname{rk}(S) + \operatorname{rk}(T)$.
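In the realizable case this inequality is the familiar bound $\dim(A + B) + \dim(A \cap B) \le \dim A + \dim B$ on spans of vectors. A brute-force check on a small realizable example of our own choosing:

```python
from fractions import Fraction
from itertools import chain, combinations

def rank(vectors):
    """Rank of a list of vectors, by Gaussian elimination over the rationals."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    rk = col = 0
    while rk < len(rows) and col < len(rows[0]):
        pivot = next((r for r in range(rk, len(rows)) if rows[r][col] != 0), None)
        if pivot is None:
            col += 1
            continue
        rows[rk], rows[pivot] = rows[pivot], rows[rk]
        for r in range(rk + 1, len(rows)):
            factor = rows[r][col] / rows[rk][col]
            rows[r] = [a - factor * b for a, b in zip(rows[r], rows[rk])]
        rk += 1
        col += 1
    return rk

# Five labeled vectors: 1 and 2 parallel, 1, 3, 4 coplanar, 5 off the plane.
f = {1: (1, 0, 0), 2: (2, 0, 0), 3: (0, 1, 0), 4: (1, 1, 0), 5: (0, 0, 1)}
def rk(S):
    return rank([f[e] for e in S])

def subsets(E):
    return chain.from_iterable(combinations(E, n) for n in range(len(E) + 1))

# Submodularity holds for every pair of subsets:
print(all(rk(set(S) | set(T)) + rk(set(S) & set(T)) <= rk(S) + rk(T)
          for S in subsets(f) for T in subsets(f)))   # True
```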

Given the same data we had to define a realizable matroid before — a set $E$ with a function $f$ to a vector space $V$ — we can get a rank function on $E$ by setting $\operatorname{rk}(S) = \dim \operatorname{span}(f(S))$.

We have already mentioned how to turn a collection of bases into a rank function. To go the other way, we can say $B$ is a basis if it is minimal among sets of maximal rank. One can check that these two correspondences make the two definitions given here equivalent. We will not distinguish between them as we go forward.

One more definition will be useful in later parts of this paper:

Definition 2.5.

A pseudo-rank function on a set $E$ is a function $2^E \to \mathbb{Z}_{\ge 0}$ satisfying the first two criteria in Definition 2.4.

Take some collection $\mathcal{S}$ of subsets of $E$ and some function $r : \mathcal{S} \to \mathbb{Z}_{\ge 0}$. The pseudo-rank function generated by $r$ is the pointwise-largest pseudo-rank function on $E$ which agrees with $r$ on the sets in $\mathcal{S}$, if such a pseudo-rank function exists. (Note that, since the pointwise max of two pseudo-rank functions is still a pseudo-rank function, this is well-defined if it exists.) If this pseudo-rank function happens to be the rank function of a matroid $M$, we will sometimes say that $M$ is “generated by the rank conditions given by $r$” or that $M$ is the matroid “defined by imposing the rank conditions given by $r$.”

For example, the matroid on $\{1,2,3,4\}$ defined by imposing the conditions that $\{1,2\}$ has rank 1 and $\{1,2,3,4\}$ has rank 3 will give all one-element sets rank 1, all two-element sets other than $\{1,2\}$ rank 2, and both three-element sets other than $\{1,2,3\}$ and $\{1,2,4\}$ rank 3. Its bases are $\{1,3,4\}$ and $\{2,3,4\}$.
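Computations like this one can be automated. One can check that the pointwise-largest candidate has the closed form $\min(|S|, \min_A (r_A + |S \setminus A|))$ over the conditions $(A, r_A)$; the sketch below (ours) uses it to recover the ranks and bases just listed:

```python
from itertools import combinations

def generated_pseudo_rank(E, conditions):
    """Pointwise-largest function with rk(empty) = 0 and increments 0 or 1
    agreeing with each condition (A, r_A).  If the closed form below fails
    to agree with some condition, no generating pseudo-rank function exists."""
    def rk(S):
        S = set(S)
        return min([len(S)] + [r + len(S - set(A)) for A, r in conditions])
    assert all(rk(A) == r for A, r in conditions), "no generating pseudo-rank function"
    return rk

E = [1, 2, 3, 4]
rk = generated_pseudo_rank(E, [({1, 2}, 1), ({1, 2, 3, 4}, 3)])
print(rk({1, 2}), rk({1, 2, 3}))   # 1 2
d = rk(E)
# For a matroid, the bases are the d-element sets of full rank:
bases = [B for B in combinations(E, d) if rk(B) == d]
print(bases)   # [(1, 3, 4), (2, 3, 4)]
```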

2.2 Matroid Varieties

As mentioned above, the main objects of study in this paper are certain subvarieties of Grassmannians which can be described in terms of matroids.

Construction 2.6.

Consider the Grassmannian $G(k,n)$, which we’ll think of as the set of $k \times n$ matrices of full rank modulo the obvious action of $GL_k$. When one builds the Grassmannian in this way, one ordinarily considers the rows of the matrix as elements of $\mathbb{C}^n$, and the action of $GL_k$ corresponds to automorphisms of the span of those elements, so that we are left with a variety that parametrizes the $k$-planes in $\mathbb{C}^n$.

We will think about our matrices the other way. Given a $k \times n$ matrix of full rank, consider the function $[n] \to \mathbb{C}^k$ which takes $i$ to the $i$th column of our matrix. We can then use Definition 2.2 to put a matroid structure on $[n]$. Since the action of $GL_k$ clearly doesn’t change which matroid we get, we have assigned a matroid in a consistent way to every point of the Grassmannian. The Plücker coordinate $p_S$ corresponding to some $S \in \binom{[n]}{k}$ is given by the determinant of the submatrix defined by taking the columns in $S$. So $p_S$ vanishes precisely when these columns fail to span $\mathbb{C}^k$, that is, precisely when $S$ fails to be a basis of our matroid.

Given a matroid $M$ of rank $k$ on $[n]$, the open matroid variety $X_M^\circ$ is the subset of $G(k,n)$ consisting of all points whose matroid is $M$. This is a locally closed subvariety of $G(k,n)$: it is defined by taking all Plücker coordinates corresponding to bases of $M$ to be nonzero and all the other Plücker coordinates to be zero. The closure of $X_M^\circ$ is called the matroid variety $X_M$. We can define a matroid variety inside $M(k,n)$ in the same way. We write $\tilde{X}_M^\circ$ for the open matroid variety in $M(k,n)$ and $\tilde{X}_M$ for its closure. The open matroid variety in $M(k,n)$ doesn’t intersect the subvariety of matrices of less than full rank, but its closure will.
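The matroid of a point can be computed exactly as Construction 2.6 describes, by testing which maximal minors vanish. A sketch with a matrix of our own choosing:

```python
from fractions import Fraction
from itertools import combinations

def det(m):
    """Determinant by cofactor expansion along the first row (fine for small k)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def matroid_of_point(matrix):
    """Bases of the matroid of a full-rank k x n matrix: the k-element column
    sets S whose Plucker coordinate p_S (the corresponding minor) is nonzero."""
    k, n = len(matrix), len(matrix[0])
    rows = [[Fraction(x) for x in row] for row in matrix]
    return {S for S in combinations(range(1, n + 1), k)
            if det([[row[j - 1] for j in S] for row in rows]) != 0}

# Columns 1 and 2 are parallel, so no basis contains both:
X = [[1, 2, 0, 1],
     [0, 0, 1, 1]]
print(sorted(matroid_of_point(X)))   # [(1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```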

The reader who is familiar with the definition of Schubert varieties may be tempted to ignore part of the definition above and take $X_M$ to be the subvariety of $G(k,n)$ defined by setting all the Plücker coordinates corresponding to nonbases of $M$ to zero. Sadly, this is not the same as the definition given above.

Counterexample 2.7.

Consider the rank-3 matroid $M$ on $[7]$ generated by the conditions that $\{1,2,7\}$, $\{3,4,7\}$, and $\{5,6,7\}$ have rank 2. The variety $X_M$ is not cut out by the ideal generated by the Plücker coordinates corresponding to the nonbases of $M$. That ideal cuts out two components: $X_M$ and the variety of the matroid in which 7 is a loop. The ideal of $X_M$ is actually strictly larger.

Matroid varieties can be used to encode any number of enumerative geometry problems. For example, Schubert varieties are matroid varieties, as are intersections of Schubert varieties and opposite Schubert varieties. Of course, there are many more matroid varieties than just these, so coming up with a way to find the cohomology class of a matroid variety would enable one to solve a much larger set of combinatorial problems about linear arrangements of points. Since the multiplication rule for Schubert classes is well-known, it would be enough to come up with an algorithm that takes in a matroid and outputs its class as a linear combination of Schubert classes.

In general, matroid varieties are under no obligation to be geometrically well-behaved. They don’t have to be irreducible, equidimensional, normal, or even generically reduced. It is too much to hope that we might find an algorithm that can efficiently produce the class of an arbitrary matroid variety: even the problem of determining whether a given matroid variety is empty or not is NP-hard [13]. To make progress on this question, it will be necessary to be more modest in our goals and only look for the classes of certain well-behaved matroid varieties. It is to that task that the rest of this paper is dedicated.

2.3 Operations on Matroids and Matroid Varieties

We first establish some results which describe the effects of some simple operations on the cohomology class of a matroid. We will use these later in the paper to help us describe the classes of a few matroid varieties.

Definitions 2.8.
  1. Let $M_1$ be a matroid on $E_1$ and $M_2$ be a matroid on $E_2$. The direct sum of $M_1$ and $M_2$ is the matroid $M_1 \oplus M_2$ on $E_1 \sqcup E_2$ defined by

$$\operatorname{rk}_{M_1 \oplus M_2}(S) = \operatorname{rk}_{M_1}(S \cap E_1) + \operatorname{rk}_{M_2}(S \cap E_2).$$

  2. If $M$ is a matroid, the loop extension of $M$ is the matroid formed by taking the direct sum of $M$ with the unique matroid of rank 0 on the one-element set $\{x\}$, so that the new element $x$ is a loop.

  3. The coloop extension of $M$ is the matroid formed by taking the direct sum of $M$ with the unique matroid of rank 1 on $\{x\}$, so that $x$ is a coloop.

It’s straightforward to compute the cohomology class of $X_{M_1 \oplus M_2}$ given the classes of $X_{M_1}$ and $X_{M_2}$.

Proposition 2.9.

Given a matroid $M_1$ of rank $d_1$ on $[m]$ and a matroid $M_2$ of rank $d_2$ on $[n]$, we can think of them both as matroids on $[m+n]$ in the natural way: $M_1$ puts conditions on the points in $\{1, \ldots, m\}$ and $M_2$ puts conditions on the points in $\{m+1, \ldots, m+n\}$. Interpreted in this way, $[X_{M_1 \oplus M_2}] = [X_{M_1}] \cdot [X_{M_2}]$ in $H^*(G(d_1 + d_2, m+n))$.


This result is easier to see in the space of matrices, which is enough to prove the statement for the Grassmannian as well. In fact, $\tilde{X}_{M_1 \oplus M_2}$ is the transverse intersection of $\tilde{X}_{M_1}$ and $\tilde{X}_{M_2}$. The tangent space of $M(d_1 + d_2, m+n)$ at any point is naturally $M(d_1 + d_2, m+n)$ itself. At any point of $\tilde{X}_{M_1 \oplus M_2}$, the tangent space of $\tilde{X}_{M_1}$ contains every matrix supported on the columns in $\{m+1, \ldots, m+n\}$, and the tangent space of $\tilde{X}_{M_2}$ contains every matrix supported on the columns in $\{1, \ldots, m\}$, so together they span the entirety of $M(d_1 + d_2, m+n)$. ∎

Corollary 2.10.

Let be a matroid variety. Then the class of in is , and the class of in is .∎

The next definition is a bit more complicated than the ones that precede it. It gives a way to add a new element to a matroid that is similar to the coloop extension except that it doesn’t increase the total rank.

Definition 2.11.

Let $M$ be a matroid of rank $d$ on a set $E$. The free extension of $M$ by $x$ is the matroid on $E \cup \{x\}$ that we get by adding a new, unconstrained element in a way that does not increase the total rank. Its rank function is defined by

$$\operatorname{rk}'(S) = \begin{cases} \operatorname{rk}(S) & \text{if } x \notin S, \\ \min(\operatorname{rk}(S \setminus \{x\}) + 1,\, d) & \text{if } x \in S. \end{cases}$$

The class of a free extension can also be described in terms of the class of the original matroid. This result will be much easier to state and deal with if we work in $M(k,n)$ instead of $G(k,n)$.

Proposition 2.12.

If $M'$ is the free extension of a matroid $M$, the equivariant cohomology classes of $\tilde{X}_M$ in $M(k,n)$ and of $\tilde{X}_{M'}$ in $M(k,n+1)$ are represented by the same symmetric function.


The map that takes $H^*_{GL_k}(M(k,n))$ to $H^*_{GL_k}(M(k,n+1))$ is the pullback in $GL_k$-equivariant cohomology along the projection map $M(k,n+1) \to M(k,n)$ that kills the last column. But both $M(k,n)$ and $M(k,n+1)$ are contractible, so both of their equivariant cohomology rings are canonically isomorphic to $\Lambda_k$, and therefore, after identifying the three cohomology rings, this pullback is just the identity on $\Lambda_k$. ∎

Definition 2.13.

If $M$ can’t be written as a direct sum in a nontrivial way, we say that $M$ is connected. If we write $M = M_1 \oplus \cdots \oplus M_r$ with each $M_i$ connected, then the $M_i$’s are uniquely determined, and we call them the connected components of $M$.

There are two other, equivalent ways to define connectedness [16, pp. 108-110]:

  • $M$ is connected if there is no proper, nonempty subset $S \subseteq E$ for which $\operatorname{rk}(S) + \operatorname{rk}(E \setminus S) = \operatorname{rk}(E)$.

  • A circuit of $M$ is a minimal dependent set, that is, a dependent set for which every proper subset is independent. We can define an equivalence relation on $E$ by saying $x$ is equivalent to $y$ if either $x = y$ or there is a circuit of $M$ containing both $x$ and $y$. The connected components of $M$ are the equivalence classes under this relation.
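The second description translates directly into code. A brute-force sketch (ours), applied to a direct sum where the answer is clear:

```python
from itertools import combinations

def circuits(E, rk):
    """Minimal dependent sets of the matroid with rank function rk."""
    E = sorted(E)
    def dependent(S):
        return rk(S) < len(S)
    out = []
    for n in range(1, len(E) + 1):
        for S in combinations(E, n):
            if dependent(S) and all(not dependent(S[:i] + S[i + 1:]) for i in range(n)):
                out.append(set(S))
    return out

def connected_components(E, rk):
    """Repeatedly merge the blocks touched by a common circuit."""
    comps = [{e} for e in E]
    for C in circuits(E, rk):
        touched = [c for c in comps if c & C]
        comps = [c for c in comps if not (c & C)] + [set().union(*touched)]
    return comps

# U_{1,2} + U_{1,2}: elements 1,2 are parallel, and so are 3,4.
def rk(S):
    S = set(S)
    return (1 if S & {1, 2} else 0) + (1 if S & {3, 4} else 0)

print(sorted(sorted(c) for c in connected_components([1, 2, 3, 4], rk)))
# [[1, 2], [3, 4]]
```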

Definition 2.14.

Let $M$ be a matroid of rank $d$ on a set $E$, with $|E| = n$. The dual of $M$ is the matroid $M^*$ on $E$ whose bases are exactly the complements of bases of $M$.

The rank of a set $S$ in $M^*$ works out to be $|S| + \operatorname{rk}(E \setminus S) - \operatorname{rk}(E)$. In particular, $M^*$ has rank $n - d$.
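Both the definition and the rank formula are easy to verify by brute force. A sketch (ours), on the small matroid in which 3 and 4 are parallel:

```python
from itertools import combinations

def dual_bases(E, bases):
    """Bases of M*: the complements of the bases of M (Definition 2.14)."""
    E = set(E)
    return {frozenset(E - set(B)) for B in bases}

def rank_from_bases(bases):
    """rk(S) is the size of the largest intersection of S with a basis."""
    bases = [set(B) for B in bases]
    return lambda S: max(len(set(S) & B) for B in bases)

E = {1, 2, 3, 4}
bases = [{1, 2}, {1, 3}, {1, 4}, {2, 3}, {2, 4}]   # 3 and 4 parallel
rk = rank_from_bases(bases)
rk_star = rank_from_bases(dual_bases(E, bases))

# Check rk*(S) = |S| + rk(E \ S) - rk(E) on every subset:
subsets = [S for n in range(5) for S in combinations(sorted(E), n)]
print(all(rk_star(S) == len(S) + rk(E - set(S)) - rk(E) for S in subsets))   # True
```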

Proposition 2.15.

Consider the map $D : H^*(G(d,n)) \to H^*(G(n-d,n))$ defined by taking $\sigma_\lambda$ to $\sigma_{\lambda^T}$, where $\lambda^T$ is the transpose partition, and extending linearly. Then $D[X_M] = [X_{M^*}]$.


In fact, $D$ is the map on cohomology induced by the isomorphism $G(d,n) \to G(n-d,n)$ that takes each Plücker coordinate $p_S$ to $p_{[n] \setminus S}$. Some set $B$ is a basis for $M$ if and only if $[n] \setminus B$ is a basis for $M^*$, so we see that this isomorphism takes $X_M$ to $X_{M^*}$. ∎

It is important to note that restriction and contraction are dual to each other. That is, $(M|_S)^* = M^* / (E \setminus S)$, and $(M / (E \setminus S))^* = M^*|_S$.

Chapter 3 Interval Rank Varieties and the Geometric Shift

We mentioned earlier that intersections of Schubert varieties and opposite Schubert varieties are a special case of matroid varieties. These are called Richardson varieties. Because Schuberts and opposite Schuberts are transverse, a Richardson variety is a representative of the product of the cohomology classes of the Schuberts used to construct it. So finding an algorithm for expressing the cohomology class of a Richardson variety in terms of Schuberts is the same as finding a way to multiply Schubert classes.

Such an algorithm is called a Littlewood-Richardson rule, and, as mentioned in the introduction, there are already many different Littlewood-Richardson rules that can be described by lots of different types of combinatorial gadgets. The rule that we are going to look at in detail here is the one first described by Ravi Vakil in [15] and later, in different language and more generality, by Allen Knutson in [8]. Their approach has a distinct advantage over other Littlewood-Richardson rules in that it can be described purely geometrically. That is, Vakil and Knutson start with a Richardson variety, perform a specific sequence of degenerations, and end up with Schubert varieties at the end. By understanding the varieties that show up in the middle of this sequence and how they behave under each degeneration, one can read off the coefficient of a given Schubert class simply by counting how many times the corresponding Schubert variety appears at the end of this process.

What will interest us about this rule is the fact that it does more than just provide a way to multiply Schubert classes. The matroid varieties that appear in the middle of the sequence of degenerations are called “interval rank varieties,” and even though this was not the original aim, their procedure ends up providing a way to find the cohomology class of an arbitrary interval rank variety. Our goal in this paper, which we will only partially accomplish, will be to generalize their degeneration procedure to find the classes of a larger collection of matroid varieties.

We will start by describing the degeneration procedure used in their rule:

Definition 3.1.

Given a closed subset $X \subseteq G(k,n)$, the geometric shift from $i$ to $j$ of $X$ is the variety

$$\operatorname{sh}_{ij}(X) = \lim_{t \to \infty}\, (1 + t e_{ij}) \cdot X,$$

where $e_{ij}$ is the matrix whose only nonzero entry is a 1 in row $i$ and column $j$. That is, take the set of points $\{(t,\, (1 + t e_{ij}) \cdot x) : t \in \mathbb{C},\ x \in X\}$, take the closure inside $\mathbb{P}^1 \times G(k,n)$, and let $\operatorname{sh}_{ij}(X)$ be the fiber over $t = \infty$.

By the definition of rational equivalence, $[X]$ and $[\operatorname{sh}_{ij}(X)]$ are equal in the Chow ring of $G(k,n)$. In general, even if $X$ is irreducible, its shifts might have multiple components. The Littlewood-Richardson rule we’re examining works by taking a Richardson variety and performing a prescribed sequence of geometric shifts. Every time we have multiple irreducible components, we will perform the remaining shifts on each component separately, and at the end of this process everything will have become a Schubert variety.
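At the level of a single point, the family in Definition 3.1 can be followed explicitly through Plücker coordinates: multilinearity of the determinant in column $i$ gives $p_S(t) = p_S + t\, p_{(S \setminus \{i\}) \cup \{j\}}$ when $i \in S$ and $j \notin S$, and $p_S(t) = p_S$ otherwise, so the limit point is the vector of leading coefficients. The sketch below is ours, the choice of which column operation realizes the shift is our convention, and the shift of a variety can be strictly larger than the set of such pointwise limits.

```python
from fractions import Fraction
from itertools import combinations

def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** c * m[0][c] * det([row[:c] + row[c + 1:] for row in m[1:]])
               for c in range(len(m)))

def pluckers(X):
    """All Plucker coordinates of a k x n matrix, indexed by sorted tuples."""
    k, n = len(X), len(X[0])
    return {S: det([[Fraction(row[c - 1]) for c in S] for row in X])
            for S in combinations(range(1, n + 1), k)}

def shift_of_point(p, i, j):
    """Plucker coordinates of the limit of a single point under the family:
    the leading coefficients of p_S(t) = p_S + t * p_{(S - {i}) + {j}}."""
    lead = {S: (p[tuple(sorted((set(S) - {i}) | {j}))] if i in S and j not in S else 0)
            for S in p}
    return p if all(v == 0 for v in lead.values()) else lead

X = [[1, 0, 1],
     [0, 1, 1]]          # uniform matroid: p_12, p_13, p_23 all nonzero
p = pluckers(X)
q = shift_of_point(p, 1, 2)
print(sorted(S for S in p if p[S] != 0))   # [(1, 2), (1, 3), (2, 3)]
print(sorted(S for S in q if q[S] != 0))   # [(1, 3)]  (2 has become a loop)
```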

So it’s enough to understand what happens to our varieties when we do a geometric shift. The general question, even just for matroid varieties, will prove to be very difficult, and we’ll see in Counterexample 5.1 that, in general, a geometric shift of a matroid variety doesn’t have to be a matroid variety. However, we will be able to understand enough about the behavior of geometric shifts to make our Littlewood-Richardson rule work. We will start with a simple case:

Proposition 3.2.

Take $i, j \in [n]$ and $S \subseteq [n]$, and let $M$ be the matroid on $[n]$ generated (in the sense of Definition 2.5) by imposing the condition that $S$ has rank $r$. If $i \notin S$ or $j \in S$, then $\operatorname{sh}_{ij}(X_M) = X_M$. Otherwise, $\operatorname{sh}_{ij}(X_M) = X_{M'}$, where $M'$ is the matroid in which $(S \setminus \{i\}) \cup \{j\}$ has rank $r$ and there are no other conditions.


This is [8, 6.1]. ∎

When one performs a geometric shift on a Richardson variety, the result is almost never another Richardson variety. By picking our shifts carefully, though, we are able to make sure we stay inside a larger but still well-behaved class of matroid varieties:

Definition 3.3.

An interval is a subset of $[n]$ of the form $[i,j] = \{i, i+1, \ldots, j\}$.

Definition 3.4.

An interval rank matroid is a matroid which is generated by putting rank conditions on intervals. An interval rank variety is the matroid variety of an interval rank matroid.

Note that Schubert and Richardson varieties are themselves interval rank varieties. It’s not true that all shifts of interval rank varieties are still interval rank varieties. The strategy is to instead find a specific sequence of shifts that always works. That is, starting with an interval rank variety, we perform a specific geometric shift which gives us a reduced union of different interval rank varieties of the same dimension. Then we can take each of those components and repeat the process until we only have Schubert varieties left.

Lemma 3.5.

Suppose $M$ is an interval rank matroid of rank $d$, defined by rank conditions of the form “$[a_t, b_t]$ has rank $r_t$” for intervals $[a_t, b_t]$. Suppose further that, for some pair $i, j$, there is a unique $t$ such that $i \in [a_t, b_t]$ and $j \notin [a_t, b_t]$, and that $([a_t, b_t] \setminus \{i\}) \cup \{j\}$ is an interval. If $M_1$ is the matroid generated only by the condition on $([a_t, b_t] \setminus \{i\}) \cup \{j\}$ and $M_2$ is the matroid generated by all the rank conditions except the one on $[a_t, b_t]$, then

$$\operatorname{sh}_{ij}(X_M) = X_{M_1} \cap X_{M_2}$$

as schemes.


This is proved in Section 5.3 of [9], which is forthcoming. ∎

From here, two tasks remain. We need to guarantee the existence of a pair as in Lemma 3.5 for any interval rank variety, and we need to figure out what varieties we get as the components of the intersection that occurs on the last line. Both of these tasks will be accomplished by the same combinatorial object:

Definition 3.6.

An interval rank matrix of rank $d$ is an upper triangular matrix $(r_{ij})$ of nonnegative integers with the following properties:

  1. $r_{ij} \le \min(j - i + 1,\, d)$.

  2. $r_{ij} - r_{i+1,j}$ and $r_{ij} - r_{i,j-1}$ are both either $0$ or $1$. (Here we take $r_{ij}$ to be 0 whenever $i > j$.)

  3. If $r_{i+1,j} = r_{i,j-1} = r_{i+1,j-1}$, then $r_{ij} = r_{i+1,j-1}$.

There is a one-to-one correspondence between interval rank matroids of rank $d$ on $[n]$ and $n \times n$ interval rank matrices of rank $d$, given by $r_{ij} = \operatorname{rk}([i,j])$. In fact, given an interval rank matrix, imposing the condition on the Grassmannian that $\operatorname{rk}([i,j]) \le r_{ij}$ for all $i \le j$ (that is, just setting the corresponding Plückers equal to 0) is enough to define the corresponding interval rank variety, even as a scheme ([8, 1.8]).

Many of the conditions in the interval rank matrix are redundant. For example, if we know that $\operatorname{rk}([i,j]) \le r_{ij}$, we don’t need to also say that $\operatorname{rk}([i+1,j]) \le r_{ij}$ or $\operatorname{rk}([i,j-1]) \le r_{ij}$. So it’s possible to eliminate the condition on $[i,j]$ whenever $r_{ij} = r_{i-1,j}$ or $r_{ij} = r_{i,j+1}$, and similarly whenever $r_{ij} = r_{i+1,j} + 1$ or $r_{ij} = r_{i,j-1} + 1$. Chasing through the definitions, one can see that this is equivalent to the following:

Definition 3.7.

Let $(r_{ij})$ be an interval rank matrix of rank $d$. We can use it to define a partial permutation matrix (that is, a matrix whose entries are all 0 except for at most one 1 in each row and column) by putting a 1 in position $(i,j)$ if and only if $r_{ij} = r_{i+1,j} = r_{i,j-1} = r_{i+1,j-1} + 1$.

Take the partial permutation associated to an interval rank matrix, and “cross out” every position strictly below or to the left of a 1, and every empty row and empty column. We are left with some intact entries in the matrix, called the diagram of the partial permutation. An interval $[i,j]$ is called essential if position $(i,j)$ is in the upper-right corner of a connected component of the diagram.
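The crossing-out procedure is easy to implement literally. In the sketch below (ours), rows are numbered downward from 1, so “below a 1” means a larger row index, and we also drop the positions of the 1s themselves, as in the usual Rothe diagram; both conventions are our guesses at the intended picture.

```python
def diagram_and_essential(ones):
    """Diagram of a partial permutation, given the set of (row, col) positions
    of its 1s, together with the upper-right corners of its components."""
    live_rows = {r for r, _ in ones}
    live_cols = {c for _, c in ones}
    diagram = {(r, c)
               for r in live_rows for c in live_cols
               if (r, c) not in ones
               and not any(r1 < r and c1 == c for r1, c1 in ones)   # no 1 above
               and not any(r1 == r and c1 > c for r1, c1 in ones)}  # no 1 to the right
    essential = {(r, c) for r, c in diagram
                 if (r - 1, c) not in diagram and (r, c + 1) not in diagram}
    return diagram, essential

# A partial permutation with 1s at (1,2), (3,4), (4,1); row 2 and column 3 are empty.
d, e = diagram_and_essential({(1, 2), (3, 4), (4, 1)})
print(sorted(d), sorted(e))   # [(1, 4)] [(1, 4)]
```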

Proposition 3.8.

If $M$ is an interval rank matroid, the rank conditions coming from the essential set define $X_M$ as a scheme.


This is [8, 2.3]. ∎

This gives us a good way to choose our interval: we can find the essential set of our interval rank matroid and take, among all essential intervals starting after 1 which are tied for the rightmost right endpoint, the one with the rightmost left endpoint. If this interval is $[i,j]$, then the shift $\operatorname{sh}_{j,i-1}$, which carries the condition on $[i,j]$ to one on $[i-1, j-1]$, will clearly always satisfy the hypotheses of Lemma 3.5.

The last task is to describe what the result of this shift is. After we perform the shift, we will be left with a bunch of rank conditions on intervals inside $[n]$. We can take the pseudo-rank function generated by these conditions and try to use it to fill out an interval rank matrix. In general, we will fail: what we get won’t always be an interval rank matrix, because it won’t satisfy property 3 in Definition 3.6. So we need to figure out how to split up the subscheme of $G(k,n)$ we get by imposing these new conditions into irreducible components. This turns out to be surprisingly straightforward.

Lemma 3.9.

The intersection of interval rank varieties is a reduced union of interval rank varieties.


This is [8, 2.2]. ∎

Proposition 3.10.

Let $(r_{ij})$ be an interval rank matrix, possibly not satisfying property 3 in Definition 3.6. Suppose the $2 \times 2$ block

$$\begin{pmatrix} r & r+1 \\ r & r \end{pmatrix} \qquad (\text{rows } i, i+1;\ \text{columns } j-1, j)$$

appears somewhere in the middle of the matrix, violating property 3. If $R_1$ is the rank matrix with that block replaced by

$$\begin{pmatrix} r & r \\ r & r \end{pmatrix}$$

and $R_2$ is the rank matrix with that block replaced by

$$\begin{pmatrix} r & r+1 \\ r-1 & r \end{pmatrix},$$

then the scheme defined by $(r_{ij})$ is the union of the schemes defined by $R_1$ and $R_2$.


As sets, this is clear. The statement about subschemes then follows from Lemma 3.9. ∎

So now we have our procedure. Starting with a Richardson variety — or indeed any interval rank variety at all — we can perform a sequence of shifts, splitting the results up according to the prescription in Proposition 3.10 whenever we have more than one irreducible component. If we arrive at a point where no more shifting can be done, we must be in a position where every essential interval starts at 1, that is, we must have a Schubert variety. And every shift we do either decreases the sum of the left and right endpoints of all essential intervals (if the shift is irreducible) or expresses the cohomology class we’re interested in as a nontrivial sum of smaller cohomology classes (if the shift isn’t irreducible), and neither of these can continue forever. So this process must terminate eventually, and when it does we are left with a bunch of Schubert varieties with respect to the standard basis of $\mathbb{C}^n$.

Chapter 4 Positroid Varieties

In addition to the fact that they fill in the gap between Richardson and Schubert varieties in the right way, interval rank varieties have several nice geometric properties. They share these properties with a larger class of matroid varieties, which we will describe in this section.

Definition 4.1.

A cyclic interval is a subset of $[n]$ which can be written as a cyclic permutation applied to an interval.

Proposition 4.2.

If $M$ is a matroid of rank $k$ on $[n]$, the following are equivalent:

  1. $M$ is generated by rank conditions on cyclic intervals.

  2. $M$ is the matroid of an $\mathbb{R}$-point of the Grassmannian for which every Plücker coordinate is nonnegative.

  3. $X_M$ is the image of a Richardson variety in the flag variety under the natural projection map $F\ell(n) \to G(k,n)$.


For the equivalence of (1) and (2), see [12]. For (2) and (3), see [10]. ∎

Definition 4.3.

A matroid satisfying any of the equivalent conditions just listed is called a positroid, and its matroid variety is called a positroid variety.

Several nice properties of positroid varieties are described in [7]. In particular, they are always reduced, irreducible, and Cohen-Macaulay, and unlike general matroid varieties (see Counterexample 2.7) they are always cut out by the vanishing of Plücker coordinates. Positroids are very well-studied already, and there are several different combinatorial gadgets that have been invented to describe them, some of which are described in [10].

Similarly to how we handled interval rank varieties, we can describe a positroid by saying what the rank of every cyclic interval is. We wind up with something analogous to Definition 3.6.

Definition 4.4.

Take a positroid $M$ on $[n]$. We’ll think of elements of $[n]$ as representatives of equivalence classes of integers mod $n$ with the obvious cyclic order (that is, 1 comes right after $n$). We’ll use interval notation with this in mind; for example, if $n = 5$, then we’ll write $[4,2] = \{4, 5, 1, 2\}$. In particular, $[i+1, i] = [n]$, whereas $[i, i] = \{i\}$. We can form a cyclic rank matrix $(r_{ij})$ by setting $r_{ij} = \operatorname{rk}([i,j])$ for any integers $i, j$ with $i \le j \le i + n - 1$.
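The wrap-around notation is easy to mechanize; a small helper of our own:

```python
def cyclic_interval(a, b, n):
    """The cyclic interval [a, b] inside [n] = {1, ..., n}, wrapping past n."""
    out = [a]
    while out[-1] != b:
        out.append(out[-1] % n + 1)
    return out

print(cyclic_interval(4, 2, 5))   # [4, 5, 1, 2]
print(cyclic_interval(2, 4, 5))   # [2, 3, 4]

# On [5] there are 21 distinct cyclic intervals: five of each size 1 through 4,
# plus [5] itself, which arises as [i+1, i] for every i.
n = 5
distinct = {frozenset(cyclic_interval(a, b, n))
            for a in range(1, n + 1) for b in range(1, n + 1)}
print(len(distinct))   # 21
```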

The conditions in Definition 3.6 are again necessary and sufficient for a rank matrix to have arisen from this procedure. We can also replicate the essential set machinery in this setting:

Definition 4.5.

We can form an affine permutation matrix from our cyclic rank matrix using the same condition we used for interval rank varieties: put a 1 in position $(i,j)$ if $r_{ij} = r_{i+1,j} = r_{i,j-1} = r_{i+1,j-1} + 1$, and a 0 otherwise. Unlike in the interval rank case, every row and every column will have exactly one 1.

The essential set is also defined exactly as before: cross out all the positions strictly below or to the left of a 1 in the partial permutation matrix, and take the positions which are at the upper-right corners of their connected components.

By convention, we don’t consider positions on the very upper-right edge of the matrix (that is, ones where $j = i + n - 1$) to be essential. Again, imposing the rank conditions corresponding to the essential intervals is enough to define a positroid variety in $G(k,n)$ as a scheme.

Example 4.6.

The positroid of rank 3 on generated by forcing , , and to have rank 2 has the following cyclic rank matrix:

We think of the matrix as repeating indefinitely in the northwest and southeast directions. So, for example, the 3 printed in the fourth row and fourth nonempty column indicates that has rank 3. The affine permutation corresponding to this rank matrix is the one with 1’s in the underlined spots. We’ll sometimes write affine permutations as functions, listing the images of each element of in order. So, for example, this one is .
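The function description of an affine permutation can be sketched in Python. The specific window below is invented for illustration (the example's actual values did not survive); any list of n integers whose residues mod n are distinct determines such a bijection.

```python
def affine_perm(window):
    """An affine permutation of the integers with period n = len(window):
    it is determined by the images of 1, ..., n (the `window`) and is
    extended to all integers by the rule f(i + n) = f(i) + n."""
    n = len(window)
    def f(i):
        q, r = divmod(i - 1, n)
        return window[r] + q * n
    return f

# Hypothetical window [2, 4, 3] with n = 3: the residues 2, 1, 0 mod 3
# are distinct, so f is a bijection of the integers.
f = affine_perm([2, 4, 3])
```

For instance, f(4) = f(1) + 3 = 5, matching the periodicity that lets the rank matrix repeat in the northwest and southeast directions.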

As we have already mentioned, finding the cohomology class of an arbitrary matroid variety is probably an impossible task. But given how nice positroid varieties are, it seems much more reasonable that there might be a nice way to describe their classes. There is a sense in which this has already been done in [10]: the authors of that paper give a procedure which takes a positroid and outputs a symmetric function which represents the class of in the cohomology ring of . But the symmetric function they give (called the “affine Stanley symmetric function”) is not always a nonnegative linear combination of Schur functions. Instead, when expanded in the Schur basis all the minus signs happen to appear only in front of Schur functions which map to zero in .

It would be good to instead have a “positive” rule, like we had for interval rank varieties, that is, a rule that takes in a positroid and simply outputs the coefficients of the Schuberts in its cohomology class without having to go through the computationally opaque step of finding a representative for a symmetric function modulo an ideal.

Perhaps we could proceed in a way similar to our strategy for interval rank varieties: find a shift that always works, do it to get a union of positroid varieties, and continue until we have only Schuberts. The rest of this paper is dedicated to exploring the extent to which this plan can be successful.

Chapter 5 Complications with Shifting Positroids

5.1 The Square Positroid

Note that the choice of which sequence of shifts to perform is much less clear in this case. Since the elements of an interval rank matroid had a natural linear order, it made sense to start at one end and perform the shortest and rightmost possible shifts in order until the end. In a positroid, of course, there is no “rightmost,” so any such choice would be more arbitrary than in the interval rank case.

At any rate, it doesn’t matter: there is a positroid variety in , which will be defined in a moment, for which no nontrivial shift is a reduced union of positroid varieties. So if we are going to use geometric shifts to find the class of a positroid variety, it won’t be as easy as it was for interval rank varieties.

Figure 5.1: A projective model of the “square positroid.”
Counterexample 5.1.

The square positroid is a rank-3 positroid on . Its essential intervals are , , , and , each of which has rank 2. A picture of a projective model of is shown in Figure 5.1.

The picture makes it clear that this matroid has symmetry. Table 5.1 is a catalog of the results of performing every possible geometric shift on up to symmetry. Each entry in the table is a list of the irreducible components of the corresponding shift. In this table and in the other similar tables later in this paper, we will describe a matroid by listing the nonredundant conditions on its rank function; each row of the table corresponds to one connected component. We will write, for example, to mean that the set has rank 2. The rank of any set not listed is understood to be the largest possible number that doesn’t violate the condition . In this way, itself could be described by the conditions

shift result
no change
not a matroid variety, described in the text
Table 5.1: The results of the possible geometric shifts of up to symmetry.

Some features of the table are worth pointing out explicitly. First, apart from the shift , which does nothing, no shift has the property that every irreducible component of its result is a positroid variety. This means that the naive goal of just replacing the words “interval rank variety” with “positroid variety” everywhere in Section 3 and hoping to find the right shift order cannot possibly work — there is no way to stay entirely inside the world of positroid varieties using geometric shifts.

Also worth discussing is the fifth component of the shift in the table, which we’ll call . As indicated there, is not a matroid variety. Its ideal in the coordinate ring of the Grassmannian is generated by all the Plücker coordinates that contain 1 (which exactly forces 1 to be a loop) along with the single cubic binomial

This ideal is prime, and the matroid of a generic point of is simply the matroid in which 1 is a loop.

The preceding counterexample does much to dash our hopes of a geometric-shift-based way to find the cohomology class of a positroid variety. However, at least in rank 3, all is not lost. Look at the entry in Table 5.1 for the shift . The second component listed is definitely a positroid variety, since all the conditions are on cyclic intervals. The first component is not: one of the conditions is on the set . But notice that the number 4 doesn’t appear in that section of the table; there are no conditions placed on that element of the matroid at all. And if 4 is deleted from the matroid, becomes a cyclic interval and we have a positroid again.

So while the first component of the shift isn’t a positroid variety, it is the variety of a free extension of a positroid (see Definition 2.11). From a certain perspective, it’s not so strange that it worked out this way. In the original description of , 4 appears in only one of the essential rank conditions — the one on . This is exactly the sort of situation that made us happy when we were working with interval rank varieties.

5.2 Partial Success in Rank 3

In fact, for positroids of rank 3 we can always arrange for this to happen. Since we will be making use of free extensions and Proposition 2.12, it will be simpler to work in the equivariant cohomology ring for this computation. The geometric shift is defined in exactly the same way in this context, and we can extract results about by restricting classes inside to the open subvariety consisting of full-rank matrices, and using the fact that

Theorem 5.2.

If is a positroid of rank 3, there is a geometric shift we can apply to which, up to cohomology, results in a reduced union of positroid varieties and free extensions of positroid varieties.

Before we prove this, we first analyze the essential set of a positroid of rank 3. The essential conditions will, of course, be on sets of rank 0, 1, or 2. By repeatedly using Corollary 2.10, we may reduce to the case where has no loops, that is, there are no nonempty subsets of rank 0.

So all essential conditions must be on sets of rank 1 or 2. Conditions of rank 1 (that is, conditions which force elements of the matroid to be parallel) put an equivalence relation on , and any essential rank-2 interval must be a union of these equivalence classes. This is because if some interval has rank 2 and some interval has rank 1, then there are two nonisomorphic ways to resolve this: either has rank 2 or has rank 0. (One can deduce this formally by repeatedly applying Definition 4.5.) This is a problem unless is empty or , that is, contains the entire equivalence class or none of it.

Furthermore, the intersection of two essential intervals must have rank at most 1: if and have rank 2, and , then either or . But in the latter case, and wouldn’t have been essential intervals, because the rank conditions on and would be implied by the condition on . This means that, provided and intersect in an interval (which must happen unless ), consists of a single equivalence class.
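The conditions just derived are easy to check mechanically. Here is a Python sketch; the edge sets and parallel classes in the example are invented for illustration (the text's own sets did not survive extraction). The check is that each pairwise intersection of rank-2 essential intervals, when nonempty, lies inside a single parallel class.

```python
def intersections_are_single_classes(edges, parallel_classes):
    """Check that the intersection of any two rank-2 essential intervals
    ("edges") is either empty or contained in one parallel class
    (rank-1 equivalence class)."""
    classes = [frozenset(c) for c in parallel_classes]
    for a in range(len(edges)):
        for b in range(a + 1, len(edges)):
            overlap = set(edges[a]) & set(edges[b])
            if overlap and not any(overlap <= c for c in classes):
                return False
    return True

# Hypothetical square-like configuration on {1, ..., 8}: four edges that
# meet only in single points, with every parallel class a singleton.
edges = [{1, 2, 3}, {3, 4, 5}, {5, 6, 7}, {7, 8, 1}]
classes = [{i} for i in range(1, 9)]
```

In the example each pairwise overlap is a single point, so the check passes; if two edges shared two non-parallel points, say {1, 2, 3, 4} and {3, 4, 5} with singleton classes, it would fail.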

Figure 5.2: An example of a “model polygon” of a positroid

So we can think of a projective model of as a polygon, which we will call the model polygon, with points drawn at all the vertices and at various points along the edges. Each edge of the polygon is a rank-2 essential interval and each point is a rank-1 essential interval (which may consist of multiple parallel elements of ). We’ll make use of this description throughout the section.

For example, Figure 5.2 shows a picture of a model polygon of a positroid. In this example, ; there are four edges (rank-2 essential intervals): , , , and ; and there are three rank-1 essential intervals: , , and .

We can immediately eliminate the case where there are only two rank-2 essential intervals which intersect on both ends. This happens, for example, in the rank-3 positroid on in which and are the only essential intervals, each of rank 2. At every point of the corresponding open positroid variety, 1 and 4 are parallel, and something similar happens in general. Suppose with appearing cyclically consecutively. The case we’re interested in is where our rank-2 essential sets are, say, and . But by the logic from two paragraphs ago, this means that, on the open set of full-rank matrices, has rank 1. So we could reorder our points in the order , and our positroid turns out to actually be an interval rank matroid.

To prove Theorem 5.2, we’ll need some lemmas:

Lemma 5.3.

Let . Suppose is a free extension of a positroid of rank 3 in which all rank conditions are on cyclic intervals in , and in which and are each only in one essential interval, which has rank 2. Let be a matroid with the same conditions, except having rank conditions on cyclic intervals in . Then and intersect generically transversely in .

The proof of this is somewhat involved, and it will be helpful to establish some more lemmas beforehand.

Lemma 5.4.

Suppose and satisfy the hypotheses of Lemma 5.3. Then the generic point of is a nonsingular point of both and .


Since and each appear in at most two essential intervals, each of rank 2, at a generic point of this intersection, the ’th and ’th columns of the matrix are nonzero. And the dimension of the tangent space of is the same at any point at which the ’th and ’th columns are nonzero, since there is an automorphism interchanging any two such points, so each such point is a nonsingular point of . ∎

Lemma 5.5.

Under the hypotheses of Lemma 5.3, let , and pick a tangent vector at in . Then there is a tangent vector to which agrees with in the columns in .


First, if one or both of or is a free extension of a positroid, then there is a column which is free in both and . So it’s enough if we can get any tangent vector at in the projection of the tangent space away from that column, since we can then fix it in any way we want without leaving either or . So it’s enough if we handle the case where and are actually positroids.

We think of as a point over the dual numbers, that is, a matrix whose entries have the form a + bε with ε² = 0. We’re looking for another such matrix which lies in . Like , should map to after setting ε to 0, and it should agree with on the columns in .

We will determine the entries of column by column. For , write , , and for the ’th columns of , , and respectively. As we mentioned above, at a generic point of the intersection, we can assume that and are nonzero. First, imposes no conditions on the columns in except and themselves, so we are free to set those columns of to whatever we like without leaving . So we simply need to verify that there is a matrix with the right entries in columns and that lies in and still maps to .

Following the “model polygon” description of rank-3 positroids from above, let be the edge (that is, rank-2 essential condition) containing and let be the edge containing . Let be the rightmost (that is, further from ) element of so that isn’t a multiple of , or set if there is no such element. Similarly, either let be the leftmost element of with not a multiple of or set .

If , then we can construct by making the columns in agree with those of , as we must, and letting columns and be as in , that is, have 0 as their coefficient of ε. We similarly let any columns outside of be as in . By construction, and span the columns in . Given an with , say , we set . After doing this, we see we have accomplished our goal: all the essential rank conditions defining have been met by , and by construction, maps to after killing ε.

Finally, if , we can do the same thing, except that since and , we can’t force those columns to agree with . But in this case, the entire interval is a single essential rank-2 interval, so we can simply set and equal to and and handle the interior columns as in the preceding paragraph. ∎

Proof of Lemma 5.3.

Take a generic point of . By Lemma 5.4, this is a nonsingular point of both and . The tangent space at a point of is isomorphic to , so if is a generic element of , it’s enough to show that the tangent spaces of and at together span all of . We’ll show that the projection of the tangent space at of to the subspace of spanned by the columns in is surjective. This, together with the symmetrical claim about and , is enough to prove the result. Take a tangent vector at our intersection point. By Lemma 5.5, there is a tangent vector to which agrees with it in the columns in . But since imposes no conditions on the columns in , we can set the columns in to whatever we want by adding a vector tangent to without disturbing anything else. ∎

Lemma 5.6.

For any -invariant equidimensional subvariety , if has components contained in , then its equivariant cohomology class, when expanded in the Schur basis, will contain some terms corresponding to partitions with more than columns.


First, all the coefficients in the Schur basis are nonnegative: if some appears with a negative coefficient, then free-extend enough times so that is at least the number of columns in . Then removing and passing to the Grassmannian gives us a negative coefficient in the Schur basis there, which can’t happen. So if there is a component of contained in , then it maps to 0 after passing to the Grassmannian, so its class lies in the kernel of that map, that is, it’s a nonnegative sum of terms corresponding to partitions with more than columns. ∎

Lemma 5.7.

Suppose the model polygon of doesn’t have two edges which overlap on both ends, and that at least two of the corners don’t have any parallel points. Then is generically equal to the variety cut out in by the rank conditions corresponding to its essential intervals.


Let be the subvariety of cut out by the rank conditions corresponding to the essential intervals of . Since we know in , and therefore in , it’s enough to check that has no components contained in .

Let be the subvarieties corresponding to each of the rank conditions on the edges of the model polygon, except with only one element of each rank-1 essential interval included. In the example in Figure 5.2, would be 4, and the corresponding sets could be , , , and . Let be the ones corresponding to the rank-1 essential conditions. In the example, and the sets are , , and .

We know from Lemma 5.3 that the intersection is generically transverse, and we claim the same is true of each . To see this, suppose is forcing to be parallel, and that is the one with a condition imposed on it by the ’s. Then at any point of the intersection, columns are free in the tangent space to , and all the other columns are free to vary in the tangent space to , so together their tangent spaces span.


We then have , and we claim that they are all equidimensional of the same dimension. The fact that follows either from direct computation or from the expected codimension machinery developed in the next section. We know that is equidimensional (in fact, irreducible) from the positroid machinery, and is equidimensional because it’s the generically transverse intersection of Schuberts. Finally, the hypothesis on the corners of the model polygon of allows us to conclude via Lemma 5.3 that is the generically transverse intersection of two interval rank varieties.

So it is sufficient to show has no components contained in . By Lemma 5.6, this is the same as showing that no terms of correspond to partitions with more than columns. But we can compute the class of directly. From its description as a transverse intersection, its class is

where is the number of points on edge of the model polygon, and is the number of points in the ’th rank-1 essential interval. By the Pieri rule, the number of columns in each term of this is at most

Some simple counting shows that

If is an interval rank matroid, this follows directly from that theory, so we may assume that the model polygon of is closed, with as many corners as edges. Since we’ve eliminated the case of two edges by hypothesis, the number of edges is at least 3. Altogether, this says that when we expand the class of in the Schur basis, each term corresponds to a partition with at most columns, which gives the result. ∎

Lemma 5.8.

For any matroid on in which the point is not a loop, write for the matroid on obtained by taking the free extension of in which the new element is and adding the condition that is parallel to . (We call this a parallel extension of .) Fix . Suppose that for some matroids . Then is not a loop in any , and


For a subvariety , write for the variety

in , and write for its closure in . Then by definition is the fiber over of .