Testing Core Membership in Public Goods Economies
Abstract
This paper develops a recent line of economic theory seeking to understand public goods economies using methods of topological analysis. Our first main result is a very clean characterization of the economy’s core (the standard solution concept in public goods). Specifically, we prove that a point is in the core iff it is Pareto efficient, individually rational, and the set of points it dominates is path connected.
While this structural theorem has a few interesting implications in economic theory, the main focus of the second part of this paper is on a particular algorithmic application that demonstrates its utility. Since the 1960s, economists have looked for an efficient computational process that decides whether or not a given point is in the core. All known algorithms so far run in exponential time (except in some artificially restricted settings). By heavily exploiting our new structure, we propose a new algorithm for testing core membership whose computational bottleneck is the solution of convex optimization problems on the utility function governing the economy. It is fairly natural to assume that convex optimization should be feasible, as it is needed even for very basic economic computational tasks such as testing Pareto efficiency. Nevertheless, even without this assumption, our work implies for the first time that core membership can be efficiently tested on (e.g.) utility functions that admit “nice” analytic expressions, or that appropriately defined approximate versions of the problem are tractable (by using modern black-box approximate convex optimization algorithms).
J.4 Social and Behavioral Sciences
1 Introduction
1.1 Background on Public Goods Economics
A basic question in economics is to understand the forces governing the production of public goods. A good is public if its use by one person does not reduce its availability to others, and if none are excluded from using the good. Examples include public parks, research information, a clean environment, national defense, radio broadcasts, and so on.
Public goods economies were first explicitly abstracted in a classic paper by Samuelson in 1954 [28], and have since become central objects of study for economists. An important feature of public goods economies is that they are not well modeled by the individualistic “best-response dynamics” that govern familiar economic equilibrium concepts such as the Nash or Walrasian equilibrium. Rather, public goods typically arise as the result of a communal process – e.g. negotiations, treaties, or taxes – that allows the cost of production to be amortized over all agents that stand to benefit from the good. Accordingly, public goods economics inspired the development of cooperative game theory, which seeks to understand these cooperative dynamics and when an agreement to produce public goods is “stable” or when it is doomed to fall apart.
What, exactly, does “stability” mean in this context? The most standard notion is coalitional stability, which is given as follows:
Definition (Informal).
Let $x$ be an outcome in a public goods economy with $n$ agents (i.e. $x$ describes the amount of work each agent contributes to produce a public good). We say that $y$ is a deviation on $x$ for a nonempty coalition $S$ of agents if no agents outside of $S$ perform work (i.e. $y_i = 0$ for all $i \notin S$) and all agents in $S$ prefer $y$ to $x$. If there is no deviation on $x$ for any coalition, then we say $x$ is coalitionally stable. The set of coalitionally stable outcomes is called the core of the economy.
1.2 (Non)Algorithmic Properties of Public Goods Economies
An inherent conceptual drawback of coalitional stability is its exponential-size definition. In other words, for an outcome $x$ to be coalitionally stable, every single one of the $2^n - 1$ possible nonempty coalitions of agents must not have a deviation. Hence, the naive algorithms for testing the coalitional stability of $x$ must perform computations for all $2^n - 1$ coalitions, and so they suffer exponential runtime. Coalitional stability is not a very convincing solution concept if its implicit notion of “instability” assumes that agents can quickly make exponential-time computations in order to find deviations whenever they exist.
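To make the exponential blow-up concrete, the naive test can be sketched as follows. The two-agent utility function and the finite candidate grid are illustrative stand-ins of our own choosing (the real problem quantifies over a continuum of outcomes), but the loop structure shows why the runtime scales with the number of coalitions:

```python
from itertools import chain, combinations

def u_toy(x):
    # illustrative 2-agent utility: everyone enjoys total production,
    # while each agent pays a private convex cost for their own effort
    s = sum(x)
    return tuple(s - xi * xi for xi in x)

def naive_core_test(x, u, candidates):
    """Brute-force coalitional stability check: enumerate all 2^n - 1
    nonempty coalitions and scan candidate outcomes for a deviation."""
    n = len(x)
    ux = u(x)
    coalitions = chain.from_iterable(combinations(range(n), k)
                                     for k in range(1, n + 1))
    for S in coalitions:
        members = set(S)
        for y in candidates:
            # agents outside the coalition must perform no work
            if any(y[i] != 0 for i in range(n) if i not in members):
                continue
            uy = u(y)
            # a deviation makes every coalition member strictly better off
            if all(uy[i] > ux[i] for i in members):
                return False  # coalition S can profitably deviate
    return True

grid = [(a / 2, b / 2) for a in range(3) for b in range(3)]  # actions in {0, .5, 1}
```

On this toy economy, `naive_core_test((1.0, 1.0), u_toy, grid)` reports stability, while `(0.5, 0.5)` is rejected because the grand coalition prefers `(1, 1)`; either way the outer loop ranges over every nonempty coalition.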
This problem has led economists studying public goods to seek more clever methods for solving computational problems related to the core of a public goods economy, which avoid this exponential behavior. Some of the initial work in this vein attacked the closely-related problem of simply outputting any core outcome. The first such solution appeared in the 1960s, when Scarf [29] proved that “balanced” games have nonempty cores, by means of an (exponential-time) algorithm that outputs a core point and provably always terminates. Similar results were proved in the setting of public goods economies by Chander and Tulkens [8] and Elliot and Golub [15]. Meanwhile, follow-up work has suggested that the slow runtime of Scarf’s algorithm may be inherent to the problem: Kintali et al. [23] showed that Scarf’s algorithm cannot be improved to polynomial runtime (unless P = PPAD), and Deng and Papadimitriou [12] showed that it is NP-complete just to detect whether or not the core is empty (let alone find a point in the core), even in the simple class of graphical games, and even when Scarf’s assumption of “balance” is dropped. Other notable hardness results in this vein have come from Conitzer and Sandholm [10] and Greco et al. [21].
There has also been considerable prior work on the “membership testing” problem of determining whether a point taken on input is in the core (this is the question addressed in this paper). Deng and Papadimitriou’s work [12] also implies that membership testing is NP-hard even in the restricted setting of graphical games, although there are straightforward efficient algorithms in the further restricted setting where the game is superadditive. Conitzer and Sandholm [9] showed that membership testing is coNP-complete in games where coalition values have a “multiple-issue” representation in polynomial space. Faigle et al. [16] showed that the problem is NP-complete in a variant of graphical games where payoffs are given by minimum spanning trees of subgraphs. Sung and Dimitrov [31] showed coNP-completeness for membership testing in “hedonic coalition formation games.” Goemans and Skutella showed NP-completeness for both emptiness and membership testing in “facility location games,” and gave formulations of these problems as LP relaxations [19]. There has been work on games defined by marginal contribution nets (MC-nets) [22, 14], in which the values attainable by coalitions are determined by succinct logical formulae. Li and Conitzer [25] studied emptiness testing and membership testing under various classes of formulae, and obtained various algorithms or NP-hardness results depending on the complexity of the formulae allowed.
1.3 Our Results
A recently popular trend in public goods research has been to model economies as networks, and then seek to analyze the economy by studying the topological properties of the underlying “benefits network,” describing the ability of agents to transfer utility to each other at any given point (see for example [3, 5, 1, 2, 15]).
Much of the initial work focused on Nash equilibria [3, 5, 1, 2] of the economy.
A major stride was recently taken by Elliot and Golub [15], who extended the theory to show that the Lindahl equilibria of the economy are characterized by the eigenvectors of the benefits network. This raises the question of whether the core admits a similarly clean topological characterization.
Our first main result achieves this goal. We show that the core can be characterized as follows: Theorem. Let $x$ be an outcome in a public goods economy and let $D(x)$ be the set of points that no agent prefers to $x$. Then $x$ is in the core if and only if it is Pareto efficient, individually rational, and $D(x)$ is path connected. (Here, the definition of path connectedness is the standard topological one: for any two points $a, b \in D(x)$, there is a continuous function $p : [0, 1] \to D(x)$ satisfying $p(0) = a$, $p(1) = b$, and $p(t) \in D(x)$ for all $t \in [0, 1]$. The image of $[0, 1]$ under $p$ is called a path.)
An interesting consequence of this theorem is a precise description of the relationship between Lindahl equilibria and core outcomes: Theorem. Assuming that the utility function is differentiable, the Lindahl equilibria of a public goods economy are precisely the core points whose core membership can be certified using only local information. The proof of this corollary is essentially immediate by combining Theorem 1.3 with a more technical phrasing of Elliot and Golub’s result.
While we believe that these two structural theorems hold intrinsic interest, the second half of this paper is intended to demonstrate their power by an application to the algorithmic problem discussed earlier. We have previously suggested the intuition that the algorithmic core membership testing problem is hard because the naive algorithms must check an exponential number of coalitions for a potential deviation. However, Theorem 1.3 lets us avoid this brute-force behavior: after checking for Pareto efficiency and individual rationality (which is quite easy), we are left only with the task of checking whether or not $D(x)$ is path connected. The complexity of this task is nonobvious, but we show that it can be done fairly efficiently, yielding the following result:
Theorem. Given an outcome $x$ in an $n$-agent public goods economy, there is an algorithm that decides whether or not $x$ is in the core of the economy. The computational bottleneck in this algorithm is the solution of $O(n)$ convex programming problems on the utility function of the economy. Hence we essentially have the first polynomial-time tester for coalitional stability in an unrestricted public goods economy, up to the implementation of the necessary convex programming oracle. It is fairly natural in our economic metaphor to assume that convex programming should be tractable: it corresponds to the negotiation process of a group of agents trying to determine how well they can maximize a joint utility function as a group. If even this is impossible, then it is essentially hopeless to efficiently test core membership. One cannot even test the more basic property of Pareto efficiency, a necessary step towards testing core membership, without assuming some computational power along these lines.
Even so, if one does not wish to introduce such assumptions, Theorem 1.3 implies that several broad special cases of membership testing have efficient algorithms. The most obvious of these is when the utility function and its derivative can be described by a “nice” analytic function on which the standard derivative-based method for exact convex optimization goes through. Less obviously, if convex optimization really is hard for the given utility function (or the utility function of the economy is unknown), one can employ modern approximate convex optimization solvers, which treat the utility function as a black box that can be queried, to solve certain natural approximate relaxations of the core membership testing problem. We discuss this point in the conclusion of the paper, since it is easier to be specific here once the economic model is familiar.
We consider it somewhat surprising that these algorithmic results are possible, given a general dearth of positive results in the area. Moreover, the approach taken by the algorithm is fairly intuitive and seems to plausibly reflect practical behavior. Starting with the grand coalition, we show (via Theorem 1.3) that we can either determine that the current coalition has a deviation, or we can identify a “least-valuable player” who is formally the least likely agent to participate in a deviation. We then kill this agent and repeat the analysis on the survivors. After $n$ rounds, we have either killed every agent (and thus determined that the given point is coalitionally stable), or we have explicitly found a surviving coalition with a deviation. It is quite reasonable to imagine that a practical search for a deviating coalition might employ a “greedy” method of iteratively killing the agent who seems to be least pulling their weight at the current agreement; an insight of Theorem 1.3 is that this search heuristic is in fact thorough and will provably produce the right answer.
1.4 Comparison with Prior Work.
Elliot and Golub [15] recently studied the Lindahl equilibria in public goods economies, with a focus on characterizing the set of solutions rather than algorithmically computing/testing them. More specifically, they frame the typical model of public goods economies in the language of networks, and use this to equate the eigenvectors of the “benefits network” with the Lindahl equilibria of the economy. A less general version of this networks interpretation was implicitly used in several other papers concerning Nash equilibria of public goods economies, for example [3, 5, 1, 2]. In this paper, we will adopt the more general networks-based phrasing of public goods economies used by Elliot and Golub, and we will rely on this insight in a critical way to prove our main results.
Per the discussion above, there has been lots of prior work on the algorithmic properties of the core, largely intended to confirm/refute the bounded rationality argument in some economic model. Three questions are commonly studied:

The membership testing problem (discussed above): is a given outcome in the core of the game?

The emptiness testing problem: is the core empty?

The member finding problem: output any solution in the core of the game (if nonempty).
We remark that the latter two problems are already closed in public goods economies: Elliot and Golub [15] show that the core is never empty except in certain degenerate cases, and it can be seen from the model below that the member finding problem is essentially identical to the general problem of convex optimization (which is well beyond the scope of this economically-minded research program). Hence, this work is entirely focused on the membership testing problem.
In order to frame these three questions as proper computational problems, past work has commonly defined a “compressed” cooperative game that allows the payoffs achievable by all possible coalitions to be expressed using only $\mathrm{poly}(n)$ input bits. For example, in a seminal paper by Deng and Papadimitriou introducing this line of research [12], the authors studied graphical games in which weighted edges are placed between agents and the value attainable by a coalition is equal to the total weight contained in its induced subgraph. Upper and lower bounds are often obtained for these problems by exploiting particular features of the compression scheme. By contrast, our goal is to assume as little structure for the problem as possible (since our main results are upper bounds, this is the more general approach). Thus, we allow the economy to be governed by an arbitrarily complex utility function, which does not need to have a succinct representation, or even any algorithmic representation at all. Instead, we allow ourselves black-box constant-time query access to the utility function, which acts as an oracle and thus may have arbitrary complexity. The goal in this substantial generalization is to ensure that our results reveal structure of the core itself, rather than the nature of an assumed compression.
2 The Model and Basic Definitions
2.1 Notation Conventions
Given vectors $x, y \in \mathbb{R}^n$, we will use the following (partial) ordering operations:

$x \geq y$ means that $x_i \geq y_i$ for all $i$,

$x > y$ means that $x_i > y_i$ for all $i$, and

$x \gneq y$ means that $x_i \geq y_i$ for all $i$, and $x_i > y_i$ for some $i$.
Given a subset $S \subseteq [n]$ and a vector $x \in \mathbb{R}^n$, we write $x_S$ to denote the restriction of $x$ to the indices in $S$; that is, $x_S$ is the length-$|S|$ vector built by deleting the entry $x_i$ from $x$ for each $i \notin S$.
We use $\mathbf{0}$ and $\mathbf{1}$ as shorthands for the all-zeros and all-ones vectors, respectively.
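For concreteness, these ordering conventions can be transcribed into code as follows (the function names are our own, not notation from the model):

```python
def order_geq(x, y):
    """x >= y: componentwise weak dominance."""
    return all(a >= b for a, b in zip(x, y))

def order_gt(x, y):
    """x > y: strict in every component."""
    return all(a > b for a, b in zip(x, y))

def order_gneq(x, y):
    """x gneq y: weak everywhere and strict in at least one component."""
    return order_geq(x, y) and any(a > b for a, b in zip(x, y))

def restrict(x, S):
    """The restriction x_S: drop every entry of x whose index is not in S."""
    return tuple(x[i] for i in sorted(S))
```

For example, `(1, 2)` weakly dominates `(1, 1)` and dominates it in the third (mixed) sense, but not in the all-strict sense.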
2.2 Economic Model
We adopt the terminology of Elliot and Golub [15] when possible. The salient pieces of our economy are defined as follows:

The set of agents in the economy is given by $[n] = \{1, \dots, n\}$. A nonempty subset of agents in the economy is called a coalition. The coalition $[n]$ is called the grand coalition.

Each agent $i \in [n]$ chooses an action $x_i$, which can be any real number in the interval $[0, \infty)$. An outcome or point is a vector $x \in [0, \infty)^n$ built by concatenating the actions of all agents.

There is a continuous utility function $u : [0, \infty)^n \to \mathbb{R}^n$, which maps outcomes to a level of “utility” for each agent. In particular, agent $i$ prefers outcome $x$ to outcome $y$ iff $u_i(x) > u_i(y)$. The utility function has the following two properties:

Positive Externalities: whenever $x \gneq y$ with $x_i = y_i$, we have $u_i(x) > u_i(y)$. This assumption is what places us in the setting of public goods economies; intuitively, it states that an agent gains utility when other agents increase their production of public goods.

Convex Preferences: we assume that $u$ is concave. That is, for any outcomes $x, y$ and any $t \in [0, 1]$, we have $u(tx + (1 - t)y) \geq t \cdot u(x) + (1 - t) \cdot u(y)$. This standard assumption corresponds to the economic principle of diminishing marginal returns.
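For intuition, here is one concrete utility function satisfying both assumptions, together with numerical spot-checks. The functional form is our own illustrative example and is not part of the model:

```python
def u_example(x):
    # u_i(x) = (x_1 + ... + x_n) - x_i^2: the benefit is total production
    # (a public good), while the cost x_i^2 of one's own effort is private
    # and convex, so each u_i is concave
    s = sum(x)
    return tuple(s - xi * xi for xi in x)

def gains_from_others(u, x, i, j, bump=0.1):
    """Positive externalities: raising agent j's action (j != i) raises u_i."""
    y = list(x)
    y[j] += bump
    return u(tuple(y))[i] > u(x)[i]

def midpoint_concave(u, x, y, tol=1e-12):
    """Midpoint test for concavity: u((x+y)/2) >= (u(x)+u(y))/2, componentwise."""
    mid = tuple((a + b) / 2 for a, b in zip(x, y))
    return all(m >= (a + b) / 2 - tol
               for m, a, b in zip(u(mid), u(x), u(y)))
```

Both checks pass for this example at any sample points, since the externality enters linearly and the only nonlinearity is the concave term $-x_i^2$.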

2.3 Game Theory Definitions
We recap some wellknown definitions from the game theory literature.
[Pareto Efficiency] An outcome $y$ is a Pareto improvement on another outcome $x$ if $u(y) \gneq u(x)$. An outcome $x$ is Pareto efficient if there is no Pareto improvement on $x$. The set of Pareto efficient outcomes is called the Pareto Frontier.
The main solution concept that will be discussed in this paper is the core: Definition (Deviation). Given an outcome $x$, an outcome $y$ is a deviation from $x$ for a coalition $S$ if $y_i = 0$ for all $i \notin S$ and $u_i(y) > u_i(x)$ for all $i \in S$.
[The Core] An outcome $x$ is in the core of the economy if no coalition has a deviation from $x$ (equivalently, $x$ is coalitionally stable).
The next definition that will be useful in our proofs is the projected economy: Definition (Projected Economy). Given an economy described by agents $[n]$ and a utility function $u$, the projected economy for a coalition $S$ is the economy described by agents $S$ and utility function $u^S$, defined by $u^S(y) := u_S(z)$ for the point $z$ with $z_S = y$ and $z_{[n] \setminus S} = \mathbf{0}$.
In other words, the new $|S|$-dimensional utility function is obtained by fixing the actions of $[n] \setminus S$ at $\mathbf{0}$, allowing any action for the agents in $S$, and then using the old utility function to determine the utilities for $S$ in the natural way. We suppress the superscript $S$ when clear from context.
The dominated set of $x$, denoted $D(x)$, is defined as: $D(x) := \{\, y : u(y) \leq u(x) \,\}$.
In other words, $D(x)$ is the set of points that no agent prefers to $x$. Note that this is an unusually weak definition of dominance, in the sense that (for example) $D(x)$ contains $x$ itself.
3 A Topological Characterization of the Core
Our goal in this section is to prove the following structural theorem:
Theorem. Let $x$ be an outcome in a public goods economy. Then $x$ is in the core if and only if it is Pareto efficient, individually rational, and $D(x)$ is path connected.
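As a quick numerical illustration of the three conditions, consider a toy two-agent utility of our own choosing, with grid search standing in for exact optimization. At the outcome $(1, 1)$ below, the efficiency and rationality checks pass and the straight segment to the origin witnesses a path inside $D(x)$, while $(0.5, 0.5)$ already fails Pareto efficiency:

```python
def u_toy(x):
    # illustrative utility: u_i(x) = sum(x) - x_i^2
    s = sum(x)
    return tuple(s - xi * xi for xi in x)

def gneq(a, b):  # a dominates b: weak everywhere, strict somewhere
    return all(p >= q for p, q in zip(a, b)) and any(p > q for p, q in zip(a, b))

def pareto_witness(u, x, hi=2.0, steps=8):
    """Grid stand-in for an efficiency check: look for y with u(y) gneq u(x)."""
    ux, n = u(x), len(x)
    coords = [k * hi / steps for k in range(steps + 1)]
    def rec(prefix):
        if len(prefix) == n:
            y = tuple(prefix)
            return y if gneq(u(y), ux) else None
        for c in coords:
            found = rec(prefix + [c])
            if found:
                return found
        return None
    return rec([])

def individually_rational(u, x, hi=2.0, steps=8):
    """No agent prefers the best outcome it can guarantee acting alone."""
    n = len(x)
    for i in range(n):
        for k in range(steps + 1):
            alone = tuple(k * hi / steps if j == i else 0.0 for j in range(n))
            if u(alone)[i] > u(x)[i]:
                return False
    return True

def segment_in_D(u, x, a, b, steps=50):
    """Check that the segment from a to b stays inside D(x) = {y : u(y) <= u(x)}."""
    ux = u(x)
    for k in range(steps + 1):
        t = k / steps
        p = tuple((1 - t) * pa + t * pb for pa, pb in zip(a, b))
        if any(up > uxi + 1e-9 for up, uxi in zip(u(p), ux)):
            return False
    return True
```

This is only a discretized sanity check, of course; the theorem itself concerns the continuous economy.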
The vast majority of the technical depth of this theorem is tied up in the implication ($x$ is in the core $\Rightarrow$ $D(x)$ is path connected).
The remainder of this forwards implication ($x$ is in the core $\Rightarrow$ neither the grand coalition nor any singleton coalition has a deviation from $x$) is extremely straightforward: $x$ is individually rational iff each agent weakly prefers it to any outcome they can guarantee acting alone, which coincides with the notion that no singleton coalition has a deviation from $x$. In our model, Pareto efficiency coincides with the notion that the grand coalition has no deviation from $x$:
Claim 1.
Let $x$ be an outcome. If there is an outcome $y$ satisfying $u(y) \gneq u(x)$ (resp. $u(y) \lneq u(x)$), then there is an outcome $z$ satisfying $u(z) > u(x)$ (resp. $u(z) < u(x)$).
Proof.
We will prove the claim for the case $u(y) \gneq u(x)$; the case $u(y) \lneq u(x)$ follows from a symmetric argument.
Choose an agent $i$ for whom $u_i(y) > u_i(x)$, and then slightly increase $y_i$. Since $u$ is continuous, if we increase $y_i$ by a sufficiently small amount then we still have $u_i(y) > u_i(x)$. Additionally, by positive externalities we then have $u_j(y) > u_j(x)$ for all agents $j \neq i$. We can then slightly increase the actions of all agents, obtaining a point $z > y$, but with sufficiently small increases we do not destroy the property that $u(z) > u(x)$. ∎
In this section, we will first give a complete proof of the (easier) backwards implication of Theorem 3, and then we sketch the proof of the forwards implication. Due to space constraints, a full proof of the forwards implication can be found in Appendix A.
3.1 Backwards Implication of Theorem 3
First: Lemma. If $x$ is Pareto efficient, then every deviation $y$ from $x$ satisfies $y \leq x$.
Proof.
Let $T$ be the set of agents $i$ for which $y_i > x_i$, and suppose towards a contradiction that $T$ is nonempty.
Consider the point $z$ defined such that $z_T = y_T$ and $z_{[n] \setminus T} = x_{[n] \setminus T}$. We then have $u_{[n] \setminus T}(z) > u_{[n] \setminus T}(x)$ by positive externalities, since these points differ only in that the (nonempty) coalition $T$ has increased their actions. We also have $u_T(z) \geq u_T(y) > u_T(x)$, where the first inequality follows from positive externalities (since these points differ only in that the coalition $[n] \setminus T$ has weakly increased their action), and the second follows from the fact that $y$ is a deviation from $x$ for a coalition $S \supseteq T$ (since $y_i > x_i \geq 0$ for each $i \in T$).
We thus have $u(z) > u(x)$, which contradicts the fact that $x$ is Pareto efficient. Thus $T$ is empty and the lemma follows. ∎
Second:
Lemma. Suppose there is a path $P$ with endpoints $a$ and $b$ such that for every point $p \in P$ we have $p \in D(x)$. If $y$ is a deviation from $x$ for some coalition $S$ satisfying $y \leq a$, then $y$ also must satisfy $y \leq b$.
Proof.
We walk along $P$ from $a$ towards $b$ until we find the first point $q$ with $q_i < y_i$ for some agent $i$. If we reach $b$ before we find any such point $q$, it follows that every $p \in P$ satisfies $p \geq y$, and so $y \leq b$, as claimed. Otherwise, we find such a point $q$, and we argue towards a contradiction.
By continuity, at the first such crossing there is a point $p \in P$ with $p \geq y$ and $p_i = y_i$. We have $i \in S$, since $y_i > q_i \geq 0$ but $y_j = 0$ for all $j \notin S$. Since $p \geq y$ with $p_i = y_i$, by positive externalities we then have $u_i(p) \geq u_i(y)$. Since $i \in S$, and since $y$ is a deviation for $S$, this implies $u_i(y) > u_i(x)$. We then have $u_i(p) > u_i(x)$, which contradicts the assumption that $p \in D(x)$. Therefore no such point $q$ may be found. ∎
We can now show:
Proof of Theorem 3, Backwards Implication.
Assume that $x$ is robust to deviations by the grand coalition or any singleton coalition, and that $D(x)$ is path connected. Our goal is now to show that $x$ is in the core.
By Claim 1, the property that the grand coalition has no deviation from $x$ implies that $x$ is Pareto efficient. Thus, by Lemma 3.1 any deviation $y$ from $x$ satisfies $y \leq x$. Since no singleton coalition has a deviation from $x$ we have $\mathbf{0} \in D(x)$, and since $D(x)$ is path connected there is a path contained in $D(x)$ with endpoints $x$ and $\mathbf{0}$. Thus, by Lemma 3.1, we further have that a deviation $y$ must satisfy $y \leq \mathbf{0}$, i.e. $y = \mathbf{0}$. Since $\mathbf{0} \in D(x)$, no agent strictly prefers $\mathbf{0}$ to $x$ and so no such deviation exists; it follows that no deviations from $x$ exist, and so $x$ is a core outcome. ∎
3.2 Sketch of Forwards Implication of Theorem 3
We will denote by $\nabla_v u(x)$ the one-sided directional derivative of $u$ at $x$ in the direction $v$. In other words: $\nabla_v u(x) := \lim_{t \to 0^+} \frac{u(x + tv) - u(x)}{t}.$
A nontrivial but standard fact from analysis is that, since $u$ is concave and well-defined everywhere, this limit is well-defined for all $x$ and $v$, except when excluded by a boundary condition (e.g. if $x_i = 0$ but $v_i < 0$ for some agent $i$) – see [24].
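Numerically, the one-sided derivative can be approximated by a forward difference. The utility function below is our running toy example; note that at the outcome $(1, 1)$ the derivative in the direction of the point itself vanishes, which is the kind of local condition exploited in Section 3.3:

```python
def u_toy(x):
    # illustrative utility: u_i(x) = sum(x) - x_i^2
    s = sum(x)
    return tuple(s - xi * xi for xi in x)

def directional_derivative(u, x, v, t=1e-6):
    """Forward-difference proxy for the one-sided directional derivative:
    (u(x + t*v) - u(x)) / t with a small t > 0."""
    ux = u(x)
    xt = tuple(a + t * b for a, b in zip(x, v))
    return tuple((p - q) / t for p, q in zip(u(xt), ux))
```

For example, `directional_derivative(u_toy, (1.0, 1.0), (1.0, 1.0))` is approximately the zero vector, while at the origin every agent's utility initially rises in the direction of increased production.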
Our key lemma is: Lemma. At any outcome $x$, exactly one of the following three conditions holds:

There exist directions $v \gneq \mathbf{0}$ and $w \lneq \mathbf{0}$ such that $\nabla_v u(x) > \mathbf{0}$ and $\nabla_w u(x) < \mathbf{0}$,

There exist directions $v \gneq \mathbf{0}$ and $w \lneq \mathbf{0}$ such that $\nabla_v u(x) = \mathbf{0}$ and $\nabla_w u(x) = \mathbf{0}$, or

There exist directions $v \lneq \mathbf{0}$ and $w \gneq \mathbf{0}$ such that $\nabla_v u(x) > \mathbf{0}$ and $\nabla_w u(x) < \mathbf{0}$.
The three categories of Lemma 3.2 carry a useful geometric intuition. Specifically: Lemma. The points in the second category of Lemma 3.2 are precisely the Pareto Frontier. The proofs of these two lemmas are quite technical, and can be found in Appendix A. With these in mind, we define: Definition. We will say that a point in the first category of Lemma 3.2 is below the Pareto Frontier, and a point in the third category of Lemma 3.2 is above the Pareto Frontier (and by Lemma 3.2, the second category of points in Lemma 3.2 are on the Pareto Frontier). The geometric intuition behind this definition is that, starting from a point $x$ in the first category, one can continuously follow the gradient to eventually obtain a Pareto efficient Pareto improvement $x' \geq x$ (we do not prove this fact formally; it is perhaps useful intuition but not essential to our main results). Similarly, starting from a point $x$ in the third category, we can continuously follow the gradient to obtain a Pareto efficient Pareto improvement $x' \leq x$.
We then show:
Proof Sketch of Theorem 3, Forwards Implication (Full proof in Appendix A).
Suppose $x$ is a core outcome, and our goal is to show path connectedness of $D(x)$. First, we note that $\mathbf{0} \in D(x)$, since otherwise a singleton coalition can deviate from $x$ (and $x$ is in the core, so no such deviation is possible). To show path connectedness of $D(x)$, we consider an arbitrary point $y \in D(x)$ and construct a path in $D(x)$ from $y$ to $\mathbf{0}$, thus implying that any two such points have a connecting path in $D(x)$ via $\mathbf{0}$.
We show the existence of the path with a careful repeated application of Lemma 3.2. Informally speaking, we progressively slide a point $p$ (initially $y$) a little bit closer to $\mathbf{0}$ while maintaining the property $p \in D(x)$. If we ever hit $p_i = 0$ for some agent $i$, then we restrict our attention to the projected economy excluding agent $i$ and continue. If we eventually exclude all agents in this manner, then we have $p = \mathbf{0}$ and the process is complete. Otherwise, suppose towards a contradiction that at some $p$, we cannot slide any closer to $\mathbf{0}$ while maintaining $p \in D(x)$. We make two observations here: (1) $p$ must be above the Pareto frontier (else we could slide in the appropriate direction $w \lneq \mathbf{0}$), and so it belongs to the third category of Lemma 3.2; and (2) for all agents $i$ still being considered, we have $u_i(p) = u_i(x)$ (else, by the positive externalities assumption, we can unilaterally decrease the action of agent $i$ without destroying $p \in D(x)$). Hence, by moving slightly in the direction $v \lneq \mathbf{0}$ provided by the third category (which improves the utility of all agents being considered), we have $u_i(p) > u_i(x)$ for all agents $i$ being considered, and so the new $p$ is a deviation from $x$. Since we have assumed that $x$ is a core outcome, this is a contradiction, and so the process of sliding towards $\mathbf{0}$ can never get stuck in this way. ∎
3.3 Connection to Lindahl Equilibria
Before proceeding towards our algorithm, we take a brief detour in this subsection to observe an interesting implication of Theorem 3 that helps illustrate its broader appeal. Elliot and Golub [15] show the following result: Theorem ([15]). The Lindahl equilibria of a public goods economy with a differentiable utility function are precisely the outcomes $x$ for which $\nabla_x u(x) = \mathbf{0}$. They phrase this theorem in different language related to the “benefits network” of the economy, but this formulation will suit our purposes better. We refer the reader to their paper for a more in-depth discussion of the economic role of the Lindahl equilibria.
Combining Theorem 3.3 with our machinery for Theorem 3, we obtain: Theorem. In a public goods economy with a differentiable utility function, the Lindahl equilibria are precisely the core outcomes $x$ whose core membership can be certified by examining only local information at $x$. The proof of this theorem will use Theorem 3.3 as a black box, and so we will not actually need to appeal to the formal definition of the Lindahl equilibria in its proof.
Proof.
If $x$ is a Lindahl equilibrium, then by Theorem 3.3 we have $\nabla_{-x} u(x) = -\nabla_x u(x) = \mathbf{0}$ (the first equality comes from the assumption that $u$ is differentiable). We claim that any $x$ satisfying $\nabla_{-x} u(x) = \mathbf{0}$ is in the core, and thus its core membership can be verified by examining only these local derivatives. First, note that $x$ is Pareto efficient by Lemma 3.2. Therefore, by Lemma 3.1, any possible deviation $y$ from $x$ satisfies $y \leq x$. Now let $P$ be the line segment from $x$ to $\mathbf{0}$. By the assumption of concavity and the fact that $\nabla_{-x} u(x) = \mathbf{0}$, we have $u(p) \leq u(x)$ for all $p \in P$. Thus, by Lemma 3.1, we have $y \leq \mathbf{0}$, and so no deviation $y$ can exist and $x$ is a core outcome.
Now suppose that $x$ is not a Lindahl equilibrium, and so $\nabla_{-x} u(x) \neq \mathbf{0}$. If we have $\nabla_{-x} u(x) \lneq \mathbf{0}$, then we have $\nabla_x u(x) \gneq \mathbf{0}$ (by differentiability), which implies that $x$ is not Pareto efficient, and hence is not a core outcome. On the other hand, suppose we have $\nabla_{-x} u(x)_i > 0$ for some agent $i$. In this case, it is impossible to distinguish $u$ from the utility function $\tilde{u}$ that is affine linear everywhere and agrees with $u$ (and its derivatives) at $x$ using solely local information. Note that $x$ is not in the core of the economy defined by $\tilde{u}$, since we have $\tilde{u}_i(\mathbf{0}) > \tilde{u}_i(x)$ for whichever agent $i$ satisfies $\nabla_{-x} u(x)_i > 0$, and so the singleton coalition $\{i\}$ has a deviation. Thus, if $x$ is in fact in the core of the economy defined by $u$, we will need to inspect nonlocal information about the economy to differentiate $u$ from $\tilde{u}$. ∎
4 Algorithm for Testing Core Membership
Our main algorithmic result is: Theorem. Given an outcome $x$ in an $n$-agent public goods economy, there is an algorithm (in the real-RAM model) that decides whether or not $x$ is in the core by solving $O(n)$ convex optimization problems on the utility function and using a polynomial amount of additional computation time.
The algorithm is fairly straightforward. We maintain an “active coalition” $S$ throughout, as well as a proof that any agent $i \notin S$ must play action $0$ in any deviation from $x$. It is thus safe to assume that any deviating coalition is contained in $S$. Initially $S = [n]$, so this invariant is trivially satisfied. After each round, we either find a deviation for $S$ from $x$, or we remove one new agent from $S$. Thus, if we make it $n$ rounds without finding a deviation, then we have $S = \emptyset$ and so no deviation from $x$ is possible.
4.1 Preprocessing: Confirm Pareto Efficiency of
Before starting the main algorithm, we run the following two programs, with the purpose of testing whether or not $x$ is Pareto efficient.
Program 1.
Choose $y$ to maximize $\sum_{i=1}^{n} u_i(y)$
Subj. to $u(y) \geq u(x)$ and $y \geq x$
Program 2.
Choose $y$ to maximize $\sum_{i=1}^{n} u_i(y)$
Subj. to $u(y) \geq u(x)$ and $\mathbf{0} \leq y \leq x$
Note that the concavity of the optimized function is immediate from the concavity of $u$.
By Lemma 3.2, we may immediately conclude that $x$ is not Pareto efficient (and thus not in the core) iff either of these programs optimizes at a point $y$ satisfying $u(y) \gneq u(x)$.
Otherwise, we proceed with the knowledge that is Pareto efficient.
A key advantage of this is that, by Lemma 3.1, we may now restrict our search for a deviation to the bounded box $\{\, y : \mathbf{0} \leq y \leq x \,\}$.
This opens up the ability to use convex programming algorithms, which typically require bounded domains, in the remainder of the algorithm.
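In the absence of a convex programming oracle, the shape of this preprocessing step can be illustrated with exhaustive grid search over the two boxes (a heuristic stand-in only; the actual algorithm requires exact convex optimization, and the utility function here is our own toy example):

```python
from itertools import product

def u_toy(x):
    # illustrative utility: u_i(x) = sum(x) - x_i^2
    s = sum(x)
    return tuple(s - xi * xi for xi in x)

def gneq(a, b):  # weak everywhere, strict somewhere
    return all(p >= q for p, q in zip(a, b)) and any(p > q for p, q in zip(a, b))

def box_search(u, x, lo, hi, steps=8):
    """Stand-in for Programs 1 and 2: scan the box [lo, hi] for a point y
    with u(y) gneq u(x). A real implementation would solve a convex program."""
    ux = u(x)
    axes = [[l + k * (h - l) / steps for k in range(steps + 1)]
            for l, h in zip(lo, hi)]
    for y in product(*axes):
        if gneq(u(y), ux):
            return y
    return None

def pareto_preprocessing(u, x, span=1.0, steps=8):
    """x survives iff neither the 'increase' box nor the 'decrease' box
    contains a Pareto improvement."""
    zero = tuple(0.0 for _ in x)
    above = box_search(u, x, x, tuple(a + span for a in x), steps)
    below = box_search(u, x, zero, x, steps)
    return above is None and below is None
```

On the toy economy, the outcome $(1, 1)$ survives preprocessing while $(0.5, 0.5)$ is rejected immediately (the witness $(1, 1)$ improves both agents).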
4.2 Main Loop: Shrinking
Each of the (at most $n$) rounds of the algorithm consists of three steps. First, we restrict our attention to the projected economy for the coalition $S$. Second, we run the following program:
Program 3.
Choose $y$ to maximize $\sum_{i \in S} u_i(y)$
Subj. to $u(y) \geq u(x_S)$ and $\mathbf{0} \leq y \leq x_S$
Let $y^*$ be a maximizing point of Program 3.
We have:
Lemma.
Either $u(y^*) > u(x_S)$ or $u(y^*) = u(x_S)$, and $y^*$ is Pareto efficient in the projected economy.
Proof.
First we argue Pareto efficiency. If there is a Pareto improvement on $y^*$, then by Claim 1 there is another point $z$ with $u(z) > u(y^*)$. This would be a superior maximizing point for Program 3, so there can be no Pareto improvement on $y^*$.
Next, let $T = \{i \in S : u_i(y^*) > u_i(x_S)\}$. If $T$ is nonempty and $T \neq S$, then (by the same argument used in Claim 1) we can again obtain a superior maximizing point by slightly increasing the action of some agent in $T$. Thus we have $T = \emptyset$ or $T = S$, and it follows that either $u(y^*) > u(x_S)$ or $u(y^*) = u(x_S)$. ∎
In the former case where $u(y^*) > u(x_S)$, it follows that $y^*$ (padded with zeros on $[n] \setminus S$) is a deviation from $x$ for the coalition $S$, so we may halt the algorithm. Otherwise, we have $u(y^*) = u(x_S)$. We then observe:
Lemma. If $u(y^*) = u(x_S)$, then in the full (non-projected) economy, any deviation $y$ from $x$ for a coalition $T \subseteq S$ satisfies $y_S \leq y^*$.
Proof.
The deviation $y$ satisfies $y \leq x$ and $y_i = 0$ for all $i \notin S$. It follows that $y_S$ is also a deviation for $T$ from $x_S$ in the projected economy for $S$. Since $u(y^*) = u(x_S)$, the line segment from $x_S$ to $y^*$ stays inside $D(x_S)$, and the claim is then immediate from Lemma 3.1. ∎
One step remains. We run:
Program 4.
Choose $y$ to maximize $\sum_{i \in S} (y^*_i - y_i)$
Subj. to $u(y) \geq u(x_S)$ and $\mathbf{0} \leq y \leq y^*$
Let $y^{**}$ be a point that maximizes Program 4. We have: Lemma. $u(y^{**}) = u(x_S)$.
The proof is very similar to the proof of the previous lemma, so we omit it for now. We then finally have: Lemma. Let $i \in S$ be the first index whose coordinate reaches $0$ when the ray from $y^*$ through $y^{**}$ is extended. Then any deviation $y$ from $x$ has $y_i = 0$.
Proof.
By Lemma 4.2, we have $y_S \leq y^*$ for any deviation $y$ from $x$. Let $P$ be the line segment starting at $y^*$, extending in the direction $y^{**} - y^*$ until a point $p$ is reached where $p_j = 0$ for some agent $j$; note that this will specifically be $j = i$. By concavity, every $p \in P$ satisfies $u(p) \leq u(x_S)$. Noting once again that $y_S$ is a deviation for its coalition from $x_S$ in the projected economy for $S$, it follows from Lemma 3.1 that $y_S \leq p$, and so $y_i = 0$. ∎
With this in mind, the final step in the loop is to delete $i$ from $S$ and repeat. After $n$ repetitions, we have $S = \emptyset$, so we may halt the algorithm and report that $x$ is in the core.
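Putting the pieces together, the overall control flow can be caricatured as follows. This sketch replaces every convex program with grid search, picks the agent to remove in a deliberately simplified way (a stand-in for the least-valuable-player rule derived from Programs 3 and 4), and uses our toy utility function, so it illustrates the round structure rather than the exact programs above:

```python
from itertools import product

def u_toy(x):
    # illustrative utility: u_i(x) = sum(x) - x_i^2
    s = sum(x)
    return tuple(s - xi * xi for xi in x)

def gneq(a, b):
    return all(p >= q for p, q in zip(a, b)) and any(p > q for p, q in zip(a, b))

def grid(lo, hi, steps):
    axes = [[l + k * (h - l) / steps for k in range(steps + 1)]
            for l, h in zip(lo, hi)]
    return product(*axes)

def core_membership_sketch(u, x, span=1.0, steps=8):
    n = len(x)
    ux = u(x)
    zero = tuple(0.0 for _ in x)
    # Preprocessing: Pareto efficiency (Programs 1 and 2, via grid stand-in).
    for y in grid(x, tuple(a + span for a in x), steps):
        if gneq(u(y), ux):
            return False
    for y in grid(zero, x, steps):
        if gneq(u(y), ux):
            return False
    # Main loop: shrink the active coalition S one agent per round.
    S = set(range(n))
    while S:
        # Search the box 0 <= y <= x for a deviation by S: agents outside S
        # play 0, and every member of S must strictly improve.
        for y in grid(zero, x, steps):
            if all(y[j] == 0 for j in range(n) if j not in S):
                uy = u(y)
                if all(uy[i] > ux[i] for i in S):
                    return False
        # No deviation for S: discard one agent (simplified removal rule).
        S.remove(min(S))
    return True
```

On the toy economy this reports that $(1, 1)$ is in the core and $(0.5, 0.5)$ is not; the point of the real algorithm is that each round's search can be done with a constant number of convex programs rather than brute force.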
4.3 Algorithm Pseudocode
To recap the algorithm, which has been interspersed with proofs of correctness above, we give full pseudocode here.
Let $y_1 \leftarrow$ the optimum of Program 1. Let $y_2 \leftarrow$ the optimum of Program 2. If $u(y_1) \gneq u(x)$ or $u(y_2) \gneq u(x)$, return “$x$ is not in the core”.
Let $S \leftarrow [n]$. While $S \neq \emptyset$: let $y^* \leftarrow$ the optimum of Program 3 for the projected economy on $S$; if $u(y^*) > u(x_S)$, return “$x$ is not in the core”; let $y^{**} \leftarrow$ the optimum of Program 4; let $i \leftarrow$ the first index whose coordinate reaches $0$ on the ray from $y^*$ through $y^{**}$; let $S \leftarrow S \setminus \{i\}$. Return “$x$ is in the core”.
4.4 Conclusion: Adapting the Algorithm for Approximate Optimization
Our algorithm implies that core membership testing is efficient under any utility function that admits quick solving of convex programs as described above. However, it may still be desirable to test core membership as best as possible when the underlying utility function is either unknown or badly behaved and so exact convex optimization is impossible. Our algorithm can indeed be adapted to this effect, with a few significant points of caution, by substituting in modern approximate optimization algorithms. Due to space constraints, we include a discussion of this adaptation in Appendix B.
5 Acknowledgements
I am very grateful to Ben Golub, who introduced me to this area of game theory and many of the problems addressed in this paper, and who helped advise an early version of this research project. I am also grateful to Ben Hescott for mentorship and writing advice during the early stages of this project. Finally, I thank an anonymous reviewer for useful criticism and corrections on an earlier draft of this paper.
Appendix A Remaining Proof of Theorem 3
a.1 Proof of Lemmas 3.2 and 3.2
We begin with some useful well-known technical claims. Proofs can be found in most textbooks on analysis and/or combinatorics; see e.g. [24, 7].
Claim 2.
For any fixed , the function takes values in for all (i.e. the limit is well-defined and not infinite on all indices) and is concave (which follows from the assumption that is concave).
Claim 3.
Any concave function with domain restricted to a bounded subset of is Lipschitz continuous. That is, for any bounded domain , there is a constant such that for any we have .
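Claim 3 can be sanity-checked numerically in one dimension. The sketch below (an illustration only; the function and domain are our own choices, not from the paper) estimates the Lipschitz constant of the concave function f(x) = -x² on the bounded domain [-1, 1], where the true constant is max|f'| = 2.

```python
def empirical_lipschitz(f, lo, hi, samples=200):
    """Largest slope |f(x) - f(y)| / |x - y| over a grid of sample pairs."""
    xs = [lo + (hi - lo) * i / samples for i in range(samples + 1)]
    best = 0.0
    for i, x in enumerate(xs):
        for y in xs[i + 1:]:
            best = max(best, abs(f(x) - f(y)) / abs(x - y))
    return best

# Concave f(x) = -x**2 on [-1, 1]: Lipschitz with constant 2.
L = empirical_lipschitz(lambda x: -x * x, -1.0, 1.0)
```

The empirical estimate approaches 2 from below as the grid is refined, consistent with the claim.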
[Sperner’s Lemma]
Let $T$ be an $n$-simplex with vertices $v_0, \dots, v_n$, and let the vertices of a simplicial subdivision of $T$ be colored with $n+1$ colors so that each vertex $v_i$ receives color $i$ and no point on the face opposite $v_i$ receives color $i$. Then some simplex of the subdivision is rainbow, i.e. its vertices carry all $n+1$ colors.
[Cantor’s Intersection Theorem] Let $C_1 \supseteq C_2 \supseteq \cdots$ be an infinite sequence of nonempty sets that are closed and bounded. Then $\bigcap_{k=1}^{\infty} C_k \neq \emptyset$.
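A tiny one-dimensional illustration of Cantor's Intersection Theorem (our own example, not from the paper): the nested closed intervals C_k = [0, 1/k] are all nonempty, closed, and bounded, and their intersection is the nonempty set {0}.

```python
from fractions import Fraction  # exact rational endpoints, no float error

def nested_intervals(n):
    """Endpoints of the descending chain C_1 ⊇ C_2 ⊇ ... ⊇ C_n, C_k = [0, 1/k]."""
    return [(Fraction(0), Fraction(1, k)) for k in range(1, n + 1)]

def intersection(intervals):
    """Intersection of closed intervals, as a (lo, hi) pair, or None if empty."""
    lo = max(a for a, _ in intervals)
    hi = min(b for _, b in intervals)
    return (lo, hi) if lo <= hi else None
```

For every finite prefix the intersection is the nonempty interval [0, 1/n], matching the theorem's guarantee that the full intersection contains the point 0.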
We now proceed with some original observations:
Claim 4.
For any outcome , if there exists a direction () for which , then there is no direction () for which .
Proof.
We will prove the claim for ; the case where follows from an identical argument.
Let such that , and let be an arbitrary direction that is not a scalar multiple of . We can then find a vector for which for some scalar , such that for some . We have that (by hypothesis) and (by the positive externalities assumption). Thus, by concavity of , we have , and so . Therefore we cannot have . ∎
Now we are ready to show: {lemma} [Lemma 3.2 in the body of the paper] At any outcome , exactly one of the following three conditions holds:

There exist directions such that and ,

There exist directions such that and , or

There exist directions such that and .
Proof.
First, we observe that Claim 4 implies that no can satisfy two of these conditions simultaneously: if satisfies the first condition then there is no satisfying the latter two conditions; if satisfies the last condition then there is no satisfying either of the former two conditions. Thus, it suffices to show that each point satisfies at least one of these three conditions; it then follows that it satisfies exactly one.
Second, by Claim 1, it suffices to show the existence of satisfying these conditions.
Now we begin the main proof. Let be the simplex in with vertex set equal to the basis vectors (where is the vector with a in the place and a elsewhere). If for any , then we have . By convex preferences we also have that satisfies , and so we may place into the first category of Lemma 3.2.
Otherwise, assume that for all . In this case, our goal is to show that there is with . Assign each agent a color, and color each point by the agent , breaking ties arbitrarily. Let be the face of opposite ; that is, is the set of for which . Let be any point in . Note that, by the assumption of positive externalities, we have ; since we have assumed that , this implies that is not colored by agent .
Let be a simplicial subdivision of such that each simplex fits in a ball of radius , for some that we will pick later. Since the face does not include any points with color , we may apply Sperner’s Lemma (Lemma 3) to conclude that there is a simplex whose vertices are rainbow (we assume without loss of generality that vertex has color ). Let be a point within distance of each vertex . Since , and since is Lipschitz continuous on (from Claims 2 and 3), it follows that there is a constant independent of such that for all . In other words, we have that , and so the set
is nonempty for all , since it contains when we choose .
Since the function is continuous, for any is closed, bounded, and nonempty. Thus, if we express the set
then we see that is the intersection of a descending sequence of sets that are closed, bounded, and nonempty. By Cantor’s Intersection Theorem (Lemma 3), is nonempty. Note that any satisfies , since if then by the positive externalities assumption we have . We may thus take to be any element of , and we have and .
We have now shown that there is a vector that satisfies either or . By an exactly symmetric argument, we have that there is a vector that satisfies either or . Thus, if then we may place into the first category, if then we may place into the last category, and if neither of these are true then we simultaneously have and and so we may place into the second category. ∎
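In one dimension, the Sperner step of the argument above reduces to a simple fact that we can sketch in code (a simplified illustration under assumed names, not the paper's construction): if the endpoints of [0, 1] receive different colors, some cell of any subdivision must be rainbow, i.e. have differently colored endpoints.

```python
def rainbow_cell(color, n):
    """Find a cell [i/n, (i+1)/n] whose two endpoints get different colors.

    `color(x)` maps a vertex of the subdivision to a color in {0, 1}; the
    1-dimensional Sperner boundary condition is color(0) = 0, color(1) = 1.
    """
    assert color(0.0) == 0 and color(1.0) == 1
    for i in range(n):
        lo, hi = i / n, (i + 1) / n
        if color(lo) != color(hi):
            return lo, hi          # a rainbow cell: both colors appear
    raise AssertionError("impossible under the boundary condition")
```

As in the proof, shrinking the cell width and passing to a limit point of the rainbow cells yields a point where the two colorings (here, the two agents' values) coincide.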
Claim 5.
If there is a Pareto improvement on an outcome satisfying , then there is a Pareto improvement on satisfying .
Proof.
Let be the largest set of agents for which . If is empty then we have and we are done. Otherwise, suppose is nonempty. Define the point such that for all . By positive externalities, we have (where the latter inequality follows from the fact that is a Pareto improvement on ). Also by positive externalities, we have . Thus, is a Pareto improvement on which satisfies . By Claim 1, we may further assume that . ∎
[Lemma 3.2 in the body of the paper] The points in the second category of Lemma 3.2 are precisely the Pareto Frontier.
Proof.
It is clear that no outcome in the first or third category of Lemma 3.2 is Pareto efficient, since these points have a direction for which . Therefore the second category in Lemma 3.2 contains the Pareto frontier. Let be an outcome in the second category, and suppose towards a contradiction that is a Pareto improvement on .
a.2 Proof of Theorem 3
We assume that is a core outcome. By definition this implies that the grand coalition and singleton coalitions have no deviation from . Thus, it only remains to show that is path connected.
Claim 6.
Let be a point on the lower envelope of a path connected component of (i.e. there is no point in this component).
Proof.
Define to be the largest set of agents for which . By definition we have , so we only need to prove that .
Suppose towards a contradiction that there is an agent for whom and . We can then modify to a new point by slightly decreasing the action of agent . Since is continuous, if we make this decrease small enough then we still have . By positive externalities, we then have . Since , this contradicts the fact that is on the lower envelope of its connected component. ∎
If is a core outcome, then is path connected.
Proof.
We prove the contrapositive. Suppose is not path connected, and let be a path connected component of that does not include , and let be a point on the lower envelope of . Let be the smallest coalition such that and (the existence of such a coalition is given by Claim 6); note that and so is nonempty. Now consider the projected economy for . We have , so Lemma 3.2 applies to the outcome in this economy.
Our next step is to figure out which of the three categories in Lemma 3.2 contains in the projected economy. We observe that cannot fall into the first or second category of Lemma 3.2. This holds because we would have with , which means that we can slightly decrease the actions of the coalition in the original economy along the direction , giving a new point . By concavity of , we still have and so belongs to the same connected component of ; this contradicts the assumption that is on the lower envelope of . Thus is in the third category of Lemma 3.2 in the projected economy, so there is a direction with . If we move slightly in this direction , we obtain an outcome with . By setting , we then concatenate and to obtain a point , which is a deviation in the original economy for the coalition . ∎
Appendix B Adapting the Algorithm for Approximate Optimization
We conclude by discussing the feasibility of the algorithm on utility functions that are either unknown or badly behaved, where exact convex optimization might not be easily solvable. Here, one can use modern approximate convex optimization algorithms in place of the exact solutions assumed in the above algorithm. These algorithms query the utility function and its derivatives at various points in the domain, ultimately finding a point that comes within an additive of the optimal value. Their theoretical runtime is typically a large polynomial in the dimensionality of the search space () but quite good in the accuracy parameter (); for example, randomized center-of-gravity methods use time [6], leading to a runtime of for our algorithm. Further improvements are possible via more specialized optimization algorithms if one wishes to assume nice properties for the utility function. If one does not wish to assume that the utility function derivatives can be easily measured, these algorithms can be adapted to avoid this necessity, although at the cost of an additional factor of [27]. For a general reference on convex optimization, see [4].
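As a concrete stand-in for such a black-box approximate optimizer, the one-dimensional sketch below (our own illustration; the paper's methods are higher-dimensional) maximizes a concave function on a bounded interval by ternary search, using only function-value queries and no derivatives.

```python
def approx_maximize(f, lo, hi, eps):
    """Return a point within eps of a maximizer of concave f on [lo, hi].

    Each iteration shrinks the bracket by a factor of 2/3, so the number of
    queries is O(log((hi - lo) / eps)): polylogarithmic in 1/eps, mirroring
    the good eps-dependence of the methods discussed above.
    """
    while hi - lo > eps:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            lo = m1                # maximizer lies in [m1, hi] by concavity
        else:
            hi = m2                # maximizer lies in [lo, m2] by concavity
    return (lo + hi) / 2
```

For a Lipschitz utility, closeness in the argument translates into closeness in value, which is the additive-accuracy guarantee used in the discussion above.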
Let us consider a natural relaxation in which the goal is to distinguish between core outcomes and outcomes that are far from being in the core (i.e. there is an “deviation” from satisfying for the deviating coalition ). Two tweaks to the algorithm/model will be needed to solve this relaxed problem with an approximate convex optimization algorithm.
b.1 First Change: A Bounded Action Space
First, caution is needed in the preprocessing stage where the Pareto efficiency of is tested. It is no longer feasible to use the derivative-testing technique employed in Programs 1 and 2: the unbounded domain means that even a slightly positive entry in the directional derivatives can translate to an indefinitely large Pareto improvement by following this gradient out far enough. In other words, approximately negative directional derivatives do not translate to approximately Pareto efficient points. Perhaps the most natural workaround here is to simply enforce a bounded domain, i.e. assume that actions lie in the interval rather than . It seems fairly economically natural to assume a maximum amount of possible work for each agent.
With this assumption in hand, we can actually skip the entire preprocessing phase and move straight to the main loop where we iteratively shrink . The only purpose of the preprocessing phase in the original algorithm is to bound the deviation search space via Lemma 3.1; if we assume a bounded action space in the first place, it is no longer needed.
b.2 Second Change: Dependence on the Condition Number
The second change needed in the algorithm is that we must incur an additional dependence in runtime on the condition number of the economy
This dependence comes from the interplay between Programs 3 and 4. To demonstrate the issue, suppose we run Program 3 up to accuracy , returning a point . We can easily examine to distinguish the cases in which the current active coalition has an deviation versus no deviation at all. It is also quite straightforward to extend Lemmas 3.1 and 3.1. The trouble is that Lemma 4 breaks horribly; in particular, the vector returned by Program 4 might not even approximately satisfy .
We propose the following solution. First, we improve the accuracy of Program 3 from to , and again let be its output. Thus if is not a deviation, then there is no deviation from for . Second, we begin an iterative process where we repeatedly select an agent where , and we modify by decreasing the action of agent by . Note that we now (quite generously) have , and we may repeat the process on a new agent. In the worst case, we may need to repeat this process times before an agent’s action reaches . However, we can halt the process as soon as we slide to a point below the Pareto frontier, at which point we may simply compute the appropriate downwards derivative by Program 4. We were unable to determine whether there exist utility functions for which this additional dependence is actually realized in most iterations of the algorithm.
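The sliding process just described can be sketched schematically as follows. All names here are hypothetical: `below_frontier` stands in for a strict Pareto-improvement test (e.g. via Programs 1 and 2), and the step size and tie-breaking rule are our own simplifications.

```python
def slide_below_frontier(x, delta, below_frontier, max_steps):
    """Decrease coordinates of x by delta until below_frontier(x) holds.

    Repeatedly picks the agent with the largest remaining action and lowers
    it by delta, halting as soon as the point drops strictly below the
    Pareto frontier; returns the slid point, or None if max_steps runs out.
    """
    x = list(x)
    for _ in range(max_steps):
        if below_frontier(x):
            return x
        i = max(range(len(x)), key=lambda j: x[j])  # agent with largest action
        x[i] = max(0.0, x[i] - delta)               # decrease that agent's action
    return None
```

In the worst case the loop runs for many small steps, which is exactly where the condition-number dependence discussed above enters the runtime.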
Footnotes
 Some of the early work on cooperative game theory used the name contract curve instead of core. The term core was coined in [18].
 The Lindahl equilibria of an economy are the competitive equilibria that would be reached if market externalities were truthfully reported and then bought and sold on an open market. A formal definition is not necessary to read this paper, but can be found in e.g. [26, 11].
 Confusingly, when is mathematically concave, one says that preferences are “economically convex” – hence, convex preferences.
 This detail is precisely why we use Programs 1 and 2 to check the Pareto efficiency of , rather than the ostensibly simpler method of searching for that maximizes : the latter method requires a search for over an unbounded search space, which rules out many popular methods of convex optimization that we wish to keep available.
 Note that these statements hold specifically in the projected economy for .
An $n$-simplex is an $n$-dimensional analogue of a triangle defined by $n+1$ vertices; e.g. a $2$-simplex is a triangle and a $3$-simplex is a tetrahedron.