Egalitarianism in the rank aggregation problem:
a new dimension for democracy
Winner selection by majority, in an election between two candidates, is the only rule compatible with democratic principles. When the candidates are three or more, however, and the voters rank candidates in order of preference, there is no unique criterion for selecting the winning (consensus) ranking, and the outcome is known to depend strongly on the adopted rule. Building upon eighteenth-century Condorcet theory, whose idea was to maximize total voter satisfaction, we propose the addition of a new basic principle (dimension) to guide the selection: satisfaction should be distributed among voters as equally as possible. With this new criterion we identify an optimal set of rankings, ranging from the Condorcet solution to the one which is most egalitarian with respect to the voters. We show that highly egalitarian rankings have the important property of being more stable with respect to fluctuations, and that classical consensus rankings (Copeland, Tideman, Schulze) often turn out to be non-optimal. Used together with the Condorcet criterion, the new dimension provides a clear classification of all possible rankings. By increasing awareness in the selection of a consensus ranking, our method may lead to social choices that are more egalitarian than those achieved by presently available voting systems.
A voting process starts with individuals giving a formal indication of a choice (ballot) or, more generally, a set of preferences between two or more candidates (or alternatives). (For the sake of simplicity, we prefer not to discuss partial rankings, because the meaning of not ranking a candidate may change considerably from application to application.) The process ends with an aggregation procedure (winner selection method) applied to these indications, in order to produce the consensus ranking, that is, the ranking upon which voters agree the most and which should be the output of the election. The complexity of the selection process comes, in general, from the presence of competing interests and conflicting opinions, which make it impossible to satisfy all the preferences expressed by the voters. With his seminal work on voting theory, Condorcet discovered that the majority rule, applied to pairwise preferences, may lead to invalid solutions: for instance, in an election among three candidates the preferences may sum up to prefer the first candidate to the second, the second to the third, and the third to the first. Similarly, from the formal logic perspective, Arrow's theorem states that a perfectly fair voting system cannot exist. The lack of an ideal voting system in the case of more than two candidates implies that any winner selection procedure contains some kind of arbitrariness, and makes the study of voting methods an interesting research problem.
Typical examples of voting processes are political elections. In that case the need for a single winner, or a single winning ranking, has encouraged the use of very elementary selection rules, easy to compute and to understand by voters and competitors, at the expense of making sub-optimal choices. Voting theory also covers cases beyond political matters. Survey rankings, for instance, typically made for commercial purposes (hotel listings, movie rankings, best products on the market, and so on), are selected with totally different criteria. The choice of the ten best smartphones, say, is not made by maximizing the voters' total satisfaction, but rather by ensuring that each customer finds, among those ten, a satisfactory model.
A similar problem is much studied in computer science under the name of rank aggregation: a typical example is the merging of webpage rankings produced by different search engines or obtained according to different criteria. The main difference from the examples above is that here the number of voters (engines/criteria) is small, while the number of candidates (webpages) is large. This is why in that field of research the focus is more on the algorithmic challenge of computing the consensus ranking efficiently. Here we are more interested in presenting a new criterion for better selecting the consensus ranking; thus we concentrate on a small number of candidates, so that all possible rankings (with ties) can be easily enumerated.
It is therefore clear that the problem of finding a good consensus ranking is an interdisciplinary research topic: it is inspired and guided by studies in sociology, marketing, economics and political science, while the disciplines technically involved in its solution are statistics, mathematics and computer science.
Definition of the problem
Each of the $V$ voters expresses a preference about the $C$ candidates by sorting them in a ranked list, possibly with ties, resulting in $V$ ballots. Valid ranked lists for $C=3$ candidates are, for instance, $(A \succ B \succ C)$ and $(B \succ A = C)$. We call $b_i$ the ballot of voter $i$. Each voter wishes the consensus ranking to be as close as possible to his ballot and, following Condorcet, a good winner selection method should work by maximizing the total sum of those wishes, i.e. minimizing the sum of the distances between the consensus ranking and the ballots. Therefore the search for a consensus ranking needs to be based on a notion of distance between rankings. There are several definitions of distance between rankings and many studies on the relations among them. Among these, the Kemeny distance $d(\pi_1,\pi_2)$ is widely used due to its robust properties. Intuitively, when restricted to rankings without ties, $d(\pi_1,\pi_2)$ is twice the minimum number of swaps of nearby candidates required to transform one ranking into the other. Alternatively, it counts the number of pairwise preferences that do not match in the two rankings. When ties appear, a pair tied in one ranking but not in the other contributes half as much to the distance as a fully discordant pair. Since our theory is rather insensitive to the type of distance used, we will conventionally use the Kemeny distance to develop the discussion in the next sections (see the Supplementary Material for a discussion of other distances and references therein).
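As an illustration, the Kemeny distance (including the tie convention above) can be computed directly from pairwise preference scores. This is a minimal Python sketch; the encoding of a ranking as a dictionary mapping each candidate to its position (lower is better) is our own convention, not the paper's:

```python
def pref(ranking, a, b):
    """Pairwise score of a over b: 1 if a is ranked above b,
    0 if below, 1/2 if they are tied."""
    if ranking[a] < ranking[b]:
        return 1.0
    if ranking[a] > ranking[b]:
        return 0.0
    return 0.5

def kemeny_distance(r1, r2):
    """Sum of |score mismatches| over ordered pairs; for tie-free
    rankings this is twice the number of discordant pairs (swaps)."""
    cands = list(r1)
    return sum(abs(pref(r1, a, b) - pref(r2, a, b))
               for a in cands for b in cands if a != b)

abc = {"A": 0, "B": 1, "C": 2}   # (A > B > C)
bac = {"B": 0, "A": 1, "C": 2}   # (B > A > C): one swap away
tie = {"A": 0, "B": 0, "C": 1}   # (A = B > C): A and B tied
```

With these definitions, `kemeny_distance(abc, bac)` is 2 (one swap, counted over both ordered pairs) and `kemeny_distance(abc, tie)` is 1 (the tied pair contributes 1/2 per ordered pair).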
The Condorcet consensus ranking has been defined as the ranking (or, more properly, the rankings) minimizing the function
$$F(\pi) = \sum_{i=1}^{V} d(\pi, b_i);$$
in formulae, $\pi^* = \arg\min_\pi F(\pi)$ (see the literature for reviews of mathematical methods in social choice theory). The computation of $\pi^*$ is in general an NP-hard problem, and the space of all possible rankings with ties grows even faster than $C!$. In practice, several polynomial-time algorithms have been developed that return an approximate answer to the problem of selecting a consensus ranking. Most of these are the voting rules used in everyday applications. Among them it is worth recalling the pairwise comparison (or Copeland), Schulze and Tideman methods, which are perhaps the most used single-round ranked-ballot winner selection methods (they are all described in the Supplementary Material).
None of the above voting methods is perfectly fair (in the sense of Arrow's theorem); however, they all return a "reasonable" consensus ranking, and this is why they are used in practical applications. Nonetheless some problems and inconsistencies remain unsolved: (i) different voting methods return different consensus rankings (this is the well known problem that the outcome of an election may very well depend on the electoral system); (ii) by returning a unique consensus ranking, a lot of information about voter preferences is lost; (iii) often there are consensus rankings with a value of $F$ very close to the optimal $F(\pi^*)$, and it is unclear why they should be discarded. It is worth noting that, in an election/survey with $V$ voters, fluctuations of order $\sqrt{V}$ in $F$ are somehow unavoidable: if $F(\pi_1) < F(\pi_2)$ but $F(\pi_2) - F(\pi_1) = O(\sqrt{V})$, then choosing $\pi_1$ as the consensus ranking instead of $\pi_2$ is equivalent to taking a decision based on the toss of a coin.
A new dimension for choosing the consensus ranking
In order to solve the above problems we suggest considering as valid consensus rankings all the rankings close enough to the optimal one (i.e., those for which $F(\pi) - F(\pi^*)$ is comparable with the unavoidable fluctuations), and we introduce a new dimension to select the best among these valid consensus rankings. Our idea is that not only should the global (i.e., societal) number of satisfied preferences be maximized, but also each individual voter should have more or less the same number of satisfied preferences. With this aim we propose to consider also the voter-to-voter satisfaction variability (standard deviation)
$$\sigma(\pi) = \sqrt{\frac{1}{V} \sum_{i=1}^{V} \left[ d(\pi, b_i) - \frac{F(\pi)}{V} \right]^2 }.$$
If $\sigma(\pi) = 0$, the consensus ranking $\pi$ satisfies each voter equally; if instead $\sigma(\pi)$ is large, then there are voters more satisfied and others less satisfied than the average. Clearly, the smaller $\sigma(\pi)$ is, the more egalitarian $\pi$ is.
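To make the two criteria concrete, the sketch below enumerates every ranking with ties for a tiny candidate set and computes the total distance $F$ and the spread $\sigma$ for each. All function names and the group-list encoding of rankings are our own illustrative choices:

```python
from itertools import combinations
from statistics import pstdev

def weak_orders(items):
    """All rankings with ties of `items`, as ordered lists of tied groups."""
    items = list(items)
    if not items:
        yield []
        return
    for k in range(1, len(items) + 1):
        for top in combinations(items, k):       # candidates sharing 1st place
            rest = [x for x in items if x not in top]
            for tail in weak_orders(rest):
                yield [set(top)] + tail

def positions(order):
    """Map each candidate to the index of its tied group."""
    return {c: i for i, grp in enumerate(order) for c in grp}

def pref(r, a, b):
    return 1.0 if r[a] < r[b] else 0.0 if r[a] > r[b] else 0.5

def kemeny(r1, r2):
    cs = list(r1)
    return sum(abs(pref(r1, a, b) - pref(r2, a, b))
               for a in cs for b in cs if a != b)

def F_sigma(order, ballots):
    """Total voter distance F and voter-to-voter spread sigma."""
    d = [kemeny(positions(order), b) for b in ballots]
    return sum(d), pstdev(d)

# three voters: two say A > B > C, one says C > B > A
ballots = [{"A": 0, "B": 1, "C": 2},
           {"A": 0, "B": 1, "C": 2},
           {"C": 0, "B": 1, "A": 2}]
scored = [(F_sigma(o, ballots), o) for o in weak_orders("ABC")]
best = min(scored, key=lambda t: t[0][0])        # Condorcet-style optimum
```

For 3 candidates there are 13 rankings with ties; here the minimal-$F$ ranking is the strict order $(A \succ B \succ C)$, with $F = 6$.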
To illustrate the new criterion, we start with a very simple (and almost paradoxical) example. We consider an election with $C=4$ candidates and we do not allow for ties; the number of possible rankings is $4! = 24$, as shown in the table included in Figure 1. The distances between these 24 rankings can be easily visualized in the same figure, top left panel, which includes a graph where each vertex corresponds to a ranking, with an edge connecting rankings at distance 2 (differing only by a swap of two neighbouring candidates). For rankings at distance larger than 2, it is enough to count the edges along the shortest path between the rankings in this graph.
Suppose the electorate is equally polarized on two opposite rankings: half of the voters rank candidates as $(A \succ B \succ C \succ D)$ and the other half as the reverse, $(D \succ C \succ B \succ A)$. A simple calculation shows that every possible ranking has the same value of $F$, therefore there is no way to choose one of them according to the Condorcet criterion alone. However, the 24 possible consensus rankings have very different $\sigma$, as can be seen in the middle panel of Figure 1: the points with largest $\sigma$ correspond to the two rankings $(A \succ B \succ C \succ D)$ and $(D \succ C \succ B \succ A)$, which fully satisfy half of the voters and fully deceive the other half, while the point with $\sigma = 0$ corresponds to the six rankings that are at the same distance from the two ballots, and thus satisfy them equally well. It is clear that the latter are the more egalitarian consensus rankings. In other words, spreading satisfaction as equally as possible among voters, i.e. minimizing $\sigma$, is a new criterion for selecting the consensus ranking that deserves, at least, the same consideration as the Condorcet criterion of minimizing $F$.
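The invariance of $F$ in this polarized example is easy to verify by brute force: for every strict ranking, the distances to a ballot and to its reverse add up to the same constant, because each of the six candidate pairs is discordant with exactly one of the two opposite ballots. A quick check, with our own tuple encoding of strict rankings:

```python
from itertools import permutations

def kemeny_strict(p, q):
    """Kemeny distance between strict rankings (tuples in preference
    order): 2 for each pair of candidates ordered oppositely."""
    pos_p = {c: i for i, c in enumerate(p)}
    pos_q = {c: i for i, c in enumerate(q)}
    cs = list(p)
    return sum(2 for i in range(len(cs)) for j in range(i + 1, len(cs))
               if (pos_p[cs[i]] - pos_p[cs[j]])
                * (pos_q[cs[i]] - pos_q[cs[j]]) < 0)

b1 = ("A", "B", "C", "D")                 # half of the electorate
b2 = ("D", "C", "B", "A")                 # the other half (reversed)
totals = {kemeny_strict(p, b1) + kemeny_strict(p, b2)
          for p in permutations("ABCD")}  # one value per candidate ranking
```

`totals` collapses to the single value $12 = 2 \times 6$ pairs: all 24 rankings share the same $F$, so only $\sigma$ discriminates among them.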
Even more interesting is the case where some noise is added to the example above. For instance, we can consider small fluctuations in the number of electors participating in the poll, resulting in a fraction $(1+\epsilon)/2$ of voters ranking the candidates as $(A \succ B \succ C \succ D)$ and the complementary fraction $(1-\epsilon)/2$ ranking them as $(D \succ C \succ B \succ A)$. For an election with $V$ voters a noise of order $|\epsilon| \sim 1/\sqrt{V}$ is somehow unavoidable. In the lower panels of Figure 1 we report the $F$ and $\sigma$ values for the 24 possible consensus rankings. For $\epsilon > 0$, the small unbalance decreases $F$ for the ranking $(A \succ B \succ C \succ D)$, making it the consensus ranking under the Condorcet criterion; for $\epsilon < 0$, the opposite ranking would win. The difference between the two cases is, however, only due to noise, so selecting a consensus ranking by strictly minimizing $F$ would be equivalent to selecting the winner by a coin toss. Rankings with lower $\sigma$, as the lower panels of Figure 1 show, are much less sensitive to noise: by minimizing $\sigma$ one always gets the same consensus rankings, independently of the noise. This is a very important observation in favour of the new criterion, given that a fair voting system should be robust to noise-induced fluctuations.
Although very simplified, the example above contains, in a stylized form, the relevant features we have observed in real data, discussed below.
Analysis of data from real polls
We now have two criteria for the identification of the best consensus ranking: minimizing $F$ and minimizing $\sigma$ (among rankings of small $F$). In general it is not possible to find a consensus ranking satisfying both criteria, and some compromise must be adopted, as we will exemplify with data from real polls.
The first dataset consists of ratings for jokes from the Jester database. The full dataset is made up of 100 jokes rated by 24938 users; ratings are continuous values between -10 and 10. We have selected the five jokes rated by the most users, and considered only those users who rated all five jokes, resulting in 24921 voters. For each voter, the ballot is obtained by ranking the 5 jokes according to the continuous-valued rating.
In the upper panel of Figure 2 we show the $(F,\sigma)$ values for all possible consensus rankings of the jokes: the 120 black circles correspond to rankings without ties, while the gray diamonds are the 421 rankings with ties. One ranking, at position (9.81, 0.63), was excluded from the plot for better visualization. The Condorcet consensus ranking $\pi^*$ minimizes $F$. However, close to $\pi^*$ we see a cloud of points with values of $F$ only slightly larger. The lower panel of Figure 2 zooms in on this set of rankings, all having a distance from the Condorcet optimum $F(\pi^*)$ comparable with the fluctuations. So, from the point of view of the Condorcet criterion, all these rankings are equally good within the noise. On the contrary, they show a much larger variation in $\sigma$, which ranges between 3.36 and 4.29, allowing for a better consensus ranking selection by minimizing $\sigma$. The consensus ranking minimizing $\sigma$ in this region seems to convey all the relevant information contained in this set of low-$F$ rankings: indeed, the only information shared by all the rankings in the lower panel of Figure 2 is that jokes D and E are better than jokes A, B and C. Any consensus ranking more refined than that would just amplify the noise, rather than providing further useful information.
Three commonly used winner selection methods were also applied to the data (Copeland, Schulze and Tideman; these three algorithms have been chosen also because they never rank a Condorcet loser first), and the corresponding consensus rankings are marked in Figure 2. All three methods return the same consensus ranking, which differs from $\pi^*$, the Condorcet consensus ranking, and has a quite large value of $\sigma$, hence being among the less egalitarian rankings.
In applying the criterion of minimizing $\sigma$ one has to be careful, because this criterion tends to select consensus rankings with ties (gray diamonds are on average below black circles in Figure 2). If ties are not allowed in the consensus ranking, one should focus only on the black points in Figure 2: even in this case, the tie-free consensus ranking with minimal $\sigma$ looks much more egalitarian than the consensus ranking found by the common voting methods: it gains more than 10% in $\sigma$, while losing just 1% in $F$. The final decision on which rule is to be used to select the consensus ranking is left to the organizers of the poll/survey, but clearly a plot in the $(F,\sigma)$ plane is much more informative than any previously available method.
As in the simple example discussed earlier, the data from real polls also show that consensus rankings of smaller $\sigma$ are less sensitive to noise. In this case we investigate the effect of small fluctuations in participation by using a subsampling procedure: from the joke ratings provided by the 24921 users, a fraction $f$ of randomly chosen votes has been removed, and $F$ and $\sigma$ recomputed. Resampling was repeated 100 times for two different values of $f$. From the variations of $F$ and $\sigma$ between different subsamplings we may compute the noise fluctuations on $F$ and $\sigma$ (see the Supplementary Material for a detailed definition of the fluctuation scaling). In Figure 3 these fluctuations are reported, showing a very clear and strong correlation with the value of $\sigma$. A good consensus ranking should be as robust as possible to noise produced by fluctuations in, e.g., the number of voters. For example, suppose a poll/survey is run for 10 days; then the outcome of the survey is reliable only if it does not change appreciably in case the data were collected for one or two days less. What we observe in Figure 3 is that noise sensitivity is larger for points of large $\sigma$, while no relation can be observed between noise sensitivity and $F$. So, choosing a consensus ranking according to the new criterion of minimizing $\sigma$ provides in general a result much more robust to noise (e.g. unavoidable fluctuations in the number of participants in the poll/survey/election). We also analysed noise in opinions for this dataset, similar to the analytical example of Figure 1, and the results show the same robustness for rankings with lower $\sigma$ (see Supplementary Material for details).
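The subsampling procedure can be sketched as follows. This is a simplified illustration, not the authors' analysis code: the function names, the generic `dist` argument and the fixed seed are our own assumptions:

```python
import random

def subsample(ballots, fraction, rng):
    """Keep a random subset of ballots, removing roughly `fraction` of them."""
    keep = max(1, round(len(ballots) * (1 - fraction)))
    return rng.sample(ballots, keep)

def mean_distance(dist, ranking, ballots):
    """Per-voter mean distance of `ranking` from the ballots."""
    return sum(dist(ranking, b) for b in ballots) / len(ballots)

def fluctuation(dist, ranking, ballots, fraction, trials=100, seed=0):
    """Std of the per-voter mean distance across repeated subsamplings:
    a proxy for how noise-sensitive a candidate consensus ranking is."""
    rng = random.Random(seed)
    vals = [mean_distance(dist, ranking, subsample(ballots, fraction, rng))
            for _ in range(trials)]
    m = sum(vals) / len(vals)
    return (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5
```

A ranking whose `fluctuation` is small keeps roughly the same score no matter which voters happen to drop out; the observation in Figure 3 is that this stability correlates with small $\sigma$.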
The second example from real polls considers the rankings of 5 movies provided by 930 users. These are a subset of a larger database consisting of 1,000,209 ratings from 6040 users for 3952 movies. Here, users rated the movies on a discrete scale from 1 to 5. As before, we sorted the movies for each user to obtain the ballots. Since equal ratings are very probable here, given that only 5 possible rating values exist, many ballots have ties.
Once again the plot in the $(F,\sigma)$ plane, shown in Figure 4, is very informative. First of all we notice that consensus rankings with ties, although having much smaller values of $\sigma$, are not chosen by any commonly used voting method. Moreover, the optimal consensus ranking according to the Condorcet criterion has a very large value of $\sigma$. There are a few other rankings worth considering, which have slightly higher $F$ but much lower $\sigma$. Indeed, we identify a set of optimal rankings (red diamonds in Figure 4) combining the two criteria. These optimal rankings start from the Condorcet ranking and include other rankings in the bottom left part of the plot that cannot be improved in terms of both $F$ and $\sigma$ (the Supplementary Material includes a more formal definition of this set). In the example from the movie data, two additional rankings should be considered, along with the Condorcet ranking, as part of the optimal set; both are red-marked in the bottom left corner of the plot. We suggest that the consensus ranking should be selected from the optimal set, and that the choice should be made after a careful analysis of the plot. The set of optimal rankings somewhat resembles the Pareto efficient frontier used in economic theory.
Both examples above have a large number of voters, and one may think that the complex behaviour we have illustrated is due to this. This is not actually the case, as we now show with an example from a poll with a small number of voters who ranked several alternatives. This is a poll organized on the Airesis platform, a web platform freely available to organizations for managing internal decision making. The data shown in Figure 5 represent a real election where the consensus ranking was selected according to the Schulze method. The first evidence is that the consensus ranking of that election (Schulze) is far from the optimal one: the Condorcet optimal consensus ranking is better (i.e. lower) in both $F$ and $\sigma$. Additionally, a large number of rankings are part of the optimal set defined previously (marked with red diamonds in the plot), and these should be taken into consideration. Even if we restrict to consensus rankings without ties (this is an election, and ties may be problematic for the decision process), it is clear that the consensus ranking selected by Schulze has a quite large $\sigma$ with respect to other tie-free consensus rankings of comparable $F$; the latter correspond to the two leftmost purple circles in Figure 5.
We have proposed to analyze voting results by plotting potentially winning rankings in the $(F,\sigma)$ plane, in such a way that both the Condorcet criterion and the new criterion we have introduced can be considered at the same time in order to identify the optimal consensus ranking. To help with this new analysis we have set up a webpage with an interactive tool that produces the graph in the $(F,\sigma)$ plane once the list of ranked ballots is given as input (all plots in this manuscript, using the standard Kemeny distance, are based on those produced by the web tool). A publicly available Android mobile application has also been developed, to facilitate the organisation of large-scale ranked-ballot polls and the collection of new data for future studies. We have analyzed many different datasets coming from real polls, and in general the plots in the $(F,\sigma)$ plane are similar to those shown above. Moreover, we expect that polynomial-time algorithms can be developed that minimize (approximately) both $F$ and $\sigma$, in analogy with presently used voting rules that tend to minimize only $F$.
Once the graph in the $(F,\sigma)$ plane is available, we believe any good consensus ranking should be chosen from the optimal set. A point belongs to the optimal set if no other point exists improving both $F$ and $\sigma$, or improving one of them while keeping the other constant. This set has been red-marked in the examples above, and it extends from the Condorcet optimal ranking $\pi^*$, which minimizes $F$, to the ranking $\pi_\sigma$ which minimizes $\sigma$. The meaning of moving along this set should be clear: while $\pi^*$ maximizes total societal satisfaction ignoring individual satisfaction, $\pi_\sigma$ is the most egalitarian in terms of individual satisfaction, regardless of the total satisfaction. There are polls, like political elections, where the consensus ranking must produce a unique winner among the candidates. In this case one can restrict the analysis to rankings having no tie at the first position, and a line of optimal rankings can be defined in this subset of rankings as well.
It is important to stress that we are not claiming that $\pi_\sigma$ should be the consensus ranking: often $F(\pi_\sigma)$ is much larger than $F(\pi^*)$, and the optimal consensus ranking actually lies in the middle of the optimal set. Rather, we are proposing a new tool that gives a quantitative meaning to each possible choice. Which consensus ranking should be chosen from the optimal set is no longer a technical matter; it is rather a decision to be taken by the people in charge, and the criteria may change according to the domain: political elections, marketing, web page ranking, etc. In some cases, as for instance in political elections, the decision on which point to select along this line must be taken before the poll is run.
The cases where the plot in the $(F,\sigma)$ plane is even more useful are those where the final decision can be taken after the poll/survey is run. In this case, a data aggregation like the one we are presenting in terms of $F$ and $\sigma$ provides a lot of information and allows for a much better choice. A typical example is when politicians want to decide on a list of priorities based on suggestions coming from the electorate: the politicians can run a poll/survey among the electorate, which determines the optimal rankings, leaving to the politicians the final choice of the consensus ranking among those. We believe this is an ideal compromise between taking into serious consideration the desiderata of the electorate (the line of optimal consensus rankings is fully determined by the votes) and leaving the political decision to those in charge.
It is worth mentioning that applications where technical tools provide a set of optimal preferences, among which the final choice is left to the user, are not new in other fields. For example, in quantitative financial risk management the mathematical analysis produces a risk-return curve (called the efficient frontier), and the choice of a point along that curve is left to the investor. From a different perspective, a voting theory purely based on the maximization of voter satisfaction would be equivalent, in political economy, to the maximization of total wealth in a country regardless of its distribution and of welfare criteria.
The voting method we have presented here provides an efficient technical tool for determining the line of optimal rankings, among which a political decision has to be taken. While it is generally understood and acknowledged that democratic organizations should not only maximize their goods but also distribute them as equally as possible, such awareness has not so far led to a proper solution in social choice theory. We believe, therefore, that the quantitative method we have introduced is a fundamental tool for applying democratic principles, especially in voting processes.
We thank Flavio Chierichetti for drawing our attention to the rank aggregation problem, and the Airesis platform for providing access to their data. This work has received financial support from the Italian Research Ministry through the FIRB projects No. RBFR086NN1 and RBFR10N90W and PRIN project No. 2010HXAW77. Mobile application development was performed by Federico Ponzi, with partial financial support from New York University Shanghai.
References and Notes
-  Marie Jean Antoine Nicolas de Caritat, Marquis de Condorcet, Essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix (1785).
-  K. J. Arrow, A Difficulty in the Concept of Social Welfare, Journal of Political Economy 58, 328 (1950).
-  C. Borgers, Mathematics of Social Choice: Voting, Compensation, and Division (SIAM, 2010). http://dx.doi.org/10.1137/1.9780898717624
-  C. Dwork, R. Kumar, M. Naor and D. Sivakumar, Rank Aggregation Methods for the Web, in Proc. 10th Intl. Conf. on World Wide Web (WWW ’01), pp 613–622 (2001). http://doi.acm.org/10.1145/371920.372165
-  R. Fagin, R. Kumar, M. Mahdian, D. Sivakumar, and E. Vee, Comparing and aggregating rankings with ties, in Proc. twenty-third ACM SIGMOD-SIGACT-SIGART symposium on Principles of Database Systems (PODS ’04), pp 47–58 (2004). http://doi.acm.org/10.1145/1055558.1055568
-  J. G. Kemeny, Mathematics without numbers, Daedalus 88, 571 (1959). J. G. Kemeny and J. L. Snell, Mathematical Models in the Social Sciences (Blaisdell, New York, 1962).
-  W. J. Heiser and A. D'Ambrosio, Clustering and Prediction of Rankings Within a Kemeny Distance Framework, in B. Lausen, D. Van den Poel, A. Ultsch (eds), Algorithms from and for Nature and Life, pp 19–31 (Springer, 2013).
-  M. Truchon, Aggregation of Rankings: a Brief Review of Distance-based Rules and Loss Functions for the Expected Loss Approach, CIRPÉE Working Paper No. 05–34 (2005). http://dx.doi.org/10.2139/ssrn.984305
-  B. Monjardet, ‘Mathématique Sociale’ and Mathematics. A case study: Condorcet’s effect and medians, Electronic Journal for History of Probability and Statistics, 4, pp 1–25 (2008).
-  Data is publicly available at http://www.ieor.berkeley.edu/~goldberg/jester-data/
-  Data is publicly available at http://grouplens.org/datasets/movielens/
-  A. M. Feldman and R. Serrano, Welfare Economics and Social Choice Theory (Springer, 2006).
-  http://www.airesis.it
-  http://www.sapienzaapps.it/rateit.php
-  RankIt, Android Mobile application, available on Google Play, https://play.google.com/store/apps/details?id=sapienza.informatica.rankit
-  H. Markowitz, Portfolio Selection, Journal of Finance 7, pp 77–99 (1952).
Supplementary Material for Egalitarianism in the rank aggregation problem: a new dimension for democracy
The set of rankings without ties for $C$ candidates, $\mathcal{S}_C$, has cardinality $C!$. Let us call $Z(C)$ the cardinality of the set $\mathcal{R}_C$ of rankings including ties. The $Z(C)$ are sometimes called Fubini, or Cayley, numbers. One can show (see references therein) that their exponential generating function is
$$\sum_{C \ge 0} Z(C) \frac{t^C}{C!} = \frac{1}{2 - e^t},$$
whose radius of convergence is $\ln 2$. This can be used to find $Z(C)$ from derivatives, and gives $Z(0)=1$, $Z(1)=1$, $Z(2)=3$, $Z(3)=13$, $Z(4)=75$, $Z(5)=541$, $Z(6)=4683$, $Z(7)=47293$, $Z(8)=545835$, $Z(9)=7087261$, $Z(10)=102247563$, etc. These numbers grow according to the formula
$$Z(C) \simeq \frac{C!}{2 (\ln 2)^{C+1}},$$
with a sub-leading correction decaying exponentially fast in $C$.
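The same numbers can be computed without derivatives via the standard recurrence $Z(n) = \sum_{k=1}^{n} \binom{n}{k} Z(n-k)$, obtained by choosing which $k$ candidates share the top position and then ranking the rest; a short sketch:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def fubini(n):
    """Z(n): number of rankings with ties (weak orders) of n candidates."""
    if n == 0:
        return 1
    # choose the k candidates tied at the top, then rank the remaining n - k
    return sum(comb(n, k) * fubini(n - k) for k in range(1, n + 1))
```

`fubini(4)` returns 75 and `fubini(10)` returns 102247563, matching the sequence above.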
One can also consider subsets of $\mathcal{R}_C$: clearly $\mathcal{S}_C \subset \mathcal{R}_C$ and $C! < Z(C)$ for $C > 1$. Another interesting set for applications is the one containing rankings where the first candidate is untied. For each set of rankings, our method provides a subset of optimal rankings according to the following definition.
For two rankings $\pi_1$ and $\pi_2$ in a set $\mathcal{A}$, we say that $\pi_1$ improves $\pi_2$ if $F(\pi_1) \le F(\pi_2)$ and $\sigma(\pi_1) \le \sigma(\pi_2)$, and at least one of the two inequalities is strict. The optimal set is the set of elements of $\mathcal{A}$ that cannot be improved by other elements of $\mathcal{A}$.
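Under this definition the optimal set is a Pareto frontier in the $(F,\sigma)$ plane; a direct quadratic-time sketch (function names ours):

```python
def improves(q, p):
    """q improves p: no worse in both coordinates, strictly better in one."""
    return q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])

def optimal_set(points):
    """Points (F, sigma) not improved by any other point."""
    return [p for p in points if not any(improves(q, p) for q in points)]
```

For instance, `optimal_set([(1, 3), (2, 2), (3, 1), (3, 3), (2, 4)])` keeps the first three points and discards the two dominated ones.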
In the web platform that we have developed we show the global optimal set (red diamonds) and the one restricted to rankings with no ties (purple circles). In other contexts, like engineering or economics, the optimal set of vectors of a $k$-dimensional space is called the Pareto frontier. In general, the computation of such an optimal set requires a time proportional to the cardinality of the set of rankings, that is, a time exponential in the number of candidates.
Indeed, even the computation of the Condorcet optimal consensus ranking with the Kemeny distance (which is one element of the optimal set) is in general an NP-hard problem. However, if the ranked ballots given in input are not too dissimilar, this optimum can be computed in polynomial time. Unfortunately, the cases where our new criterion is meaningful are exactly those where the ranked ballots are not too similar. We believe that for the computation of the optimal set in the large-$C$ limit one should resort to Monte Carlo methods, already successfully used in the computation of Kemeny optimal rankings.
Distribution of distances
For the joke dataset discussed in the main paper, Figure 6 presents a deeper analysis of the ten rankings belonging to the optimal set. These range from the Condorcet solution $\pi^*$, with minimal $F$ but large $\sigma$, to the solution with minimum $\sigma$ but larger $F$. The figure shows the distribution of the distances between these rankings and the voter ballots (in other words, the distribution of voter dissatisfaction for these rankings). These distributions change from wide for the Condorcet ranking, where voter satisfaction is very uneven, to narrower distributions as $\sigma$ decreases and satisfaction becomes more comparable among voters.
It is worth noting that the consensus ranking (fourth subplot on the first line of Figure 6) that we have proposed as the optimal one, based on the criterion of minimizing $\sigma$ among rankings of low $F$, is indeed the one that avoids strongly unsatisfied voters (distances larger than 15 are almost absent), without appreciably changing the mean value of the distribution.
Distance between rankings
The Kemeny distance $d(\pi_1,\pi_2)$, which we have used in the main text, is one of the possible ways of quantifying how dissimilar two rankings $\pi_1$ and $\pi_2$ are. Intuitively, the distance counts how many pairwise comparisons of candidates do not match between the two rankings. For instance, if candidate $a$ is preferred to candidate $b$ in one ranking, but $b$ is preferred to $a$ in the other, that pair counts 1 in the distance. If one ranking considers $a$ and $b$ tied while the other does not, then that pair counts $1/2$ in the distance. By summing over all possible pairs, with $(a,b)$ and $(b,a)$ counted separately, one obtains the Kemeny distance between the two rankings.
The computation of $d$ is simpler if rankings are rewritten in terms of score matrices $S^\pi$, with
$$S^\pi_{ab} = \begin{cases} 1 & \text{if } a \text{ is ranked above } b \text{ in } \pi, \\ 1/2 & \text{if } a \text{ and } b \text{ are tied in } \pi, \\ 0 & \text{otherwise.} \end{cases}$$
The Kemeny distance between rankings $\pi_1$ and $\pi_2$ is thus given by
$$d(\pi_1, \pi_2) = \sum_{a \neq b} \left| S^{\pi_1}_{ab} - S^{\pi_2}_{ab} \right|.$$
Some winner selection methods
In the plots in the main paper we have shown the consensus rankings obtained by some well-known winner selection methods: Copeland, Schulze and Tideman. Here we provide a detailed description of these methods, which are the most commonly used in situations where the voter ballots are lists of candidates ordered by preference (ranked ballots).
We consider the same situation as in the main paper, where $V$ voters express their preferences about $C$ candidates. The ballot of each voter can be conveniently mapped to a vector of integers representing the positions of the candidates in the preference list: for example, with three candidates $A$, $B$ and $C$, the ballot $(B \succ A \succ C)$ corresponds to the vector $r = (2,1,3)$. From the vectors $r_i$, with $i = 1, \ldots, V$, representing the voter ballots, we can build the matrix of total preferences $M$, whose elements are
$$M_{ab} = \sum_{i=1}^{V} \mathbb{1}\left[ r_i(a) < r_i(b) \right],$$
where the indicator function is defined as
$$\mathbb{1}[x] = \begin{cases} 1 & \text{if } x \text{ is true}, \\ 0 & \text{otherwise.} \end{cases}$$
In practice, the matrix element $M_{ab}$ counts how many voters prefer candidate $a$ to candidate $b$. The result of any voting method based only on pairwise comparisons between candidates can be obtained from the matrix $M$.
A method for selecting a consensus ranking based on the scores $M_{ab}$ is Copeland, also known as the pairwise comparison method. Candidates are ranked according to the score $s_a$ that counts the number of pairwise comparisons won plus half of those tied:
$$s_a = \sum_{b \neq a} \left( \mathbb{1}\left[ M_{ab} > M_{ba} \right] + \frac{1}{2}\, \mathbb{1}\left[ M_{ab} = M_{ba} \right] \right).$$
The Copeland winning candidate(s) is the one with maximum $s_a$.
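A compact sketch of both steps, the preference matrix and the Copeland score, using our own encoding of ballots as candidate-to-position dictionaries:

```python
def preference_matrix(ballots, cands):
    """M[a][b]: number of voters ranking a strictly above b."""
    M = {a: {b: 0 for b in cands} for a in cands}
    for r in ballots:
        for a in cands:
            for b in cands:
                if a != b and r[a] < r[b]:
                    M[a][b] += 1
    return M

def copeland_scores(M, cands):
    """Pairwise comparisons won, plus half of those tied."""
    return {a: sum(1.0 if M[a][b] > M[b][a] else
                   0.5 if M[a][b] == M[b][a] else 0.0
                   for b in cands if b != a)
            for a in cands}

# two voters say A > B > C, one says B > C > A
cands = ["A", "B", "C"]
ballots = [{"A": 0, "B": 1, "C": 2},
           {"A": 0, "B": 1, "C": 2},
           {"B": 0, "C": 1, "A": 2}]
scores = copeland_scores(preference_matrix(ballots, cands), cands)
```

Here A wins both pairwise comparisons (score 2.0), B wins one (1.0) and C none (0.0).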
The Schulze method is also based on pairwise comparisons between candidates. To compute the Schulze ranking from the matrix $N$ we first have to compute the matrix $P$ of beatpaths, by initializing it as $P_{ab} = N_{ab}$ if $N_{ab} > N_{ba}$ and $P_{ab} = 0$ otherwise, and then iterating until convergence
$$P_{ab} \leftarrow \max\!\left( P_{ab},\; \max_{c \neq a,b} \min(P_{ac}, P_{cb}) \right).$$
The number of iterations needed to make the matrix $P$ converge is given by the length of the longest beatpath, which is at most the number of candidates $C$. Subsequently, candidates are ranked according to a score analogous to the pairwise one, computed on the matrix $P$:
$$s'_a = \sum_{b \neq a} \left( \mathbb{1}[P_{ab} > P_{ba}] + \tfrac{1}{2}\, \mathbb{1}[P_{ab} = P_{ba}] \right).$$
The Schulze winner(s) is the candidate(s) with maximum $s'_a$.
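The beatpath computation can be sketched with a Floyd–Warshall-style update (an illustrative Python version of the standard Schulze algorithm; initializing P with only the won pairwise comparisons is one common convention):

```python
import numpy as np

def schulze_scores(N):
    """Strongest beatpaths P and the resulting Schulze scores."""
    C = N.shape[0]
    P = np.where(N > N.T, N, 0.0)     # keep only won pairwise comparisons
    for c in range(C):                # widest-path (Floyd-Warshall) update
        for a in range(C):
            for b in range(C):
                if a != b != c != a:
                    P[a, b] = max(P[a, b], min(P[a, c], P[c, b]))
    wins = (P > P.T).sum(axis=1)
    ties = np.isclose(P, P.T).sum(axis=1) - 1   # discard diagonal self-ties
    return wins + 0.5 * ties
```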
Tideman (ranked pairs) is a further method of selecting a consensus ranking. To compute the Tideman solution, the elements $N_{ab}$ of the matrix $N$ are sorted in decreasing order and considered one by one. When the element $N_{ab}$ is considered, the relative order $a \succ b$ is fixed in the final ranking, unless it is in contrast with the partial ordering already fixed by the larger elements of $N$ previously considered.
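A compact sketch of this procedure (illustrative Python; pairs are locked after a cycle check, and candidates are then ordered by their number of locked wins, a simplification of a full topological sort):

```python
import numpy as np

def tideman_ranking(N):
    """Tideman (ranked pairs): lock pairwise results from the strongest
    to the weakest, skipping any pair contradicting the locked order."""
    C = N.shape[0]
    pairs = sorted(((N[a, b], a, b) for a in range(C)
                    for b in range(C) if a != b), reverse=True)
    locked = np.zeros((C, C), dtype=bool)

    def reachable(a, b):
        """Is there a locked path from a to b already?"""
        stack, seen = [a], set()
        while stack:
            x = stack.pop()
            if x == b:
                return True
            if x not in seen:
                seen.add(x)
                stack.extend(np.flatnonzero(locked[x]))
        return False

    for _, a, b in pairs:
        if not reachable(b, a):       # locking a -> b creates no cycle
            locked[a, b] = True
    return np.argsort(-locked.sum(axis=1))   # most locked wins first
```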
Statistics of the raw data
To provide a better view of the datasets used in the main paper, we include here the distributions of rating values for the five jokes and five movies analyzed (Figures 7 and 8, respectively).
[Table 1: ballot value vs. number of voters.]
Joke data show wide distributions of the ratings for each joke, with jokes D and E having right-skewed distributions unlike the other jokes, which explains their dominance in the consensus rankings provided by the different winner selection methods (see discussion of Figure 2 in the main text). It is worth noting that, although the mean ratings for jokes D and E are larger than the mean ratings for jokes A, B and C, no other difference among the means is significant, given that the rating standard deviations are of order 5.
For the AIRESIS dataset analyzed in the main paper, we include directly the votes expressed by the users, since only a small number of distinct ballots are present. Table 1 shows the number of voters opting for each ranked ballot.
Further results on real polls
Due to space restrictions, the discussion in the main paper was limited to only a few examples of vote instances. However, the new method of analysis we have introduced works nicely on every voting instance we have studied. Here we report some more results from the jokes and AIRESIS data in the usual two-dimensional space spanned by the mean voter distance and its fluctuation.
Figure 9 plots the mean distance and its fluctuation for another set of five jokes from the Jester database, rated by 16049 voters. As before, for each voter the ranked ballot is obtained by ordering the five jokes based on the ratings the voter provided for them. The figure shows that standard methods obtain good results in terms of the mean distance; however, the fluctuation can be reduced by selecting a different consensus ranking, which in this case contains ties.
Further examples of data from the AIRESIS platform are shown in Figure 10. These show again that, in general, existing winner selection methods are not optimal with respect to minimizing the fluctuation (and sometimes not even with respect to minimizing the mean distance). A clear improvement can be achieved by considering rankings in the optimal set we have introduced.
Different distance measures
The new method proposed in this paper has been illustrated using the Kemeny distance among rankings. There are of course many different distances to be considered, and the choice among them depends mostly on the application. There are whole classes of distances that bound each other from above and below by multiplicative constants.
An important generalization of a given distance is to introduce weights. In some problems there are, for instance, candidates of different importance, and it is possible to introduce candidate weights. In other problems the position of the candidate in the ballot counts, e.g. candidates near the top of the ballot are likely to be more important than those at the bottom, so it may be useful to introduce position weights. Weights can also be introduced by dividing candidates into homogeneity classes: swaps of two candidates belonging to the same class have a smaller weight than swaps between different classes. The properties of these weighted distances have been studied in the literature.
Moreover, beyond distances based on the $L_1$ metric, like the Kemeny one, it is natural to consider also distances based on the $L_2$ metric or even higher-order metrics.
We have checked several of the above generalizations and found that the general structure of the rankings in the (mean distance, fluctuation) space, and hence the conclusions drawn in the main paper, does not change with the distance measure. As a representative example, we show in Figure 11 the same data from the joke ratings presented in Figure 2 of the main paper, but using a Kemeny distance modified with position weights. This gives more importance to candidates at the top of a voter ballot (a pair swapped at the top counts more in the distance than a pair swapped at the bottom of the ranking). Specifically, in the computation of the distance, each pairwise term in equation 4 is multiplied by a weight that decreases with the position of the pair in the ranking.
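To make the position weighting concrete, here is one possible implementation. The specific weight function (the inverse of the topmost position involved in the pair) is only an illustrative assumption, not necessarily the exact form used for Figure 11:

```python
import numpy as np

def score_matrix(positions):
    """Score matrix S: 1 if a preferred to b, 1/2 if tied, 0 otherwise."""
    p = np.asarray(positions)
    S = 1.0 * (p[:, None] < p[None, :]) + 0.5 * (p[:, None] == p[None, :])
    np.fill_diagonal(S, 0.0)
    return S

def weighted_kemeny(pos1, pos2, weight=lambda k: 1.0 / k):
    """Kemeny distance with position weights: a disagreement on a pair
    counts more when the pair sits near the top of either ranking."""
    p1, p2 = np.asarray(pos1, float), np.asarray(pos2, float)
    disagreement = np.abs(score_matrix(p1) - score_matrix(p2))
    # topmost (smallest) position of the pair (a, b) in either ranking
    top = np.minimum(np.minimum(p1[:, None], p1[None, :]),
                     np.minimum(p2[:, None], p2[None, :]))
    return float((disagreement * weight(top)).sum())
```

With this choice, in a three-candidate ranking, swapping the two top candidates costs twice as much as swapping the two bottom ones.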
The structure of the ranking space is mostly unchanged, and indeed the consensus ranking that we identify through the minimization of the fluctuation among the low-mean-distance rankings is the same one we found with the standard Kemeny distance.
Another possible generalization is to measure the fluctuations in the voters' satisfaction, that is the fluctuation term in formula (2), not only through the standard deviation of the distances, but also through the average of the absolute deviations from the mean. Since the two measures are both reasonable estimators of the fluctuations, we expect them to give similar results when the number of voters is large.
Noise sensitivity and how to estimate the uncertainty on the mean distance and its fluctuation
A very important feature of a fair voting method is its robustness with respect to small noise fluctuations: one would like to avoid a consensus ranking changing drastically because of noise-induced fluctuations. When running a poll, there may be several different sources of noise, the simplest one being fluctuations in the number of participants. In Fig. 3 in the main paper we have shown the fluctuations in the mean distance and its fluctuation induced by a random elimination of a fraction $f$ of the votes, uniformly chosen among the available votes. The data shown in Fig. 3, obtained for different values of $f$, show a rather good collapse once rescaled according to the following argument.
Given a set of $V$ votes, where ranking $\sigma$ appears $n_\sigma$ times, the subsampling produces a subset of votes with frequencies $n'_\sigma$ binomially distributed, with mean $(1-f)\,n_\sigma$ and variance $f(1-f)\,n_\sigma$. Calling $\bar d$ the mean Kemeny distance computed with the frequencies $n_\sigma$ and $\bar d'$ the one computed with the frequencies $n'_\sigma$, a simple computation based on the assumption of independence of the $n'_\sigma$ shows that the variance of $\bar d'$ over many subsamplings scales proportionally to $f/(1-f)$. In other words, the quantity $\mathrm{Var}(\bar d')\,(1-f)/f$ should be roughly $f$-independent (as shown in Fig. 3) and is closely related to the uncertainty on $\bar d$.
A standard way to compute the uncertainty on any experimental measure is the so-called bootstrap method: starting from the original data set of $V$ measures, new data sets of $V$ measures each can be obtained by randomly choosing among the original measures with replacement (so that each measure appears in a new data set a number of times which is approximately a Poisson random variable of mean 1). For any observable, e.g. the mean distance, computed on the original data set, the uncertainty can be estimated from its fluctuations among the new data sets. In Fig. 12 we show the uncertainties obtained in this way for the mean distance and its fluctuation, and we notice that they are very close to the fluctuations induced by subsampling (shown in Fig. 3).
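The bootstrap procedure described above can be sketched in a few lines (an illustrative Python version, resampling with replacement):

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_error(measures, estimator, n_boot=1000):
    """Bootstrap uncertainty: resample the V measures with replacement
    n_boot times and take the spread of the estimator over the resamples."""
    x = np.asarray(measures)
    V = len(x)
    resampled = [estimator(x[rng.integers(0, V, size=V)])
                 for _ in range(n_boot)]
    return float(np.std(resampled))
```

For instance, `bootstrap_error(distances, np.mean)` estimates the uncertainty on the mean voter distance, and `bootstrap_error(distances, np.std)` that on its fluctuation.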
References and Notes
-  http://oeis.org/A000670
-  http://www.sapienzaapps.it/rateit.php?c=3
-  A. M. Feldman and R. Serrano, Welfare Economics and Social Choice Theory (Springer, 2006).
-  P. Godfrey, R. Shipley and J. Gryz, Algorithms and analyses for maximal vector computation, The VLDB Journal 16, pp. 5–28 (2007). http://dx.doi.org/10.1007/s00778-006-0029-7
-  N. Betzler, M. R. Fellows, J. Guo, R. Niedermeier and F. A. Rosamond, Fixed-parameter algorithms for Kemeny rankings, Theoretical Computer Science 410, 4554–4570 (2009). http://dx.doi.org/10.1016/j.tcs.2009.08.033
-  M. E. Renda and U. Straccia, Web Metasearch: Rank vs. Score Based Rank Aggregation Methods, in Proceedings of the 2003 ACM Symposium on Applied Computing (SAC '03), pp. 841–846 (2003). http://doi.acm.org/10.1145/952532.952698
-  J. G. Kemeny, Mathematics without numbers, Daedalus 88, 571 (1959). J. G. Kemeny and J. L. Snell, Mathematical Models in the Social Sciences (Blaisdell, New York, 1962).
-  C. Borgers, Mathematics of Social Choice: Voting, Compensation, and Division (SIAM, 2010). http://dx.doi.org/10.1137/1.9780898717624
-  P. Diaconis and R. Graham, Spearman footrule as a measure of disarray, Journal of the Royal Statistical Society B (Methodological) 39, pp. 262–268 (1977).
-  R. Kumar and S. Vassilvitskii, Generalized distances among rankings, in Proc. 19th Intl. Conf. on World Wide Web (WWW '10), pp. 571–580 (2010). http://doi.acm.org/10.1145/1772690.1772749
-  W. D. Cook and L. M. Seiford, On the Borda-Kendall Consensus Method for Priority Ranking Problems, Management Science 28, 621–637 (1982).