Who’s good this year? Comparing the Information Content of Games in the Four Major US Sports

Julian Wolfson and Joseph S. Koopmeiners

Division of Biostatistics, University of Minnesota, Minneapolis, Minnesota

Corresponding author’s email address: julianw@umn.edu

July 23, 2018

In the four major North American professional sports (baseball, basketball, football, and hockey), the primary purpose of the regular season is to determine which teams most deserve to advance to the playoffs. Interestingly, while the ultimate goal of identifying the best teams is the same, the number of regular season games played differs dramatically between the sports, ranging from 16 (football) to 82 (basketball and hockey) to 162 (baseball). Though length of season is partially determined by many factors including travel logistics, rest requirements, playoff structure and television contracts, it is hard to reconcile the 10-fold difference in the number of games between, for example, the NFL and MLB unless football games are somehow more “informative” than baseball games. In this paper, we aim to quantify the amount of information games yield about the relative strength of the teams involved. Our strategy is to assess how well simple paired comparison models fitted from X% of the games within a season predict the outcomes of the remaining (100-X)% of games, for multiple values of X. We compare the resulting predictive accuracy curves between seasons within the same sport and across all four sports, and find dramatic differences in the amount of information yielded by individual game results in the four major U.S. sports.

1 Introduction

In week 14 of the 2012 NFL season, the 9-3 New England Patriots squared off on Monday Night Football against the 11-1 Houston Texans in a game with major implications for both teams. At the time, the Texans had the best record in the AFC and were in line to earn home-field advantage throughout the playoffs, while the New England Patriots had the best record in their division and were hoping to solidify their playoff position and establish themselves as the favorites in the AFC. The Patriots ultimately defeated the Texans 42-14, which led some commentators to conclude that the Patriots were the favorites to win the Super Bowl (Walker, 2012; MacMullan, 2012) and that Tom Brady was the favorite for the MVP award (Reiss, 2012), while others opined that the Texans were closer to pretenders than the contenders they appeared to be for the first 13 weeks of the season (Kuharsky, 2012). These are strong conclusions to reach based on the results of a single game, but the power of such “statement games” is accepted wisdom in the NFL. In contrast, it is rare for the outcome of a single regular-season game to create or change the narrative about a team in the NBA, NHL, or MLB. While one might argue that the shorter NFL season simply drives commentators to imbue each game with greater metaphysical meaning, an alternative explanation is that the outcome of a single NFL contest actually does carry more information about the relative strengths of the teams involved than a single game result in the other major North American professional sports. In this paper, we ask and attempt to answer the basic question: how much does the outcome of a single game tell us about the relative strength of the two teams involved?

In the four major North American professional sports (baseball, basketball, football, and hockey), the primary purpose of the regular season is to determine which teams most deserve to advance to the playoffs. Interestingly, while the ultimate goal of identifying the best teams is the same, the number of regular season games played differs dramatically between the sports, ranging from 16 (football) to 82 (basketball and hockey) to 162 (baseball). Though length of season is partially determined by many factors including travel logistics, rest requirements, playoff structure and television contracts, it is hard to reconcile the 10-fold difference in the number of games in the NFL and MLB seasons unless games in the former are somehow more informative about team abilities than games in the latter. Indeed, while it would be near-heresy to determine playoff eligibility based on 16 games of an MLB season (even if each of the 16 games was against a different opponent), this number of games is considered adequate for the same purpose in the NFL.

There is a well-developed literature on the topic of competitive balance and parity in sports leagues (Owen, 2010; Horowitz, 1997; Mizak et al., 2005; Lee, 2010; Hamlen, 2007; Cain and Haddock, 2006; Larsen et al., 2006; Ben-Naim et al., 2006; Késenne, 2000; Vrooman, 1995; Koopmeiners, 2012). However, most papers focus on quantifying the degree of team parity over consecutive years along with the effects of measures taken to increase or decrease it. In papers which compare multiple sports, season length is often viewed as a nuisance parameter to be adjusted for rather than a focus of inquiry. Little attention has been directed at the question of how information on relative team strength accrues over the course of a single season.

In this paper, we aim to quantify the amount of information each game yields about the relative strength of the teams involved. We estimate team strength via paired-comparison (Bradley and Terry, 1952) and margin-of-victory models which have been applied to ranking teams in a variety of sports (McHale and Morton, 2011; Koehler and Ridpath, 1982; Sire and Redner, 2009; Martin, 1999). The growth in information about the relative strength of teams is quantified by considering how these comparison models fitted from X% of the games in a season predict the outcomes of the remaining (100-X)% of games, for multiple values of X (games are partitioned into training and test sets at random to reduce the impact of longitudinal trends over the course of a season). We begin by describing the data and analysis methods we used in Section 2. Section 3 presents results from recent seasons of the four major North American sports, and compares the “information content” of games across the four sports. In Section 4 we discuss the strengths and limitations of our analysis.

2 Methods

2.1 Data

We consider game results (home and away score) for the 2004-2012 seasons for the NFL, the 2003-2004 to 2012-2013 seasons of the NBA, the 2005-2006 to 2012-2013 seasons of the NHL, and the 2006-2012 seasons of MLB. Game results for the NFL, NBA and NHL were downloaded from Sports-Reference.com (Drinen, 2014) and game results for MLB were downloaded from Retrosheet (Smith, 2014). Only regular season games were considered in our analysis. The NFL plays a 16-game regular season, the NBA and NHL play 82 regular season games and MLB plays 162 regular season games.

2.2 Methods

Let G represent all the games within a single season of a particular sport. Our goal is to quantify the amount of information on relative team strength contained in the outcomes of a set of games G_train ⊆ G, as the number of games contained in G_train varies. We consider how well the results of games in the “training set” G_train allow us to predict the outcomes of games in a “test set” G_test = G \ G_train. Specifically, given G_train and G_test, our information metric (which we formally define later) is the percentage of games in G_test which are correctly predicted using a paired comparison model applied to G_train.

We consider two types of paired comparison models in our work. Each game provides information on the home team (i), the away team (j), and the game result as viewed from the home team’s perspective. When only the binary win/loss game result is considered, we fit a standard Bradley-Terry model (Bradley and Terry, 1952; Agresti, 2002),

logit(p_ij) = β_i − β_j + α,    (1)

where p_ij is the probability that the home team, team i, defeats the visiting team, team j. β_i and β_j are the team strength parameters for teams i and j, respectively, and α is a home-field advantage parameter.

We fit a similar model when the actual game scores are considered. In this context, the home team margin of victory (MOV) is recorded for each game; MOV_ij is positive for a home team win, negative for a home team loss, and zero for a tie. The paired comparison model incorporating margin of victory is:

E[MOV_ij] = δ_i − δ_j + η,    (2)

where E[MOV_ij] is the expected margin of victory for the home team, team i, over the visiting team, team j. δ_i and δ_j are team strengths on the margin-of-victory scale for teams i and j, respectively, and η is a home-field advantage on the margin-of-victory scale.

Both models (1) and (2) can be fit using standard statistical software, such as R (R Core Team, 2013). Given estimates of β_i, β_j, and α derived by fitting model (1) to a set of games G_train, a predicted home team win probability p_ij can be derived for every game in G_test based on which teams i and j are involved. A binary win/loss prediction for the home team is obtained according to whether p_ij is greater/less than 0.5. Given estimates of δ_i, δ_j, and η from fitting model (2), the home team margin of victory MOV_ij can similarly be predicted for every game in G_test. A win/loss prediction for the home team is obtained according to whether the predicted MOV_ij is positive, negative, or zero.
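The paper fits these models in R; as an illustrative sketch (toy data, our own function names, and plain gradient ascent rather than R's model-fitting machinery), model (1) can be fit and used for prediction as follows:

```python
import math

def fit_bradley_terry(games, n_teams, n_iter=2000, lr=0.05):
    """Fit model (1), logit P(home win) = beta[h] - beta[a] + alpha,
    by gradient ascent on the Bradley-Terry log-likelihood.
    `games` is a list of (home, away, home_win) tuples; beta[0] is
    pinned at 0 for identifiability."""
    beta = [0.0] * n_teams
    alpha = 0.0
    for _ in range(n_iter):
        grad_beta = [0.0] * n_teams
        grad_alpha = 0.0
        for h, a, y in games:
            p = 1.0 / (1.0 + math.exp(-(beta[h] - beta[a] + alpha)))
            grad_beta[h] += y - p   # d log-lik / d beta_h
            grad_beta[a] -= y - p   # d log-lik / d beta_a
            grad_alpha += y - p     # d log-lik / d alpha
        beta = [b + lr * g for b, g in zip(beta, grad_beta)]
        beta[0] = 0.0               # reference team
        alpha += lr * grad_alpha
    return beta, alpha

def predict_home_win(beta, alpha, h, a):
    """Binary prediction: home team h beats away team a iff p > 0.5."""
    p = 1.0 / (1.0 + math.exp(-(beta[h] - beta[a] + alpha)))
    return p > 0.5
```

For example, if team 1 wins 16 of 20 meetings with team 0, split evenly between venues, the fitted strengths satisfy beta[1] − beta[0] ≈ log(16/4) ≈ 1.39 and the model predicts team 1 to win whether home or away.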

Our metrics for summarizing the amount of information on relative team strength available from a set of game results G_train for predicting the outcomes of the games in G_test are simply the fractions of games that are correctly predicted by the paired comparison models:

W_BT(G_train, G_test) = (1 / |G_test|) × [number of games in G_test whose win/loss result is correctly predicted by model (1)]

W_MOV(G_train, G_test) = (1 / |G_test|) × [number of games in G_test whose win/loss result is correctly predicted by model (2)]

where the estimated model parameters are derived from game results in G_train, and |G_test| denotes the number of games in G_test.
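In code, each metric is simply classification accuracy on the held-out games; a minimal sketch (names are our own):

```python
def information_metric(predictions, outcomes):
    """Fraction of test-set games whose home-team win/loss result is
    correctly predicted (the quantity W defined above)."""
    if len(predictions) != len(outcomes):
        raise ValueError("one prediction per test-set game required")
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(outcomes)
```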

For a given season, training data sets were formed by randomly sampling games corresponding to X% of that season. Test data sets were created as the within-season complements of the training sets, i.e., if G_train consists of a number of games corresponding to X% of the season, then G_test contains the remaining (100-X)% of games in that season. Training (and corresponding test) data sets were created for X% = 12.5%, 25.0%, 37.5%, 50.0%, 62.5%, 75.0% and 87.5% of the games in each available season. Games were sampled at random so as to reduce the influence of temporal trends over the course of a season, for example, baseball teams who are out of playoff contention trading away valuable players and giving playing time to minor league prospects in August and September.

Information on relative team strength over a single season was computed and summarized as follows:

  1. For X = 12.5, 25, 37.5, 50, 62.5, 75, and 87.5:

    1. Generate 100 training sets G_train (and complementary test sets G_test) by randomly sampling X% of games without replacement from G.

    2. For each training set G_train:

      1. Fit models (1) and (2) to the games in G_train.

      2. Obtain binary win/loss predictions for the games in the test set G_test.

      3. Evaluate the information metrics W_BT(G_train, G_test) and W_MOV(G_train, G_test).

    3. Average the computed information metrics to estimate the predictive accuracy of paired comparison models fitted to data from X% of the entire season; denote these averages by W_BT(X) and W_MOV(X).

  2. Tabulate and plot W_BT(X) and W_MOV(X) across different values of X.
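The procedure above can be sketched as follows, where `fit_and_predict` stands in for fitting models (1)/(2) on the training set and returning test-set accuracy (names and structure are our own):

```python
import random

def accuracy_curve(games, fractions, fit_and_predict, n_rep=100, seed=0):
    """For each training fraction X, average the test-set predictive
    accuracy over n_rep random train/test partitions of the season."""
    rng = random.Random(seed)
    curve = {}
    for frac in fractions:
        n_train = int(round(frac * len(games)))
        accs = []
        for _ in range(n_rep):
            shuffled = list(games)
            rng.shuffle(shuffled)       # random, not chronological, split
            train, test = shuffled[:n_train], shuffled[n_train:]
            accs.append(fit_and_predict(train, test))
        curve[frac] = sum(accs) / n_rep
    return curve
```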

The natural comparison value for our information metrics is the predictive accuracy of a naive model which picks the home team to win every game. As shown in the plots below, the average win probability for the home team (as determined by the parameters α and η in models (1) and (2), respectively) varies from approximately 53% to 61% across the four sports we consider.

3 Results

3.1 National Football League

Figure 1: Percent of games correctly predicted on test set vs. average number of games per team in training set, NFL seasons 2004-2012.

Figure 1 plots the percent of games correctly predicted on the test set versus the average number of games per team in the training set for the 2004-2012 National Football League seasons. Both paired comparison models (i.e., those which incorporate and ignore margin of victory) outperform simply picking the home team to win every game. The margin of victory model appears to perform slightly better than the paired comparison model, though the differences are modest and in some seasons (e.g., 2004 and 2008) are non-existent. The prediction accuracy of both models improves throughout the season in most seasons (years 2008 and 2009 being notable exceptions), indicating that we are gaining information about the relative strengths of teams even in the final weeks of the season.

3.2 National Basketball Association

Figure 2: Percent of games correctly predicted on test set vs. average number of games per team in training set, NBA seasons 2003-2013.

Results for the National Basketball Association can be found in Figure 2. The NBA was the most predictable of the four major North American professional sports leagues. Using 87.5% of games as a training set, our models correctly predicted up to 70% of test-set games across seasons. The NBA also had the largest home court advantage, with home teams winning approximately 60% of games. There was virtually no advantage to including margin of victory in our model; indeed, it led to slightly worse predictions during the 2005-2006 season. The only major difference between the NFL and NBA was the growth of information over the season. While the accuracy of our predictions for the NFL continued to improve as more games were added to the training set, model accuracy for the NBA was no better when 75% of games were included in the training set than when 25% were included. Analyses using the segmented package in R for fitting piecewise linear models (Muggeo, 2003, 2008) confirmed an inflection point in the prediction accuracy curve approximately 25-30 games into the season.
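The segmented package estimates breakpoints via an iterative linearization; the same kind of inflection point can be located, more crudely, by a grid search over candidate breakpoints for a continuous piecewise linear fit. A sketch (invented data; this is not the package's actual algorithm):

```python
import numpy as np

def find_breakpoint(x, y, candidates):
    """Return the candidate breakpoint c minimizing the residual sum of
    squares when regressing y on [1, x, max(x - c, 0)], i.e. a
    continuous piecewise linear fit with a single slope change at c."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    best_c, best_sse = None, np.inf
    for c in candidates:
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - c, 0.0)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((y - X @ coef) ** 2))
        if sse < best_sse:
            best_c, best_sse = c, sse
    return best_c
```

Applied to an accuracy curve that rises steeply for the first 30 games and flattens afterward, the search recovers the breakpoint near game 30.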

3.3 Major League Baseball and the National Hockey League

Figure 3: Percent of games correctly predicted on test set vs. average number of games per team in training set, MLB seasons 2006-2012.
Figure 4: Percent of games correctly predicted on test set vs. average number of games per team in training set, NHL seasons 2005-2013.

Results from Major League Baseball and the National Hockey League are found in Figures 3 and 4, respectively. The results for MLB and the NHL were quite similar, in that both leagues were substantially less predictable than the NFL and NBA. The percentage of games correctly predicted for MLB never exceeded 58% even when 140 games (7/8 of a season) were included in the training set. The NHL was slightly better, but our model was never able to predict more than 60% of games correctly (and this was only achieved in the 2005-2006 season, when the home team win probability was relatively high at 58%). More importantly, prediction accuracy was rarely more than 2-3 percentage points better than the simple strategy of picking the home team in every game for either league. In fact, during the 2007-2008 and 2011-2012 seasons, picking the home team performed better than paired comparison models constructed using a half-season’s worth of game results.

It is perhaps not surprising that the outcome of a randomly chosen baseball game is hard to predict based on previous game results given the significant role that the starting pitcher plays in determining the likelihood of winning. In a sense, the “effective season length” of MLB is far less than 162 games because each team-pitcher pair carries a different win probability. In additional analyses (results not shown), we fit paired comparison models including a starting pitcher effect, but this did not substantially affect our results.

3.4 Comparing the sports

Figure 5 displays curves summarizing the predictive accuracy of the MOV model for the four major sports, aggregated across the years of available data (results from the win-loss model were similar). We see that, even after only 1/8th of the games in a season have been played, substantial information on relative team strength has already accrued in the NBA, while much less can be said at this point about the NFL, NHL, and MLB. Predictive accuracy increases most rapidly with additional games in the NFL, so that predictive accuracy approaches that of the NBA when a substantial fraction of games are used for prediction. As seen above, the overall predictive accuracies for MLB and the NHL remain low, and do not increase markedly with the fraction of games in the training set.

Figure 5: Percent of games correctly predicted by margin of victory model on test set vs. percent of season in training set, for four major U.S. sports leagues. For each sport, connected plotting symbols represent the average predictive accuracy and shaded regions enclose the range of predictive accuracies across the seasons of available data.

Table 1 gives one way of summarizing the informativeness of games in the four major sports, via an odds ratio comparing the predictive accuracy of two models: 1) the MOV paired comparison model using game data from 87.5% of the season, and 2) a prediction “model” which always picks the home team. There is a clear separation between the NFL/NBA, where games played during the season improve the odds of making correct predictions by about 40% over a “home-field advantage only” model, and the NHL/MLB, where the improvement is under 10%.

League   OR
NBA      1.41
NFL      1.46
NHL      1.09
MLB      1.06

Table 1: Odds ratio comparing the predictive accuracy of a MOV paired comparison model using data from 87.5% of a season to the accuracy of a model which always picks the home team.
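For reference, each entry puts the two accuracies on the odds scale before taking their ratio; a minimal sketch (illustrative numbers, not the table's inputs):

```python
def odds_ratio(acc_model, acc_baseline):
    """Odds ratio comparing model accuracy to the accuracy of always
    picking the home team (both given as proportions in (0, 1))."""
    odds = lambda p: p / (1.0 - p)
    return odds(acc_model) / odds(acc_baseline)
```

For example, a model at 70% accuracy against a 60% home-team baseline gives an odds ratio of about 1.56.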

Table 2 summarizes the per-game rate of increase in predictive model accuracy for the four sports. The estimates are obtained by fitting least-squares regression lines to the data displayed in Figure 5. The lines for each sport are constrained to an intercept of 0.5, representing the predictive accuracy of a “no-information” model before any games have been played. In developing prediction models for actual use, one might want to incorporate prior information on the home-field advantage based on previous seasons, but in our simple paired comparison models both team strengths and the home-field advantage are estimated purely from current-season data. Hence, prior to any games being played these models can perform no better than flipping a fair coin. The columns of Table 2 correspond to the estimated rate of increase in predictive accuracy, on a percentage point per game scale, over 25%, 37.5%, 50% and 87.5% of the season.

League   25% of games   37.5% of games   50% of games   87.5% of games
NBA      0.91           0.69             0.55           0.34
NFL      2.6            2.3              2.0            1.4
NHL      0.29           0.23             0.19           0.13
MLB      0.12           0.094            0.079          0.053

Table 2: Estimated per-game percentage point increase in predictive accuracy of a margin-of-victory model for the four U.S. sports leagues, by percentage of games used to train the model.
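With the intercept fixed at 0.5, the least-squares slope has a simple closed form, sketched here (toy accuracy values, not the data behind Table 2):

```python
def slope_through_half(x, y):
    """Least-squares slope b minimizing sum((y - 0.5 - b*x)^2): a
    regression of accuracy y on games-per-team x, constrained to pass
    through accuracy 0.5 at x = 0 games."""
    num = sum(xi * (yi - 0.5) for xi, yi in zip(x, y))
    den = sum(xi * xi for xi in x)
    return num / den
```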

The results in Table 2 allow us to compute a “per-game informativeness ratio” between pairs of sports. For example, considering the last column allows us to estimate that, over the course of the season, NFL games are approximately 4 times more informative than NBA games, which are in turn about 2-3 times more informative than NHL games, which are themselves approximately 2-3 times more informative than MLB games. The “informativeness ratio” of NFL to MLB games is on the order of 65, or about 6 times larger than the inverse ratio of their respective season lengths (162/16 ≈ 10). In contrast, the ratio comparing NFL to NBA games (≈ 4) is slightly smaller than the inverse ratio of their respective season lengths (82/16 ≈ 5).

4 Conclusions and discussion

Our results reveal substantial differences between the major North American sports according to how well one is able to discern team strengths using game results from a single season. NBA games are most easily predicted, with paired comparison models having good predictive accuracy even early in the season; indeed, since our information metric for the NBA appears to plateau around game 30, an argument could be made that the latter half of the NBA season could be eliminated without substantially affecting the ability to identify the teams most deserving of a playoff spot. NFL game results also give useful information for determining relative team strength. On a per-game basis, NFL contests contain the largest amount of information. With the exception of the 2008 season, there was no obvious “information plateau” in the NFL, though the rate of increase in information did appear to slow somewhat after the first 5 games. These results suggest that games in the latter part of the NFL season contribute useful information in determining who the best teams are.

The predictive ability of paired comparison models constructed from MLB and NHL game data remains limited even when results from a large number of games are used. One interpretation of this finding is that, in comparison to the NBA and NFL, games in MLB and the NHL carry little information about relative team strength. Our results may also reflect smaller variance in team strengths (i.e., greater parity) in hockey and baseball: Because our information metric considers the predictive accuracy averaged across all games in the test set, if most games are played between opposing teams of roughly the same strength then most predictive models will fare poorly. Indeed, the inter-quartile range for winning percentage in these sports is typically on the order of 20%, while in football and basketball it is closer to 30%. Our observation that the hockey and baseball regular seasons do relatively little to distinguish between teams’ abilities is reflected in playoff results, where “upsets” of top-seeded teams by teams who barely qualified for the postseason happen much more regularly in the NHL and MLB than in the NFL and NBA. One possible extension of this work would be to quantify this effect more formally.

Indeed, given the relative inability of predictive models to distinguish between MLB teams upon completion of the regular season, a compelling argument could be made for increasing the number of teams that qualify for the MLB playoffs since the current 10-team format is likely to exclude teams of equal or greater ability than ones that make it. Using similar logic, one might also argue that if the goal of the playoffs is to identify the best team (admittedly an oversimplification), then perhaps the NBA playoffs are overly inclusive as there is ample information contained in regular season game outcomes to distinguish between the best teams and those that are merely average.

More surprising to us was the enormous discrepancy in the informativeness of game results between hockey and basketball, which both currently play seasons of the same length but perhaps ought not to. One possible explanation for why basketball game results more reliably reflect team strength is that a large number of baskets are scored, and the Law of Large Numbers dictates that each team approaches their “true” ability level more closely. In contrast, NHL games are typically low-scoring affairs, further compounded by the fact that a large fraction of goals are scored on broken plays and deflections which seem to be strongly influenced by chance. We have not analyzed data from soccer, but it would be interesting to explore whether the “uninformativeness” of hockey and baseball game results extends to other low-scoring sports.

Our analysis has several limitations. First, we chose to quantify information via the predictive accuracy of simple paired comparison models. It is possible that using more sophisticated models for prediction might change our conclusions, though we doubt it would erase the sizable between-sport differences that we observed. Indeed, as mentioned above, accounting for starting pitcher effects in our MLB prediction model did not substantially affect the results. Second, it could be argued that team win probabilities change over the course of a season due to roster turnover, injuries, and other effects. By randomly assigning games to our training and test set without regard to their temporal ordering, we are implicitly estimating “average” team strengths over the season, and applying these to predict the outcome of an “average” game. We chose a random sampling approach over one which would simply split the season because we wanted to eliminate time trends in team strengths when describing how information accrued as more game results were observed. While our approach does not directly describe how predictive accuracy improves as games are played in their scheduled order, we anticipate that the patterns would be similar to what we observed.

References

  1. Alan Agresti. Categorical Data Analysis. Wiley series in probability and statistics. John Wiley & Sons, 2 edition, July 2002. ISBN 0-471-36093-7.
  2. E Ben-Naim, F Vazquez, and S Redner. Parity and predictability of competitions. Quantitative Analysis in Sports, 2(4):1–12, 2006.
  3. RA Bradley and ME Terry. Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, pages 324–345, 1952. URL http://www.jstor.org/stable/2334029.
  4. LP Cain and DD Haddock. Research Notes: Measuring Parity Tying Into the Idealized Standard Deviation. Journal of Sports Economics, 7(3):330–338, 2006. URL http://jse.sagepub.com/content/7/3/330.short.
  5. Doug Drinen. Pro-Football-Reference.com - Pro Football Statistics and History, 2014. URL http://www.pro-football-reference.com.
  6. WA Hamlen. Deviations from equity and parity in the National Football League. Journal of Sports Economics, 8(6):596–615, 2007. URL http://jse.sagepub.com/content/8/6/596.short.
  7. I Horowitz. The increasing competitive balance in Major League Baseball. Review of Industrial Organization, 12(3):373–387, 1997. URL http://link.springer.com/article/10.1023/A:1007799730191.
  8. S Késenne. Revenue sharing and competitive balance in professional team sports. Journal of Sports Economics, 1(1):56–65, 2000. URL http://jse.sagepub.com/content/1/1/56.short.
  9. Kenneth J Koehler and Harold Ridpath. An application of a biased version of the Bradley-Terry-Luce model to professional basketball results. Journal of Mathematical Psychology, 25(3):187–205, 1982.
  10. Joseph S Koopmeiners. A comparison of the autocorrelation and variance of nfl team strengths over time using a bayesian state-space model. Journal of Quantitative Analysis in Sports, 8(3), 2012.
  11. Paul Kuharsky. Texans get a lesson in ’what it takes’, 2012. URL http://espn.go.com/blog/afcsouth/post/_/id/44713/texans-get-a-lesson-in-what-it-takes.
  12. A Larsen, AJ Fenn, and EL Spenner. The impact of free agency and the salary cap on competitive balance in the National Football League. Journal of Sports Economics, 7(4):374–390, 2006. URL http://jse.sagepub.com/content/7/4/374.short.
  13. T Lee. Competitive balance in the national football league after the 1993 collective bargaining agreement. Journal of Sports Economics, 11(1):77–88, 2010. URL http://jse.sagepub.com/content/11/1/77.short.
  14. Jackie MacMullan. Patriots’ defense comes of age, 2012. URL http://espn.go.com/boston/nfl/story/_/id/8735498/new-england-patriots-defense-comes-age-counts.
  15. Donald E K Martin. Paired comparison models applied to the design of the Major League baseball play-offs. Journal of Applied Statistics, 26(1):69–80, 1999.
  16. Ian McHale and Alex Morton. A Bradley-Terry type model for forecasting tennis match results. International Journal of Forecasting, 27(2):619–630, 2011.
  17. D Mizak, A Stair, and A Rossi. Assessing alternative competitive balance measures for sports leagues: a theoretical examination of standard deviations, gini coefficients, the index of. Economics Bulletin, 12(5):1–11, 2005. URL http://www.accessecon.com/pubs/EB/2005/Volume12/EB-04L80002A.pdf.
  18. Vito M.R. Muggeo. Estimating regression models with unknown break-points. Statistics in Medicine, 22:3055–3071, 2003.
  19. Vito M.R. Muggeo. segmented: an r package to fit regression models with broken-line relationships. R News, 8(1):20–25, 2008. URL http://cran.r-project.org/doc/Rnews/.
  20. PD Owen. Limitations of the relative standard deviation of win percentages for measuring competitive balance in sports leagues. Economics Letters, 109(1):38–41, 2010. URL http://www.sciencedirect.com/science/article/pii/S0165176510002648.
  21. R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2013. URL http://www.R-project.org/.
  22. Mike Reiss. Tom Brady shows MVP chops, 2012. URL http://espn.go.com/boston/nfl/story/_/id/8735476/tom-brady-looks-mvp-worthy-romp-houston-texans.
  23. Clément Sire and Sidney Redner. Understanding baseball team standings and streaks. The European Physical Journal B, 67(3):473–481, 2009.
  24. David Smith. Retrosheet.org, 2014. URL http://retrosheet.org.
  25. J Vrooman. A general theory of professional sports leagues. Southern Economic Journal, pages 971–990, 1995. URL http://www.jstor.org/stable/1060735.
  26. James Walker. Patriots are the new Super Bowl favorites, 2012. URL http://espn.go.com/blog/afceast/post/_/id/52084/patriots-are-the-new-super-bowl-favorites.