Modeling outcomes of soccer matches
Abstract
We compare various extensions of the Bradley-Terry model and a
hierarchical Poisson log-linear model in terms of their performance
in predicting the outcome of soccer matches (win, draw, or
loss). The parameters of the Bradley-Terry extensions are estimated
by maximizing the log-likelihood, or an appropriately penalized
version of it, while the posterior densities of the parameters of
the hierarchical Poisson log-linear model are approximated using
integrated nested Laplace approximations. The prediction performance
of the various modeling approaches is assessed using a novel,
context-specific framework for temporal validation that is found to
deliver accurate estimates of the test error. The direct modeling
of outcomes via the various Bradley-Terry extensions and the
modeling of match scores using the hierarchical Poisson log-linear
model demonstrate similar behavior in terms of predictive
performance.
Keywords: Bradley-Terry model; Poisson log-linear hierarchical model; Maximum penalized likelihood; Integrated Nested Laplace Approximation; Temporal validation
1 Introduction
The current paper stems from our participation in the 2017 Machine Learning Journal (Springer) challenge on predicting outcomes of soccer matches from a range of leagues around the world (MLS challenge, in short). Details of the challenge and the data can be found in Berrar et al. (2017).
We consider two distinct modeling approaches for the task. The first approach focuses on modeling the probabilities of win, draw, or loss, using various extensions of Bradley-Terry models (Bradley and Terry, 1952). The second approach focuses on directly modeling the number of goals scored by each team in each match using a hierarchical Poisson log-linear model, building on the modeling frameworks in Maher (1982), Dixon and Coles (1997), Karlis and Ntzoufras (2003) and Baio and Blangiardo (2010).
The performance of the various modeling approaches in predicting the outcomes of matches is assessed using a novel, context-specific framework for temporal validation that is found to deliver accurate estimates of the prediction error. The direct modeling of the outcomes using the various Bradley-Terry extensions and the modeling of match scores using the hierarchical Poisson log-linear model deliver similar performance in terms of predicting the outcome.
The paper is structured as follows: Section 2 briefly introduces the data, presents the necessary data-cleaning operations undertaken, and describes the various features that were extracted. Section 3 presents the various Bradley-Terry models and extensions we consider for the challenge and describes the associated estimation procedures. Section 4 focuses on the hierarchical Poisson log-linear model and the Integrated Nested Laplace Approximations (INLA; Rue et al. 2009) of the posterior densities for the model parameters. Section 5 introduces the validation framework, and the models are compared in terms of their predictive performance in Section 6. Section 7 concludes with discussion and future directions.
2 Preprocessing and feature extraction
2.1 Data exploration
The data contain matches from a range of leagues around the world, for a varying number of seasons per league. Nearly all leagues have data since 2008, with a few having data extending as far back as 2000. There are no cross-country leagues (e.g. UEFA Champions League) or teams associated with different countries. The only way that teams move between leagues is within each country, by either promotion or relegation.
Figure 1 shows the number of available matches for each country in the data set. England dominates the data in terms of matches recorded, with the available matches coming from 5 distinct leagues. The other highly represented countries are Scotland, with data from 4 distinct leagues, and European countries such as Spain, Germany, Italy and France, most probably because they also have a high UEFA coefficient (Wikipedia, 2018).
Figure 2 shows the number of matches per number of goals scored by the home (dark grey) and away (light grey) teams. Home teams appear to score more goals than away teams, with home teams having consistently higher frequencies for two or more goals and away teams having higher frequencies for no goal and one goal. Overall, home teams scored more goals than away teams over the whole data set. In Section 1 of the Supplementary Material, the trend shown in Figure 2 is also found to be present within each country, pointing towards the existence of a home advantage.
2.2 Data cleaning
Upon closer inspection of the original sequence of matches for the MLS challenge, we found and corrected the following three anomalies in the data. First, the complete set of matches from the 2015-2016 season of the Venezuelan league was duplicated in the data; we kept only one instance of these matches. Second, matches from the 2013-2014 season of the Norwegian league were assigned the year 2014 in the date field instead of 2013; the dates for these matches were modified accordingly. Finally, one match in the 2013-2014 season of the Moroccan league (Raja Casablanca vs Maghrib de Fes) was assigned the month February in the date field instead of August; the date for this match was also corrected.
2.3 Feature extraction
The features that were extracted can be categorized as team-specific, match-specific, and/or season-specific. Match-specific features were derived from the information available on each match. Season-specific features have the same value for all matches and teams in a season of a particular league, and differ only across seasons for the same league and across leagues.
Table 1 gives short names, descriptions, and ranges for the features that were extracted. Table 2 gives an example of the values the features take for an artificial data set with observations on the first 3 matches of a season for team A, playing all matches at home. The team-specific features are listed only for team A, to allow for easy tracking of their evolution.
The features we extracted are proxies for a range of aspects of the game, and their choice was based on common sense, our understanding of what is important in soccer, and previous literature. Home (feature 1 in Table 1) can be used for including a home advantage in the models; newly promoted (feature 2 in Table 1) is used to account for the fact that a newly promoted team is typically weaker than the competition; days since previous match (feature 3 in Table 1) carries information regarding the fatigue of the players and the team overall; form (feature 4 in Table 1) is a proxy for whether a team is doing better or worse during a particular period in time compared to its general strength; matches played (feature 5 in Table 1) determines how far into the season a game occurs; points tally, goal difference, and points per match (features 6, 7 and 10 in Table 1) are measures of how well a team is doing so far in the season; goals scored per match and goals conceded per match (features 8 and 9 in Table 1) are measures of a team's attacking and defensive ability, respectively; previous season points tally and previous season goal difference (features 11 and 12 in Table 1) are measures of how well a team performed in the previous season, which can be a useful indicator of how well it will perform in the early stages of a season, when other features such as points tally do not carry much information; finally, team rankings (feature 13 in Table 1) refers to a variety of measures that rank teams based on their performance in previous matches, as detailed in Section 2 of the Supplementary Material.
In order to avoid missing values in the features we extracted, we adopted the following conventions. The value of form for the first match of the season for each team was drawn at random from a Uniform distribution. The form for the second and third match were a third of the points gained in the first match, and a sixth of the total points gained in the first two matches, respectively. Days since previous match was left unspecified for the very first match of the team in the data. If the team was playing its first season then we treated it as being newly promoted. The previous season points tally and the previous season goal difference were set to pre-specified constants for newly promoted teams and for newly relegated teams. These values were set in an ad-hoc manner prior to estimation and validation, based on our sense and experience of what is a small or large value for the corresponding features. In principle, the choice of these values could be made more formally by minimizing a criterion of predictive quality, but we did not pursue this, as it would complicate the estimation-prediction workflow described later in the paper and increase computational effort significantly without any guarantee of improving the predictive quality of the models.
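The conventions above can be sketched in code. This is a minimal illustration, not the paper's implementation: the interval of the Uniform draw for the first match is elided in this copy of the text, so (0, 1) is assumed here.

```python
import random

def form_feature(match_points, rng=random.Random(0)):
    """Form before a team's next match, following the stated conventions.
    match_points holds the points gained in the season's matches so far."""
    n = len(match_points)
    if n == 0:
        # First match of the season: drawn at random
        # (Uniform interval assumed to be (0, 1) here).
        return rng.random()
    if n == 1:
        return match_points[0] / 3.0    # a third of the first match's points
    if n == 2:
        return sum(match_points) / 6.0  # a sixth of the first two matches' points
    return sum(match_points[-3:]) / 9.0  # general case: a ninth of the last three

# A win (3 points) before match 2 gives form 1, matching Table 2
print(form_feature([3]))  # 1.0
```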
Number | Short name | Description | Range
--- Team-specific features ---
1 | Home | 1 if the team is playing at home, and 0 otherwise | {0, 1}
2 | Newly promoted | 1 if the team is newly promoted to the league for the current season, and 0 otherwise | {0, 1}
3 | Days since previous match | number of days elapsed since the previous match of the team |
4 | Form | a ninth of the total points gained in the last three matches in the current season |
5 | Matches played | number of matches played in the current season and before the current match |
6 | Points tally | the points accumulated during the current season and before the current match |
7 | Goal difference | the goals that a team has scored minus the goals that it has conceded over the current season and before the current match |
8 | Goals scored per match | total goals scored per match played over the current season and before the current match |
9 | Goals conceded per match | total goals conceded per match over the current season and before the current match |
10 | Points per match | total points gained per match played over the current season and before the current match |
11 | Previous season points tally | total points accumulated by the team in the previous season of the same league |
12 | Previous season goal difference | total goals scored minus total goals conceded by the team in the previous season of the same league |
13 | Team rankings | a variety of team rankings, based on historical observations; see Section 2 of the Supplementary Material |
--- Season-specific features ---
14 | Season | the league season in which each match is played | labels
15 | Season window | time period in calendar months of the league season | labels
--- Match-specific features ---
16 | Quarter | quarter of the calendar year based on the match date | labels
 | Match 1 | Match 2 | Match 3
--- Match attributes and outcomes ---
League | Country1 | Country1 | Country1
Date | 2033-08-18 | 2033-08-21 | 2033-08-26
Home team | team A | team A | team A
Away team | team B | team C | team D
Home score | 2 | 2 | 0
Away score | 0 | 1 | 0
--- Team-specific features (Team A) ---
Newly promoted | 0 | 0 | 0
Days since previous match | 91 | 3 | 5
Form | 0.5233 | 1 | 1
Matches played | 0 | 1 | 2
Points tally | 0 | 3 | 6
Goal difference |  | 2 | 3
Goals scored per match | 0 | 2 | 2
Goals conceded per match | 0 | 0 | 0.5
Points per match | 0 | 3 | 3
Previous season points tally | 72 | 72 | 72
Previous season goal difference | 45 | 45 | 45
--- Season-specific features ---
Season | 33-34 | 33-34 | 33-34
Season window | August-May | August-May | August-May
--- Match-specific features ---
Quarter | 3 | 3 | 3
3 Modeling outcomes
3.1 Bradley-Terry models and extensions
The Bradley-Terry model (Bradley and Terry, 1952) is commonly used to model paired comparisons, which often arise in competitive sport. For a binary win/loss outcome, let p_ij denote the probability that team i beats team j, for i, j ∈ {1, ..., n},
where n is the number of teams present in the data. The Bradley-Terry model assumes that

p_ij = π_i / (π_i + π_j),

where π_i = exp(λ_i), and λ_i is understood as the "strength" of team i. In the original Bradley-Terry formulation, λ_i does not vary with time.
For the purposes of the MLS challenge prediction task, we consider extensions of the original Bradley-Terry formulation where we allow λ_i to depend on a vector of time-dependent features x_i(t) for team i at time t, as λ_i(t) = f(x_i(t)) for some function f. Bradley-Terry models can also be equivalently written as linking the log-odds of a team winning to the difference in strength of the two teams competing, that is log{p_ij(t) / (1 − p_ij(t))} = λ_i(t) − λ_j(t). Some of the extensions below directly specify that difference.
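As a minimal illustration of the formulation above (in notation reconstructed here, not verbatim from the source), the win probability is the logistic function of the strength difference:

```python
import math

def bt_win_prob(lambda_i, lambda_j):
    # Bradley-Terry: p_ij = pi_i / (pi_i + pi_j) with pi = exp(lambda),
    # i.e. logistic in the strength difference lambda_i - lambda_j
    return 1.0 / (1.0 + math.exp(lambda_j - lambda_i))

print(bt_win_prob(0.0, 0.0))  # 0.5: equally strong teams
print(bt_win_prob(1.0, 0.0))  # > 0.5: the stronger team is favored
```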
BL: Baseline
The simplest specification of all assumes that

λ_i(t) = δ h_i(t),   (1)

where h_i(t) = 1 if team i is playing at home at time t, and h_i(t) = 0 otherwise. The only parameter to estimate with this specification is δ, which can be understood as the difference in strength when the team plays at home. We use this model to establish a baseline to improve upon for the prediction task.
CS: Constant strengths
This specification corresponds to the standard Bradley-Terry model with a home-field advantage, under which

λ_i(t) = α_i + δ h_i(t).   (2)

The above specification involves n + 1 parameters, where n is the number of teams. The parameter α_i represents the time-invariant strength of the ith team.
LF: Linear with features
Suppose now that we are given a vector of p features x_i(t) associated with team i at time t. A simple way to model the team strengths is to assume that they are a linear combination of the features. Hence, in this model we have

λ_i(t) = Σ_{k=1}^{p} β_k x_{ik}(t),   (3)

where x_{ik}(t) is the kth element of the feature vector x_i(t).
Note that the coefficients β_1, ..., β_p in the linear combination are shared between all teams, and so the number of parameters to estimate is p, where p is the dimension of the feature vector. This specification is similar to the one implemented in the R package BradleyTerry (Firth, 2005), but without the team-specific random effects.
TVC: Time-varying coefficients
Some of the features we consider, like points tally (feature 6 in Table 1), vary during the season. Ignoring any special circumstances such as teams being punished, the points accumulated by a team is a non-decreasing function of the number of matches the team has played.
It is natural to assume that the contribution of points accumulated to the strength of a team is different at the beginning of the season than it is at the end. In order to account for such effects, the parameters for the corresponding features can be allowed to vary with the matches played. Specifically, the team strengths can be modeled as

λ_i(t) = Σ_{k ∉ V} β_k x_{ik}(t) + Σ_{k ∈ V} β_k(m_i(t)) x_{ik}(t),   (4)

where m_i(t) denotes the number of matches that team i has played within the current season at time t, and V denotes the set of indices of the coefficients that are allowed to vary with the matches played. The functions β_k(·) can be modeled nonparametrically, but in the spirit of keeping the complexity low we instead set β_k(m) = β_k + γ_k m. With this specification for β_k(·), TVC is equivalent to LF with the inclusion of an extra set of features {m_i(t) x_{ik}(t) : k ∈ V}.
AFD: Additive feature differences with time interactions
For the LF specification, the log-odds of team i beating team j is

log{p_ij(t) / (1 − p_ij(t))} = λ_i(t) − λ_j(t) = Σ_{k=1}^{p} β_k {x_{ik}(t) − x_{jk}(t)}.

Hence, the LF specification assumes that the difference in strength between the two teams is a linear combination of differences between the features of the teams. We can relax the assumption of linearity, and include nonlinear time interactions, by instead assuming that each difference in features contributes to the difference in strengths through an arbitrary bivariate smooth function that depends on the feature difference and the number of matches played. We then arrive at the AFD specification, which can be written as

λ_i(t) − λ_j(t) = Σ_{k=1}^{p} f_k(x_{ik}(t) − x_{jk}(t), m(t)),   (5)

where, for simplicity, we take the number of matches played m(t) to be the number of matches played by the home team.
3.2 Handling draws
The extra outcome of a draw in a soccer match can be accommodated within the Bradley-Terry formulation in two ways.
The first is to treat win, draw and loss as multinomial ordered outcomes, in effect assuming that win ≻ draw ≻ loss, where ≻ denotes strong transitive preference. Then, the ordered outcomes can be modeled using cumulative link models (Agresti, 2015) with the various strength specifications. Specifically, let y_ij(t) take the value 0 for a loss, 1 for a draw, and 2 for a win of team i over team j at time t, and assume that the corresponding random variable Y_ij(t) has

P(Y_ij(t) ≤ y) = F(θ_y − {λ_i(t) − λ_j(t)})  for y ∈ {0, 1},   (6)

where F is the logistic distribution function, and θ_0 ≤ θ_1 are cut-point parameters to be estimated from the data. Cattelan et al. (2013) and Király and Qian (2017) make use of this approach for modeling soccer outcomes.
Another possibility for handling draws is to use the Davidson (1970) extension of the Bradley-Terry model, under which

P(i beats j at time t) = π_i(t) / {π_i(t) + π_j(t) + ν √(π_i(t) π_j(t))},
P(i draws with j at time t) = ν √(π_i(t) π_j(t)) / {π_i(t) + π_j(t) + ν √(π_i(t) π_j(t))},

where π_i(t) = exp{λ_i(t)} and ν ≥ 0 is a parameter to be estimated from the data.
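The Davidson probabilities translate directly into code; this is a small sketch in notation reconstructed here, with nu = 0 recovering the standard Bradley-Terry model:

```python
import math

def davidson_probs(lambda_i, lambda_j, nu):
    """Win/draw/loss probabilities for team i against team j under
    Davidson's (1970) extension; nu >= 0 controls the draw propensity."""
    pi_i, pi_j = math.exp(lambda_i), math.exp(lambda_j)
    tie = nu * math.sqrt(pi_i * pi_j)
    denom = pi_i + pi_j + tie
    return pi_i / denom, tie / denom, pi_j / denom

# Equally strong teams with nu = 1 split the probability evenly
print(davidson_probs(0.0, 0.0, 1.0))
```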
3.3 Estimation
Likelihood-based approaches
The parameters of the Bradley-Terry model extensions presented above can be estimated by maximizing the log-likelihood of the multinomial distribution.
The log-likelihood about the parameter vector θ is

ℓ(θ) = Σ_{(i,j,t) ∈ M} Σ_{y} 1{y_ij(t) = y} log P(Y_ij(t) = y),

where 1{A} takes the value 1 if A holds and 0 otherwise, and M is the set of triplets (i, j, t) corresponding to the matches whose outcomes have been observed.
For estimating the functions f_1, ..., f_p involved in the AFD specification, we represent each f_k using thin plate splines (Wahba, 1990), and enforce smoothness constraints on the estimate of f_k by maximizing a penalized log-likelihood of the form

ℓ_p(θ) = ℓ(θ) − κ θ⊤ P θ / 2,

where P is a penalty matrix and κ is a tuning parameter. For penalized estimation we only consider ordinal models, through the R package mgcv (Wood, 2006), and select κ by optimizing the Generalized Cross Validation criterion (Golub et al., 1979). Details on the fitting procedure for specifications like AFD and the implementation of thin plate spline regression in mgcv can be found in Wood (2003).
The parameters of the Davidson extensions of the Bradley-Terry model are estimated by using the BFGS optimization algorithm (Byrd et al., 1995) to minimize the negative log-likelihood −ℓ(θ).
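To make the estimation step concrete, the sketch below fits the two parameters of a Davidson-type baseline (a home-advantage strength difference delta and the draw parameter nu) by minimizing the negative log-likelihood on toy data. The paper uses the BFGS algorithm of Byrd et al. (1995); a plain finite-difference gradient descent is substituted here to keep the example dependency-free, and the two-parameter setup is an illustration rather than the paper's actual model.

```python
import math

def outcome_probs(delta, nu):
    # Home win / draw / home loss probabilities when the strength
    # difference between the two teams is delta (Davidson model)
    a = math.exp(delta / 2.0)
    denom = a * a + 1.0 + nu * a
    return a * a / denom, nu * a / denom, 1.0 / denom

def neg_log_lik(theta, outcomes):
    # outcomes: 2 = home win, 1 = draw, 0 = home loss
    delta, nu = theta[0], max(theta[1], 1e-8)
    w, d, l = outcome_probs(delta, nu)
    return -sum(math.log((l, d, w)[y]) for y in outcomes)

def fit(outcomes, steps=5000, lr=0.01, h=1e-6):
    # Finite-difference gradient descent on the negative log-likelihood
    theta = [0.0, 1.0]
    for _ in range(steps):
        grad = []
        for k in range(2):
            up, down = theta[:], theta[:]
            up[k] += h
            down[k] -= h
            grad.append((neg_log_lik(up, outcomes) - neg_log_lik(down, outcomes)) / (2 * h))
        theta = [t - lr * g for t, g in zip(theta, grad)]
    return theta

# Toy data: 50 home wins, 30 draws, 20 home losses
delta_hat, nu_hat = fit([2] * 50 + [1] * 30 + [0] * 20)
```

With this saturated two-parameter model, the fitted probabilities approach the empirical proportions (0.5, 0.3, 0.2).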
Identifiability
In the CS model, the team strengths are identifiable only up to an additive constant, because the winning probabilities remain unchanged if the same constant c is added to every strength λ_i, for any real c. This unidentifiability can be dealt with by setting the strength of an arbitrarily chosen team to zero. The CS model was fitted league-by-league, with one identifiability constraint per league.
The cut-points θ_0 and θ_1 in (6) are identifiable only if the specification used for the strengths does not involve an intercept parameter. An alternative is to include an intercept parameter in the strengths and fix θ_0 at a value. The estimated probabilities are invariant to these alternatives, and we use the latter simply because this is the default in the mgcv package.
Other data-specific considerations
The parameters in the LF, TVC, and AFD specifications (which involve features) are shared across the leagues and matches in the data. For computational efficiency, we restrict the fitting procedures to use only a fixed number of the most recent matches (or fewer, if fewer are available) at the time of the first match for which a prediction needs to be made. The CS specification requires estimating the strength parameters directly. For computational efficiency, we estimate the strength parameters independently for each league within each country, and only consider matches that took place in the past calendar year from the date of the first match for which a prediction needs to be made.
4 Modeling scores
4.1 Model structure
Every league consists of a number of teams T, playing against each other twice in a season (once at home and once away). We indicate the number of goals scored by the home and the away team in the mth match of the season (m = 1, ..., M) as y_m^H and y_m^A, respectively.
The observed goal counts y_m^H and y_m^A are assumed to be realizations of conditionally independent random variables Y_m^H and Y_m^A, respectively, with

Y_m^H ~ Poisson(θ_m^H),  Y_m^A ~ Poisson(θ_m^A).

The parameters θ_m^H and θ_m^A represent the scoring intensity in the mth match for the home and away team, respectively.
We assume that θ_m^H and θ_m^A are specified through the regression structures

log θ_m^H = x_m⊤ β + att_{h(m)} + def_{a(m)} + Γ_{h(m), s(m)} + Δ_{a(m), s(m)},
log θ_m^A = x̃_m⊤ β + att_{a(m)} + def_{h(m)} + Γ_{a(m), s(m)} + Δ_{h(m), s(m)}.   (7)

The indices h(m) and a(m) determine the home and away team for match m, respectively, with h(m), a(m) ∈ {1, ..., T}. The parameters β represent the effects corresponding to the observed match- and team-specific features x_m and x̃_m for the home and away team, respectively, collected in a matrix X. The other effects in the linear predictor reflect assumptions of exchangeability across the teams involved in the matches. Specifically, att_i and def_i represent the latent attacking and defensive ability of team i and are assumed to be distributed as

att_i ~ Normal(0, 1/τ_att),  def_i ~ Normal(0, 1/τ_def).

We used vague log-Gamma priors on the precision parameters τ_att and τ_def. In order to account for the time dynamics across the different seasons, we also include the latent interactions Γ_{i,s} and Δ_{i,s} between the team-specific attacking and defensive strengths and the season s, which were modeled using autoregressive specifications with

Γ_{i,s} = ρ_Γ Γ_{i,s−1} + ε_{i,s},  ε_{i,s} ~ Normal(0, 1/τ_Γ),

and

Δ_{i,s} = ρ_Δ Δ_{i,s−1} + η_{i,s},  η_{i,s} ~ Normal(0, 1/τ_Δ).
4.2 Estimation
The hierarchical Poisson log-linear model (HPL) of Subsection 4.1 was fitted using INLA (Rue et al., 2009). Specifically, INLA avoids time-consuming MCMC simulations by numerically approximating the posterior densities for the parameters of latent Gaussian models, which constitute a wide class of hierarchical models of the form

Y_i | x, φ ~ π(y_i | x_i, φ)  (i = 1, ..., n),
x | φ ~ Normal(0, Q(φ)^{-1}),
φ ~ π(φ),

where Y_i is the random variable corresponding to the observed response y_i, x is a set of parameters (which may have a large dimension) and φ is a set of hyperparameters.
The basic principle is to approximate the posterior densities for x and φ using a series of nested Normal approximations. The algorithm uses numerical optimization to find the mode of the posterior, while the marginal posterior distributions are computed using numerical integration over the hyperparameters. The posterior densities for the parameters of the HPL model are computed on the available data for each league.
To predict the outcome of a future match, we simulated samples from the joint approximated predictive distribution of the number of goals y^H and y^A scored in the future match by the home and away teams, respectively, given the features of that match. Sampling was done using the inla.posterior.sample method of the R-INLA package. The predictive distribution has a probability mass function of the form

p(y^H, y^A | y) = ∫ p(y^H, y^A | η) p(η | y) dη,

where y collects the observed goal counts and the vector η collects all model parameters. We then compute the relative frequencies of the events {y^H > y^A}, {y^H = y^A}, and {y^H < y^A}, which correspond to home win, draw, and loss respectively.
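The simulation step can be sketched as follows, with given intensity samples standing in for the draws that inla.posterior.sample would provide (the sampler and interface below are illustrative assumptions, not the R-INLA implementation):

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's multiplication method; adequate for small means like goal counts
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def outcome_frequencies(intensity_samples, seed=1):
    """Approximate P(home win), P(draw), P(home loss) by simulating goal
    counts (y_H, y_A) from sampled scoring intensities (theta_H, theta_A)."""
    rng = random.Random(seed)
    counts = [0, 0, 0]
    for theta_h, theta_a in intensity_samples:
        y_h = sample_poisson(theta_h, rng)
        y_a = sample_poisson(theta_a, rng)
        if y_h > y_a:
            counts[0] += 1
        elif y_h == y_a:
            counts[1] += 1
        else:
            counts[2] += 1
    n = len(intensity_samples)
    return [c / n for c in counts]

# Hypothetical posterior draws: home scoring rate 1.6, away 1.1
probs = outcome_frequencies([(1.6, 1.1)] * 10000)
```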
5 Validation framework
5.1 MLS challenge
The MLS challenge consists of predicting the outcomes (win, draw, loss) of 206 soccer matches from 52 leagues that take place between 31st March 2017 and 10th April 2017. The prediction performance of each submission was assessed in terms of the average ranked probability score (see Subsection 5.2) over those matches. To predict the outcomes of these matches, the challenge participants have access to over 200,000 matches up to and including the 21st March 2017, which can be used to train a classifier.
In order to guide the choice of the model that is best suited to make the final predictions, we designed a validation framework that emulates the requirements of the MLS Challenge. We evaluated the models in terms of the quality of future predictions, i.e. predictions about matches that happen after the matches used for training. In particular, we estimated the model parameters using data from the period before 1st April of each available calendar year in the data, and examined the quality of predictions in the period between 1st and 7th April of that year. For 2017, we estimated the model parameters using data from the period before 14th March 2017, and examined the quality of predictions in the period between 14th and 21st March 2017. Figure 3 is a pictorial representation of the validation framework, illustrating the sequence of experiments and the duration of their corresponding training and validation periods.
5.2 Validation criteria
The main predictive criterion we used in the validation framework is the ranked probability score, which is also the criterion that was used to determine the outcome of the challenge. Classification accuracy was also computed.
Ranked probability score
Let R be the number of possible outcomes (e.g. R = 3 in soccer) and p be the R-vector of predicted probabilities, with jth component p_j ∈ [0, 1] and p_1 + ... + p_R = 1. Suppose that the observed outcomes are encoded in an R-vector o, with jth component o_j ∈ {0, 1} and o_1 + ... + o_R = 1. The ranked probability score is defined as

RPS = 1/(R − 1) Σ_{i=1}^{R−1} { Σ_{j=1}^{i} (p_j − o_j) }².   (8)

The ranked probability score was introduced by Epstein (1969) (see also Gneiting and Raftery, 2007, for a general review of scoring rules) and is a strictly proper probabilistic scoring rule, in the sense that the true odds minimize its expected value (Murphy, 1969).
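The definition of the ranked probability score translates directly into code; the values below reproduce rows of Table 3:

```python
def rps(pred, obs):
    """Ranked probability score for one match: average over the first
    R-1 categories of the squared difference between cumulative
    predicted and cumulative observed probabilities."""
    R = len(pred)
    cum_p = cum_o = 0.0
    total = 0.0
    for j in range(R - 1):
        cum_p += pred[j]
        cum_o += obs[j]
        total += (cum_p - cum_o) ** 2
    return total / (R - 1)

print(rps([1, 0, 0], [1, 0, 0]))      # 0.0, a perfect prediction
print(rps([0, 1, 0], [1, 0, 0]))      # 0.5
print(rps([0.8, 0.2, 0], [1, 0, 0]))  # ≈ 0.02
```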
Classification accuracy
Classification accuracy measures how often the classifier makes the correct prediction, i.e. how many times the outcome with the maximum estimated probability of occurrence actually occurs.
Observed outcome | Predicted probabilities | Predicted outcome | RPS | Accuracy
(1, 0, 0) | (1, 0, 0) | (1, 0, 0) | 0 | 1
(1, 0, 0) | (0, 1, 0) | (0, 1, 0) | 0.5 | 0
(1, 0, 0) | (0, 0, 1) | (0, 0, 1) | 1 | 0
(1, 0, 0) | (0.8, 0.2, 0) | (1, 0, 0) | 0.02 | 1
(0, 1, 0) | (0.33, 0.33, 0.34) | (0, 0, 1) | 0.11 | 0
Table 3 illustrates the calculations leading to the ranked probability score and classification accuracy for several combinations of p and o. The leftmost group of three columns gives the observed outcomes, the next group gives the predicted outcome probabilities, and the third gives the predicted outcomes using maximum probability allocation. The two rightmost columns give the ranked probability scores and classification accuracies. As shown, a ranked probability score of zero indicates a perfect prediction (minimum error) and a ranked probability score of one indicates a completely wrong prediction (maximum error).
The ranked probability score and classification accuracy for a particular experiment in the validation framework are computed by averaging their respective values over the matches in the prediction set. The uncertainty in the estimates from each experiment is quantified using the leave-one-match-out jackknife (Efron, 1982), as detailed in step 9 of Algorithm 1.
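For the average score over a prediction set, the leave-one-out jackknife variance can be sketched in a few lines (for the sample mean it reduces to the familiar sum of squared deviations divided by n(n − 1)):

```python
def jackknife_variance(scores):
    """Leave-one-out jackknife estimate of the variance of the mean
    score over the matches in a prediction set (Efron, 1982)."""
    n = len(scores)
    mean = sum(scores) / n
    # Leave-one-out means: dropping score_i gives (n*mean - score_i)/(n - 1)
    loo_means = [(n * mean - s) / (n - 1) for s in scores]
    loo_bar = sum(loo_means) / n
    return (n - 1) / n * sum((m - loo_bar) ** 2 for m in loo_means)
```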
5.3 Meta-analysis
The proposed validation framework consists of one experiment for each calendar year in the data. Experiment i results in the pair of observations (y_i, σ̂_i²), where y_i is the ranked probability score or classification accuracy from the ith experiment, and σ̂_i² is the associated jackknife estimate of its variance.
We synthesized the results of the M experiments using meta-analysis (DerSimonian and Laird, 1986). Specifically, we make the working assumptions that the summary variances σ̂_1², ..., σ̂_M² are estimated well enough to be considered as known, and that y_1, ..., y_M are realizations of random variables Y_1, ..., Y_M, respectively, which are independent conditionally on independent random effects U_1, ..., U_M, with

Y_i | U_i ~ Normal(U_i, σ̂_i²)

and

U_i ~ Normal(μ, τ²).

The parameter μ is understood here as the overall ranked probability score or classification accuracy, after accounting for the heterogeneity between the experiments.
The maximum likelihood estimate of the overall ranked probability score or classification accuracy is then the weighted average

μ̂ = Σ_{i=1}^{M} w_i y_i / Σ_{i=1}^{M} w_i,

where w_i = 1/(σ̂_i² + τ̂²) and τ̂² is the maximum likelihood estimate of τ². The estimated standard error for the estimator of the overall score can be computed as the square root of the inverse Fisher information about μ, which ends up being (Σ_{i=1}^{M} w_i)^{-1/2}.
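The weighted average and its standard error take only a few lines. In the sketch below the between-experiment variance tau2 is passed in as known, whereas the paper estimates it by maximum likelihood:

```python
def pooled_estimate(scores, variances, tau2):
    """Random-effects pooling: weighted average with weights
    1/(sigma_i^2 + tau^2) and standard error (sum of weights)^(-1/2)."""
    weights = [1.0 / (v + tau2) for v in variances]
    est = sum(w * y for w, y in zip(weights, scores)) / sum(weights)
    se = sum(weights) ** -0.5
    return est, se

# Hypothetical per-experiment ranked probability scores and variances
est, se = pooled_estimate([0.210, 0.215, 0.205], [1e-4, 2e-4, 1.5e-4], tau2=1e-5)
```

With equal variances and tau2 = 0 the pooled estimate reduces to the simple mean, and noisier experiments receive proportionally less weight.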
The assumptions of the random-effects meta-analysis model (independence, normality and fixed variances) are all subject to direct criticism for the validation framework depicted in Figure 3 and the criteria we consider; for example, the training and validation sets defined in the sequence of experiments in Figure 3 are overlapping and ordered in time, so the summaries resulting from the experiments are generally correlated. We proceed under the assumption that these departures are not severe enough to influence inference and conclusions about μ.
5.4 Implementation
Algorithm 1 is an implementation of the validation framework in pseudocode. Each model is expected to have a training method which trains the model on data, and a prediction method which returns predicted outcome probabilities for the prediction set. We refer to these methods as train and predict in the pseudocode.
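Since Algorithm 1 itself is not reproduced here, the following sketch shows the shape of the loop it describes, under an assumed minimal interface: each model exposes the train and predict methods mentioned above, matches are records with a date and an observed outcome vector, and score is a function such as the ranked probability score.

```python
def temporal_validation(model, matches, splits, score):
    """One experiment per (train_end, valid_end) pair: train on matches
    strictly before train_end, predict the matches in the validation
    window, and average the per-match scores."""
    results = []
    for train_end, valid_end in splits:
        train = [m for m in matches if m["date"] < train_end]
        valid = [m for m in matches if train_end <= m["date"] < valid_end]
        if not train or not valid:
            continue  # skip experiments with empty training or validation sets
        model.train(train)
        preds = model.predict(valid)
        scores = [score(p, m["outcome"]) for p, m in zip(preds, valid)]
        results.append(sum(scores) / len(scores))
    return results
```

Each element of the returned list is the per-experiment summary that feeds, together with its jackknife variance, into the meta-analysis of Subsection 5.3.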
6 Results
In this section we compare the predictive performance of the various models we implemented as measured by the validation framework described in Section 5. Table 4 gives the details of each model in terms of features used, the handling of draws (ordinal and Davidson, as in Subsection 3.2), the distribution whose parameters are modeled, and the estimation procedure that has been used.
The sets of features that were used in the LF, TVC, AFD and HPL specifications in Table 4 resulted from ad-hoc experimentation with different combinations of features in the LF specification. All instances of feature 13 refer to the least squares ordinal rank (see Subsection 2.5 of the Supplementary Material). The features used in the HPL specification in (7) were chosen prior to fitting to be home and newly promoted (features 1 and 2 in Table 1), the difference in form and points tally (features 4 and 6 in Table 1) between the two teams competing in match m, and season window and quarter (features 15 and 16 in Table 1) for the season in which match m takes place.
Model | Draws | Features | Distribution | Estimation
BL (1) | Davidson | 1 | Multinomial | ML
BL (1) | Ordinal | 1 | Multinomial | ML
CS (2) | Davidson | 1 | Multinomial | ML
CS (2) | Ordinal | 1 | Multinomial | ML
LF (3) | Davidson | 1, 6, 7, 12, 13 | Multinomial | ML
LF (3) | Ordinal | 1, 6, 7, 12, 13 | Multinomial | ML
TVC (4) | Davidson | 1, 6, 7, 12, 13 | Multinomial | ML
TVC (4) | Ordinal | 1, 6, 7, 12, 13 | Multinomial | ML
AFD (5) | Ordinal | 1, 6, 7, 12, 13 | Multinomial | MPL
HPL (7) | | 1, 2, 4, 6, 15, 16 | Poisson | INLA
TVC (4) | Ordinal | 1, 2, 3, 4, 6, 7, 11 | Multinomial | ML
For each of the models in Table 4, Table 5 presents the ranked probability score and classification accuracy as estimated from the validation framework in Algorithm 1, and as calculated for the matches in the test set for the challenge.
The results in Table 5 are indicative of the good properties of the validation framework of Section 5 in accurately estimating the performance of the classifier on unseen data. Specifically, and excluding the baseline model, the sample correlation between the overall ranked probability score and the average ranked probability score from the matches in the test set is high. The classification accuracy seems to be underestimated by the validation framework.
The TVC model in the final row of Table 5 is the model we used to compute the probabilities for our submission to the MLS challenge. Figure 4 shows the estimated time-varying coefficients for the TVC model; the remaining coefficients are those of form, days since previous match, and newly promoted. Of all the features included, only goal difference and points tally last season had coefficients for which we found evidence of difference from zero when accounting for all other parameters in the model (the p-values from individual Wald tests are both small).
After the completion of the MLS challenge, we explored the potential of new models and achieved even smaller ranked probability scores than the one obtained from the TVC model. In particular, the best-performing model is the HPL model of Subsection 4.1 (starred in Table 5), followed by the AFD model, which achieves a marginally worse ranked probability score. It should be noted here that the two LF models are simpler models that achieve performance close to that of HPL and AFD, without the inclusion of random effects, time-varying coefficients, or any nonparametric specifications.
The direct comparison between the ordinal and Davidson extensions of Bradley-Terry-type models indicates that the differences tend to be small, with the Davidson extensions appearing to perform better.
We also tested the performance of HPL in terms of predicting the actual scores of matches using the validation framework, comparing it to a baseline method that always predicts the average numbers of goals scored by the home and away teams, respectively, in the training data it receives. Using root mean squared error as the evaluation metric, HPL achieved a score of 1.0011 with estimated standard error 0.0077, compared to the baseline's score of 1.0331 with estimated standard error 0.0083.
Model | Draws | RPS (validation) | RPS (test) | Accuracy (validation) | Accuracy (test)
BL | Davidson | 0.2242 (0.0024) | 0.2261 | 0.4472 (0.0067) | 0.4515
BL | Ordinal | 0.2242 (0.0024) | 0.2261 | 0.4472 (0.0067) | 0.4515
CS | Davidson | 0.2112 (0.0028) | 0.2128 | 0.4829 (0.0073) | 0.5194
CS | Ordinal | 0.2114 (0.0028) | 0.2129 | 0.4779 (0.0074) | 0.4951
LF | Davidson | 0.2088 (0.0026) | 0.2080 | 0.4849 (0.0068) | 0.5049
LF | Ordinal | 0.2088 (0.0026) | 0.2084 | 0.4847 (0.0068) | 0.5146
TVC | Davidson | 0.2081 (0.0026) | 0.2080 | 0.4898 (0.0068) | 0.5049
TVC | Ordinal | 0.2083 (0.0025) | 0.2080 | 0.4860 (0.0068) | 0.5097
AFD | Ordinal | 0.2079 (0.0026) | 0.2061 | 0.4837 (0.0068) | 0.5194
HPL* | | 0.2073 (0.0025) | 0.2047 | 0.4832 (0.0067) | 0.5485
TVC | Ordinal | 0.2085 (0.0025) | 0.2087 | 0.4865 (0.0068) | 0.5388
Standard errors from the validation framework are given in parentheses; * marks the best-performing model.
7 Conclusions and discussion
We compared the performance of various extensions of Bradley-Terry models and a hierarchical Poisson log-linear model for the prediction of outcomes of soccer matches. The best performing Bradley-Terry model and the hierarchical Poisson log-linear model delivered similar performance, with the hierarchical Poisson log-linear model doing marginally better.
Amongst the Bradley-Terry specifications, the best performing one is AFD, which models strength differences through a semi-parametric specification involving general smooth bivariate functions of features and season time. Similar but lower predictive performance was achieved by the Bradley-Terry specification that models team strength in terms of linear functions of season time. Overall, the inclusion of features delivered better predictive performance than the simpler Bradley-Terry specifications. In effect, information is gained by relaxing the assumption that each team has constant strength over the season and across feature values. The fact that the models with time-varying components performed best within the Bradley-Terry class of models indicates that enriching models with time-varying specifications can deliver substantial improvements in the prediction of soccer outcomes.
All models considered in this paper have been evaluated using a novel, context-specific validation framework that accounts for the temporal dimension in the data and tests the methods under gradually increasing information in the training data. The resulting experiments are then pooled together using meta-analysis in order to account for the differences in the uncertainty of the validation criterion values by weighting them accordingly.
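As an illustration, under a fixed-effect working model the pooled criterion value is the inverse-variance weighted average of the per-experiment estimates. The paper cites DerSimonian and Laird (1986), whose random-effects estimator reduces to this form when the between-experiment variance is zero; the sketch below is a hypothetical simplification, not the exact pooling used in the validation framework:

```python
def pooled_estimate(values, std_errors):
    """Fixed-effect (inverse-variance) pooling of per-experiment
    validation criterion values, under the working assumption of
    independence between experiments.

    Returns the pooled value and its standard error.
    """
    # Experiments with smaller standard errors receive larger weights.
    weights = [1.0 / se ** 2 for se in std_errors]
    total = sum(weights)
    pooled = sum(w * v for w, v in zip(weights, values)) / total
    return pooled, total ** -0.5
```

This weighting downplays early experiments, whose small training sets yield noisier criterion estimates, relative to later, better-informed ones.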
The meta-analysis model we employed operates under the working assumption of independence between the estimated validation criterion values from each experiment. This is at best a crude assumption in cases like ours, where training data may be shared between experiments. Furthermore, the validation framework was designed to explicitly estimate the performance of each method only for a prespecified window of time in each league, which we have set close to the window where the MLS challenge submissions were being evaluated. As a result, the conclusions we present are not generalizable beyond the specific time window that was considered. Despite these shortcomings, the results in Table 5 show that the validation framework delivered accurate estimates of the actual predictive performance of each method, as the estimated average predictive performances and the actual performances on the test set (containing matches between 31st March and 10th April, 2017) were very close.
The main focus of this paper is to provide a workflow for predicting soccer outcomes, and to propose various alternative models for the task. Additional feature engineering and selection, as well as alternative fitting strategies, can potentially increase performance and are worth pursuing. For example, ensemble methods aimed at improving predictive accuracy, such as boosting, bagging, or model averaging (for an overview, see Dietterich, 2000), or probability calibration, could be utilized to boost the performance of the classifiers that were trained in this paper.
A challenging aspect of modeling soccer outcomes is devising ways to borrow information across different leagues. The two best performing models (HPL and AFD) are extremes in this respect; HPL is trained on each league separately while AFD is trained on all leagues simultaneously, ignoring the league that teams belong to. Further improvements in predictive quality can potentially be achieved by using a hierarchical model that takes into account which league teams belong to but also allows for sharing of information between leagues.
8 Supplementary material
The supplementary material document contains two sections. Section 1 provides plots of the number of matches per number of goals scored by the home and away teams, by country, for a variety of arbitrarily chosen countries. These plots provide evidence of a home advantage. Section 2 details approaches for obtaining team rankings (feature 13 in Table 2) based on the outcomes of the matches they played so far.
9 Acknowledgements and authors’ contributions
This work was supported by The Alan Turing Institute under the EPSRC grant EP/N510129/1.
The authors are grateful to Petros Dellaportas, David Firth, István Papp, Ricardo Silva, and Zhaozhi Qian for helpful discussions during the challenge. Alkeos Tsokos, Santhosh Narayanan and Ioannis Kosmidis have defined the various BradleyTerry specifications, and devised and implemented the corresponding estimation procedures and the validation framework. Gianluca Baio developed the hierarchical Poisson loglinear model and the associated posterior inference procedures. Mihai Cucuringu did extensive work on feature extraction using ranking algorithms. Gavin Whitaker carried out core data wrangling tasks and, along with Franz Király, worked on the initial data exploration and helped with the design of the estimationprediction pipeline for the validation experiments. Franz Király also contributed to the organisation of the team meetings and communication during the challenge. All authors have discussed and provided feedback on all aspects of the challenge, manuscript preparation and relevant data analyses.
References
 Agresti (2015) Agresti, A. (2015). Foundations of Linear and Generalized Linear Models. Wiley.
 Baio and Blangiardo (2010) Baio, G. and Blangiardo, M. (2010). Bayesian hierarchical model for the prediction of football results. Journal of Applied Statistics, 37(2):253–264.
 Berrar et al. (2017) Berrar, D., Dubitzky, W., Davis, J., and Lopes, P. (2017). Machine Learning for Soccer. Retrieved from osf.io/ftuva.
 Bradley and Terry (1952) Bradley, R. A. and Terry, M. E. (1952). Rank Analysis of Incomplete Block Designs: I. The Method of Paired Comparisons. Biometrika, 39(3/4):502–537.
 Byrd et al. (1995) Byrd, R. H., Lu, P., Nocedal, J., and Zhu, C. (1995). A limited memory algorithm for bound constrained optimization. SIAM J. Scientific Computing, 16.
 Cattelan et al. (2013) Cattelan, M., Varin, C., and Firth, D. (2013). Dynamic Bradley-Terry modelling of sports tournaments. Journal of the Royal Statistical Society: Series C (Applied Statistics), 62(1):135–150.
 Davidson (1970) Davidson, R. R. (1970). On Extending the Bradley-Terry Model to Accommodate Ties in Paired Comparison Experiments. Journal of the American Statistical Association, 65(329).
 DerSimonian and Laird (1986) DerSimonian, R. and Laird, N. (1986). Metaanalysis in clinical trials. Control Clin Trials, 7(3).
 Dietterich (2000) Dietterich, T. G. (2000). Ensemble Methods in Machine Learning, pages 1–15. Springer Berlin Heidelberg, Berlin, Heidelberg.
 Dixon and Coles (1997) Dixon, M. J. and Coles, S. G. (1997). Modelling Association Football Scores and Inefficiencies in the Football Betting Market. Applied Statistics, 46(2).
 Efron (1982) Efron, B. (1982). The jackknife, the bootstrap and other resampling plans. SIAM.
 Epstein (1969) Epstein, E. S. (1969). A scoring system for probability forecasts of ranked categories. Journal of Applied Meteorology, 8(6):985–987.
 Firth (2005) Firth, D. (2005). BradleyTerry Models in R. Journal of Statistical Software, 12(1).
 Gneiting and Raftery (2007) Gneiting, T. and Raftery, A. E. (2007). Strictly Proper Scoring Rules, Prediction, and Estimation. Journal of the American Statistical Association, 102(477):359–378.
 Golub et al. (1979) Golub, G. H., Heath, M., and Wahba, G. (1979). Generalized Cross-Validation as a Method for Choosing a Good Ridge Parameter. Technometrics, 21(2).
 Karlis and Ntzoufras (2003) Karlis, D. and Ntzoufras, I. (2003). Analysis of sports data by using bivariate Poisson models. Journal of the Royal Statistical Society D, 52:381–393.
 Király and Qian (2017) Király, F. J. and Qian, Z. (2017). Modelling Competitive Sports: Bradley-Terry-Élő Models for Supervised and On-Line Learning of Paired Competition Outcomes. preprint at arXiv:1701.08055, pages 1–53.
 Lindgren and Rue (2015) Lindgren, F. and Rue, H. (2015). Bayesian spatial modelling with R-INLA. Journal of Statistical Software, Articles, 63(19):1–25.
 Maher (1982) Maher, M. J. (1982). Modelling association football scores. Statistica Neerlandica, 36(3):109–118.
 Murphy (1969) Murphy, A. H. (1969). On the "ranked probability score". Journal of Applied Meteorology, 8(6):988–989.
 Rue et al. (2009) Rue, H., Martino, S., and Chopin, N. (2009). Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations. Journal of the Royal Statistical Society B, 71:319–392.
 Wahba (1990) Wahba, G. (1990). Spline Models for Observational Data. Society for Industrial and Applied Mathematics.
 Wikipedia (2018) Wikipedia (2018). UEFA coefficient — Wikipedia, the free encyclopedia. http://en.wikipedia.org/w/index.php?title=UEFA%20coefficient&oldid=819064849. [Online; accessed 09February2018].
 Wood (2003) Wood, S. N. (2003). Thin plate regression splines. Journal of the Royal Statistical Society. Series B: Statistical Methodology, 65(1).
 Wood (2006) Wood, S. N. (2006). Generalized Additive Models: an introduction with R. CRC Press.