Optimal Combination Forecasts on Retail Multi-Dimensional Sales Data

Time series data in the retail world are particularly rich in terms of dimensionality, and these dimensions can be aggregated in groups or hierarchies. Valuable information is nested in these complex structures, which helps to predict the aggregated time series data. From a portfolio of brands under HUUB’s monitoring, we selected two to explore their sales behaviour, leveraging the grouping properties of their product structure. After forecasting each level of the hierarchy with statistical models, namely SARIMA, an optimal combination approach was used to generate more consistent forecasts at the higher levels. Our results show that the proposed methods can indeed capture nested information in the more granular series, helping to improve the forecast accuracy of the aggregated series. The Weighted Least Squares (WLS) method surpasses all other methods proposed in the study, including the Minimum Trace (MinT) reconciliation.

I Introduction

There are numerous potential benefits of using forecasting models in fashion companies, such as a reduction of the bullwhip effect, fewer supplier production difficulties, a more effective retailer sourcing strategy, and a reduction of lost sales and markdowns with a consequent increase of profit margins, as described by [1].

Fashion markets are, however, volatile and hard to predict. [1] points out the different perspectives and requirements to bear in mind when developing a forecasting model to suit their specificities. There are two main forecast horizons to consider: medium term, to plan sourcing and production, and short term, to plan replenishment. Product life cycles are very short in the fashion industry and there are additional constraints such as "one shot" supply [1]. This shows the relevance of forecasting in determining purchase quantities. At the same time, there are basic and best-selling items with completely different life and purchasing cycles. Due to these short life cycles, historical data of the disaggregated product structure are ephemeral, hence the importance of forecasting different levels of aggregation. The strong seasonality also arises as one of the important behaviours to model. The sales data generated by companies are therefore highly dimensional, meaning that they can be aggregated and disaggregated by a significant number of dimensions, e.g. product structure, geography, etc.

Several studies show that forecasting aggregates from their disaggregates results in better forecasts than using the best individual models [2, 3, 4]. The most common approaches to leverage the information present in each aggregation level, hierarchy or group were initially the top-down and bottom-up approaches. [5] proposed a new method for combining different levels coherently across the aggregation structure, resulting in what they designated optimal forecasts, which proved better than the standard methods. The work by [3] was one of the first applications of the hierarchical forecasting model, focusing on tourism demand time series data. The data were disaggregated by geographical region and by purpose of travel, thus forming a natural hierarchy of time series to reconcile. The potential of this optimal combination method was demonstrated immediately in the first attempt, surpassing the bottom-up and all the top-down approaches, with the exception of the top-down approach based on forecast proportions. Later, [6] presented improvements to the method and showed that the optimal combination method outperformed all the traditional methods.

Another important discussion in the field is the comparison between statistical models and machine learning. [7] shared the findings on the M3 competition, where a large subset of 1045 monthly time series was used, and statistical methods dominated across all the accuracy measures and forecasting horizons. More recently, the M4 competition has shown the potential of hybrid and combination models, using both statistical and machine learning models, which dominated the top 10 of the competition [8].

Our work focuses on statistical models to build solid baseline forecasting models and optimally reconcile them using a hierarchical approach. This choice is also sustained by the work being done in the field, where machine learning models still do not match the performance of statistical models when only a pure time series is provided.

Section II describes the theory and previous studies related to the methodologies implemented in this work. Section III lists the performance measures used to analyse and compare our models. In Section IV we present the dataset in study and in Section V we describe the methodology applied. Our results and discussion are detailed in Section VI and our conclusions and future work are summarised in Section VII.

II Related Work

II-A Data Pre-processing

Different approaches can be taken to pre-process time series data in order to stationarize, normalize, detrend and deseasonalize it. Pre-processing is an important stage, as it has been shown that forecasting models perform better on pre-processed data [8]. Indeed, the more stationary and the closer to a normal distribution a time series is, the easier it will be to model accurately. For a time series to be stationary, its properties, such as mean, variance and autocorrelation, should be constant over time. Various transformations can be applied to data for this purpose. The Box-Cox transformation is used to modify the distributional shape of a set of data in order to stabilize variance and make the data more normal distribution-like, so that tests and confidence limits that require normality can be appropriately used. It is defined by:

$$ y_t^{(\lambda)} = \begin{cases} \dfrac{y_t^{\lambda} - 1}{\lambda}, & \lambda \neq 0 \\[4pt] \ln(y_t), & \lambda = 0 \end{cases} $$

where $\lambda$ is the transformation power parameter and $y_t$ is a series of strictly positive values. Hence, the Box-Cox transformation can only be applied to positive data.

A Box-Cox transformation with $\lambda = 0$ is equivalent to a simple logarithmic transformation.
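The Box-Cox transformation can be sketched in a few lines of Python; the `box_cox` helper and the sample data below are ours, for illustration only:

```python
import math

def box_cox(y, lam):
    """Box-Cox transform of a list of strictly positive values."""
    if any(v <= 0 for v in y):
        raise ValueError("Box-Cox requires strictly positive data")
    if lam == 0:
        return [math.log(v) for v in y]  # the logarithmic special case
    return [(v ** lam - 1) / lam for v in y]

sales = [12.0, 30.0, 7.0, 55.0]
print(box_cox(sales, 0.5))  # square-root-style variance stabilisation
print(box_cox(sales, 0))    # simple logarithmic transformation
```

Applying it with `lam=0` reproduces the logarithmic transformation mentioned above.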

II-B Model description

The Autoregressive Integrated Moving Average (ARIMA) or Box Jenkins methodology is one of the most popular and frequently used stochastic time series models. The time series is assumed to be linear and follows a particular known statistical distribution, such as the normal distribution, which can make it inadequate in some practical situations [9]. However, its flexibility to represent several varieties of time series with simplicity and the optimization of the model building process render the Box-Jenkins methodology as a well-suited baseline approach [9].

In an AR model, the future value of a variable is assumed to be a linear combination of past observations and a random error together with a constant term [9]. Mathematically:

$$ y_t = c + \sum_{i=1}^{p} \varphi_i \, y_{t-i} + \varepsilon_t $$

where $y_t$ is the model value; $\varepsilon_t$ is the random error (or random shock) at time period $t$; $\varphi_i$ are model coefficients and $c$ is a constant; $p$ is the order of the model, also called the lag.
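As an illustration of the AR formulation, a one-step-ahead prediction is just a weighted sum of the last $p$ observations plus a constant. The helper below is our own sketch, with arbitrary (not fitted) coefficients:

```python
def ar_predict(history, coeffs, c=0.0):
    """One-step-ahead AR(p) prediction: c + sum_i phi_i * y_{t-i}."""
    p = len(coeffs)
    if len(history) < p:
        raise ValueError("need at least p past observations")
    # reversed so that coeffs[0] (phi_1) multiplies the most recent value
    return c + sum(phi * y for phi, y in zip(coeffs, reversed(history[-p:])))

# AR(2) with phi_1 = 0.6, phi_2 = 0.3 and constant c = 1.0 (illustrative values)
print(ar_predict([10.0, 12.0, 11.0], [0.6, 0.3], c=1.0))
```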

The MA model uses past errors as the explanatory variables [9]. Mathematically:

$$ y_t = \mu + \sum_{j=1}^{q} \theta_j \, \varepsilon_{t-j} + \varepsilon_t $$

where $\mu$ is the mean of the series; $\theta_j$ are the model parameters; $q$ is the order of the model. The random errors, or random shocks, are assumed to be white noise, i.e. independent and identically distributed (i.i.d.) random variables with zero mean and a constant variance [9].

ARMA is a combination of the autoregressive and moving average models. Mathematically:

$$ y_t = c + \sum_{i=1}^{p} \varphi_i \, y_{t-i} + \sum_{j=1}^{q} \theta_j \, \varepsilon_{t-j} + \varepsilon_t $$
The ARMA model can only be used for stationary time series data. However, in practice many time series, such as those related to socio-economic and business activity, show non-stationary behaviour. Time series that contain trend and seasonal patterns are also non-stationary in nature. For this reason, the ARIMA model is proposed.

The ARIMA model is a generalization of an ARMA model to include the case of non-stationarity. The non-stationary time series are made stationary by applying finite differencing of the data points. Mathematically it can be written as:

$$ \varphi_p(B)\,(1 - B)^d\, y_t = c + \theta_q(B)\,\varepsilon_t $$

where $B$ is the lag or backshift operator, defined as $B y_t = y_{t-1}$, and

$$ \varphi_p(B) = 1 - \varphi_1 B - \cdots - \varphi_p B^p, \qquad \theta_q(B) = 1 + \theta_1 B + \cdots + \theta_q B^q $$

with $p$, $d$ and $q$ being integers that refer to the order of the autoregressive, integrated and moving average parts of the model, respectively. The integer $d$ controls the level of differencing.
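The finite differencing used to make a series stationary can be sketched as follows (our own helper; a lag of 1 removes trend, while a larger lag removes a seasonal pattern of that period):

```python
def difference(series, lag=1):
    """Difference a series at the given lag: y'_t = y_t - y_{t-lag}."""
    return [series[t] - series[t - lag] for t in range(lag, len(series))]

y = [5, 7, 6, 9, 8, 11, 10, 13]
print(difference(y))         # first difference, removes trend
print(difference(y, lag=4))  # seasonal difference for a period-4 pattern
```

Note that each differencing shortens the series by `lag` points, which is why the criteria in Section II-C cannot compare models with different differencing orders.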

The Seasonal Autoregressive Integrated Moving Average (SARIMA) model is the generalization of the ARIMA model for seasonality. In this model, the seasonality is removed by seasonally differencing the series, i.e. by computing the difference between one observation and the corresponding observation from the previous year (or season): $y'_t = y_t - y_{t-m}$. For a monthly time series $m = 12$, for a quarterly time series $m = 4$, etc.

Mathematically, a SARIMA model is formulated as:

$$ \varphi_p(B)\,\Phi_P(B^m)\,(1 - B)^d\,(1 - B^m)^D\, y_t = c + \theta_q(B)\,\Theta_Q(B^m)\,\varepsilon_t $$

where $P$, $D$ and $Q$ are the seasonal orders of the autoregressive, integrated and moving average parts of the model, respectively, and $m$ is the seasonal period.

II-C Model Evaluation

Good models are obtained by minimising the Akaike Information Criterion (AIC), the corrected Akaike Information Criterion (AICc) or the Schwarz Bayesian Information Criterion (BIC). The AICc was developed as a bias-corrected version of the AIC, better suited for short time series, where the AIC tends to select more complex models with too many predictors. [10] indicates a preference for the AICc to compare models.

Note that the AIC, AICc and BIC criteria should not be used to compare the performance of models with different differencing orders. The reason is that these criteria are all based on the likelihood of the model, and this likelihood differs between a series that is differenced and one that is not. These criteria can only directly compare models with the same seasonal and first differencing orders.

If there are significant correlations between different lags of the data, or the residuals of a model, this indicates that there are still important features in the data that the model is not reproducing well and thus there is valid information left to learn. [11] proposed the Ljung-Box (LB) test to evaluate the overall randomness of data based on a number of lags. It is commonly used to check whether the residuals from a time series resemble white noise, i.e. whether the relevant information was captured by the model. It is based on the statistic $Q^*$, which can be written as:

$$ Q^* = n(n + 2)\sum_{k=1}^{h}\frac{\hat{r}_k^2}{n - k} $$

where $n$ is the length of the time series, $\hat{r}_k$ is the autocorrelation coefficient of the residuals at lag $k$ and $h$ is the number of lags to test. $Q^*$ approximately follows a chi-squared distribution with $h - K$ degrees of freedom, where $K$ is the number of parameters of the estimated model. Large values of $Q^*$ indicate that there are significant autocorrelations in the residual series.

The null hypothesis of the LB test states that the data are independently distributed, showing no serial correlation. A small p-value thus rejects the null hypothesis, indicating the possibility of non-zero autocorrelations in the residuals.
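The Ljung-Box statistic is straightforward to compute directly from a residual series; the following is our own pure-Python sketch (in practice one would use a library implementation, which also returns the p-value):

```python
def autocorr(x, k):
    """Sample autocorrelation of x at lag k."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[t] - mean) * (x[t - k] - mean) for t in range(k, n))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

def ljung_box_q(residuals, h):
    """Ljung-Box statistic over lags 1..h; large values flag autocorrelation."""
    n = len(residuals)
    return n * (n + 2) * sum(autocorr(residuals, k) ** 2 / (n - k)
                             for k in range(1, h + 1))

# A strongly alternating residual series has large autocorrelations at every
# lag, so the statistic lands far in the tail of the chi-squared distribution
print(ljung_box_q([1, -1] * 10, 3))
```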

II-D Hierarchical and Grouped time series

Univariate time series data rely only on time to express patterns. When aggregated data are available, there are potential covariance patterns nested in the hierarchy. [12] summarizes guidelines on using the more traditional hierarchical forecasting approaches: top-down, bottom-up and a combination of both, the middle-out approach. Among these three, top-down and middle-out rely on a unique hierarchy to assign the weights from the higher aggregated series to the lower ones, so they are not suitable for grouped time series. Grouped time series are built based on a structure that disaggregates based on factors that can be both nested and crossed [10].

A group can have several levels, starting at the most aggregated level of the data and disaggregating it by attributes, e.g. A and B, forming the series $y_A$ and $y_B$, or by a second structure of attributes, e.g. X and Y, forming the series $y_X$ and $y_Y$. Crossing the two structures, this would generate eight different series below the top level: $y_A$, $y_B$, $y_X$, $y_Y$ and, at the bottom level, $y_{AX}$, $y_{AY}$, $y_{BX}$ and $y_{BY}$.
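To make the grouped structure concrete, the sketch below (with hypothetical numbers and labels) builds each aggregate as a 0/1 combination of the bottom-level series, which is exactly what the rows of a summing matrix encode:

```python
# Bottom-level series in a fixed order: AX, AY, BX, BY (two observations each)
bottom = {"AX": [3, 4], "AY": [1, 2], "BX": [5, 5], "BY": [2, 1]}
order = ["AX", "AY", "BX", "BY"]

# Each aggregate is a 0/1 row over the bottom series: a row of the summing matrix
S_rows = {
    "Total": [1, 1, 1, 1],
    "A": [1, 1, 0, 0], "B": [0, 0, 1, 1],  # first attribute structure
    "X": [1, 0, 1, 0], "Y": [0, 1, 0, 1],  # second, crossed structure
}

def aggregate(name):
    """Sum the bottom-level series selected by the aggregate's 0/1 row."""
    row = S_rows[name]
    length = len(next(iter(bottom.values())))
    return [sum(row[i] * bottom[order[i]][t] for i in range(len(order)))
            for t in range(length)]

print(aggregate("Total"))  # sums all four bottom series
print(aggregate("X"))      # AX + BX, crossing the two attribute structures
```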

[5] proposed a new method to output more coherent forecasts, adding them up consistently within the aggregation structure. This method also proposes a generalized representation, from which the earlier approaches are obtained as special cases. As it is based on the reconciliation of the independent forecasts of all the aggregation levels, the reconciled forecasts are always more consistent than when a bottom-up strategy is used. This method is explored in more detail in the following section.

II-E Optimal Forecast Reconciliation

The method consists in optimally combining and reconciling all forecasts at all levels of the hierarchy. To combine the independent forecasts, a linear regression is used to guarantee that the revised forecasts are as close as possible to the independent forecasts while maintaining coherency. The independent forecasts are reconciled based on:

$$ \hat{\mathbf{y}}_h = \mathbf{S}\,\boldsymbol{\beta}_h + \boldsymbol{\varepsilon}_h $$

where $\hat{\mathbf{y}}_h$ is a matrix of $h$-step-ahead independent forecasts for all series, stacked in the same order as the original data, $\mathbf{S}$ is a summing matrix, $\boldsymbol{\beta}_h$ is the unknown mean of the most disaggregated series at the bottom level, and $\boldsymbol{\varepsilon}_h$ represents the reconciliation errors. [5] also proved that the resulting forecasts are unbiased, which is not the case in the top-down approach.

[6] proposed a weighted least-squares (WLS) reconciliation approach, concluding that it outperforms the ordinary least squares (OLS). In their study, the only exception was the very top level, where the OLS method did slightly better. The notation for the reconciliation approach was presented differently in [13], referring to the new method as the MinT (minimum trace) reconciliation, represented as:

$$ \tilde{\mathbf{y}}_h = \mathbf{S}\left(\mathbf{S}'\mathbf{W}_h^{-1}\mathbf{S}\right)^{-1}\mathbf{S}'\mathbf{W}_h^{-1}\hat{\mathbf{y}}_h $$

where $\tilde{\mathbf{y}}_h$ is the reconciled $h$-step-ahead forecast and $\mathbf{W}_h$ is the variance-covariance matrix of the $h$-step-ahead base forecast errors. As it is challenging to estimate $\mathbf{W}_h$, [13] proposed five different approximations for estimating this parameter, some of which we implemented in our work.

Besides the MinT approach, we considered the approximation case where it is assumed that $\mathbf{W}_h = k_h \mathbf{I}$, where $k_h > 0$ and $\mathbf{I}$ is the identity matrix. In reality, this assumption collapses MinT to the early proposal of [5], a simple OLS estimator. This is a very strong assumption, as the base forecast errors are considered uncorrelated and equivariant, which cannot be satisfied in hierarchical and grouped time series.

We also considered the case where MinT can be described as a WLS estimator [13], which can be written as $\mathbf{W}_h = k_h \operatorname{diag}(\hat{\mathbf{W}}_1)$ for all $h$, where $k_h > 0$ and

$$ \hat{\mathbf{W}}_1 = \frac{1}{T}\sum_{t=1}^{T} \mathbf{e}_t \mathbf{e}_t' $$

where $\mathbf{e}_t$ is a vector of the residuals of the models that generated the base forecasts, stacked in the same order as the data. This method scales the base forecasts using the variance of the residuals, hence it works as a weighted least squares estimator using variance scaling.
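The reconciliation formula reduces to a small generalized least squares problem. The following self-contained sketch (our own pure-Python helpers on a toy two-level hierarchy) applies it with a diagonal weight matrix, so identity weights give the OLS case and residual-variance weights give the WLS case:

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def solve(A, B):
    """Solve A X = B by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row_a[:] + row_b[:] for row_a, row_b in zip(A, B)]
    for i in range(n):
        pivot = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[pivot] = M[pivot], M[i]
        for r in range(n):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [mr - f * mi for mr, mi in zip(M[r], M[i])]
        M[i] = [v / M[i][i] for v in M[i]]
    return [row[n:] for row in M]

def reconcile(S, w_diag, y_hat):
    """Reconciliation y_tilde = S (S' W^-1 S)^-1 S' W^-1 y_hat with diagonal W.

    w_diag = [1, 1, ...] gives the OLS case; base-forecast residual
    variances on the diagonal give the WLS (variance scaling) case."""
    St = transpose(S)
    Winv_S = [[s / w for s in row] for w, row in zip(w_diag, S)]
    Winv_y = [[y / w] for y, w in zip(y_hat, w_diag)]
    beta = solve(matmul(St, Winv_S), matmul(St, Winv_y))  # bottom-level estimates
    return [row[0] for row in matmul(S, beta)]

# Toy hierarchy Total = A + B with incoherent base forecasts (100 != 60 + 30)
S = [[1, 1], [1, 0], [0, 1]]
print(reconcile(S, [1.0, 1.0, 1.0], [100.0, 60.0, 30.0]))  # coherent OLS result
```

After reconciliation the top forecast equals the sum of the bottom two, which is the coherency property the method guarantees.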

III Performance Measures

III-A Mean Absolute Scaled Error (MASE)

MASE is a measure of forecast accuracy which scales the errors based on the naïve forecast method. If MASE is smaller than 1, the forecast method is better than the average one-step naïve forecast. MASE is scale-free, so it can be used to compare forecast accuracy across different series. Its mathematical expression is given by:

$$ \mathrm{MASE} = \frac{1}{h}\sum_{t=1}^{h}\left|\frac{e_t}{Q}\right| $$

where $e_t$ is the forecast error at time $t$, $h$ is the forecast horizon and $Q$ is defined as:

$$ Q = \frac{1}{T - m}\sum_{t=m+1}^{T}\left|y_t - y_{t-m}\right| $$

where $m$ is the seasonal period and $T$ is the length of the time series.

III-B Root Mean Squared Error (RMSE)

RMSE is one of the most used metrics to evaluate forecast models. As it is not normalized, it can only be used to compare models on the same dataset. It is given by:

$$ \mathrm{RMSE} = \sqrt{\frac{1}{h}\sum_{t=1}^{h} e_t^2} $$
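Both metrics are easy to compute directly; the helpers below are our own illustrative implementations, applied to a toy training series and forecasts:

```python
import math

def rmse(actual, forecast):
    """Root mean squared error of a forecast over its horizon."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def mase(actual, forecast, training, m=1):
    """MASE: out-of-sample MAE scaled by the in-sample naive (lag-m) MAE."""
    q = sum(abs(training[t] - training[t - m]) for t in range(m, len(training)))
    q /= len(training) - m
    mae = sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)
    return mae / q

train = [10, 12, 14, 13, 15, 17]
actual, forecast = [18, 16], [17, 17]
print(rmse(actual, forecast))
print(mase(actual, forecast, train, m=1))  # < 1 means better than the naive method
```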
IV Dataset

HUUB is a logistics company for the fashion industry. It offers a platform for fashion brands that integrates the full supply chain process in one place, simplifying operations from production to customer service. HUUB’s platform makes use of data to boost business growth and forecasting is a key aspect of its analysis. It has today over 50 brands under its monitoring, selling in 4 different channels: eCommerce, marketplaces, stores and retail.

The time series of all HUUB brands have been analyzed in order to explore their sales behaviour. Only brands with more than two years of data and consistent sales on a weekly basis were considered for forecasting analysis. For an initial study, we selected only the eCommerce sales channel of two of the largest and most representative brands, ranked in the top 10. The aim of this data reduction is to narrow the scope of the problem, since the purpose is to have a first approach and a strong baseline for the future. Furthermore, combining only forecasts of the two most prominent brands helps to improve the forecast performance.

Fig. 1: Time series profile of HUUB eCommerce selected brands.

Figure 1 plots the selected eCommerce brands' data series. The data are aggregated in weeks and range from 11-12-2016 (Sunday), the earliest week for which we have complete data for all series included in the analysis, to 19-01-2019 (Saturday).

V Methods

V-A Models specification

[14] proposed a practical and pragmatic approach to build ARIMA models. Based on that methodology, [15] defined four iterative steps for the univariate SARIMA model.

  1. Model identification: This step involves the selection of the parameters to include in the model, both seasonal and non-seasonal, based on the autocorrelation function (ACF) and the partial autocorrelation function (PACF).

  2. Parameter estimation: the previously identified parameters are then estimated using Least Squares or Maximum Likelihood and a first selection of models is conducted using information criteria.

  3. Diagnosis of the fitness of the model: The model is diagnosed using the statistical properties of the error terms, using Ljung-Box statistic test to check the adequacy. When the errors are normally distributed, we can move towards the next step.

  4. Forecasting and validation: The model is validated using out-of-sample data and applied to forecast the future values.
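Step 1 relies on the sample autocorrelations of the series. A minimal sketch of an ACF computation (our own helper, with illustrative data) is:

```python
def acf(x, max_lag):
    """Sample autocorrelation function up to max_lag, as used in identification."""
    n = len(x)
    mean = sum(x) / n
    den = sum((v - mean) ** 2 for v in x)
    return [sum((x[t] - mean) * (x[t - k] - mean) for t in range(k, n)) / den
            for k in range(1, max_lag + 1)]

# A period-4 pattern shows up as a strong positive spike at lag 4
y = [1, 5, 2, 8] * 6
print(acf(y, 4))
```

Significant spikes at seasonal lags in such a plot are what motivate the seasonal terms of the SARIMA models chosen below.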

V-B Grouped Time Series

The structure used for this grouped time series starts at the top level with the time series that sums up all series at lower levels with different attributes. It is disaggregated by brand, brand 1 and brand 2, in a hierarchical structure. Besides this, we have five different representations of the aggregation, given the remaining attributes considered in this study: gender (Male, Female or Unisex), age group (Baby, Kid, Baby Kid, Teen or Adult), product type (Clothing, Footwear, Accessories, Homewear, Beachwear, Swimwear, Swim accessories, Compound, Stationery, Underwear, Nightwear or Sports) and season of the product (Seasonal or Permanent).

Fig. 2: A representation of one of the five level grouped structures: the global time series (Global); brand 1 (brand 1); brand 1 crossed with gender male (1M); brand 1 crossed with gender male and seasonal items (1MS); brand 1 crossed with gender male, seasonal items and age group baby (1MSB); and, finally, brand 1 crossed with gender male, seasonal items, age group baby and product type clothing (1MSBC).

Figure 2 shows one of the alternatives to represent the group structure. It generates a total of 128 time series to be reconciled. When grouping time series, the complexity increases quickly, which requires significantly more computation resources to solve the problem. For each of the time series, we applied the best possible model to forecast that individual series using the auto.arima() function from the R Forecast package [16, 17].

VI Results

VI-A Baseline model - SARIMA

Based on the mathematical expression for the SARIMA model in Section II-B, we can deduce some restrictions on the parameters according to the size of the time series window. For instance, if we have a time series of length 114 data points and choose a SARIMA model whose autoregressive and differencing terms reach further back than 114 periods, the model is unfeasible, since it would be a linear combination of past data points for time periods for which we have no data.

By working out the mathematical expression of the SARIMA model we can find the restrictions on the orders of the SARIMA model parameters as a function of the in-sample window size $w$ used to produce the forecasts. A forecast produced by a SARIMA$(p,d,q)(P,D,Q)_m$ model will only be feasible if the following condition is met:

$$ p + d + m(P + D) \leq w $$
Moreover, the best coefficients for each SARIMA model are found by fitting the model to the time series, and in order to have all the data points needed to compute them, the series should contain at least one full seasonal period for which all the required past data points are available. In other words, fitting a SARIMA model requires the lagged data points the model reaches back to, on top of a full seasonal period, so that the model can be computed over at least one complete seasonal period. If this is not the case, there will be some missing data and the model coefficients will not be as well fit to the data as possible. The criteria used to estimate the quality of the model, such as the AIC and BIC, will likewise be poorly estimated, given the missing data.

Therefore, the more general restriction for the SARIMA model as a function of the time series length is:

$$ p + d + m(P + D + 1) \leq w $$
Our time series has a length of 110 data points (weeks) and a seasonal period of 52 weeks. We want to test a forecast horizon of 4 weeks, so we divide the time series into 106 data points for finding the best SARIMA parameters and coefficients (the training series) and 4 data points to test the forecast retrieved by that model (the testing series). The length of the training series limits our SARIMA parameters in the following way: $P + D \leq 1$ and $p + d \leq 106 - 52(P + D + 1)$.
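These window restrictions can be encoded as a small feasibility check. Note that the exact inequality used below, $p + d + m(P + D + 1) \leq w$, is our reconstruction of the restriction discussed above, so the helper is a sketch under that assumption:

```python
def sarima_feasible(p, d, P, D, m, window):
    """Feasibility check for fitting SARIMA(p,d,q)(P,D,Q)_m on `window` points.

    Assumes (our reading of the restriction) that the deepest lag the model
    reaches back is p + d + m*(P + D), plus one extra full season so the
    coefficients can be fit on at least one complete seasonal period."""
    return p + d + m * (P + D + 1) <= window

# With 106 weekly training points and m = 52:
print(sarima_feasible(1, 0, 1, 1, 52, 106))  # seasonal AR term plus seasonal
                                             # difference reach too far back
print(sarima_feasible(1, 0, 0, 1, 52, 106))  # a single seasonal difference fits
```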

Brand 1

The sales of brand 1 vary considerably with time, which is reflected in a time series with a high variance (see Figure 1). This suggests that a Box-Cox transformation can make it more stationary.

The optimal parameter $\lambda$ can, in principle, be determined by ensuring that the standard deviation of the transformed data is minimal, which is what the BoxCox() function from the R Forecast package ([16, 17]) with lambda="auto" computes. Even though there is a high likelihood that the data will be normally distributed after a Box-Cox transformation with lambda="auto", this is not guaranteed. A quantile-quantile (Q-Q) plot shows the distribution of the data against the expected normal distribution, hence it is an effective diagnostic to determine whether a sample is drawn from a normal distribution. Figure 3 shows the Q-Q plot of the Box-Cox transformed data with lambda="auto" as well as the Q-Q plot of a simple logarithmic transformation ($\lambda = 0$). The more the data points fall on a straight line, the more normally distributed they are; if they deviate from a straight line in any systematic way, the data are unlikely to be drawn from a normal distribution. Figure 3 thus indicates that the logarithmic transformation approximates the data to a normal distribution more effectively than the Box-Cox transformation with lambda="auto". A Box-Cox transformation with $\lambda = 0$ was therefore applied to the data.

Fig. 3: (Left:) Q-Q plot of brand 1 Box-Cox transformed time series with lambda="auto"; (Right:) Q-Q plot of brand 1 Box-Cox transformed time series with $\lambda = 0$, equivalent to a logarithmic transformation. It is clear that the latter series is closer to a normal distribution, hence it is a better transformation.
Fig. 4: Brand 1 time series after the Box-Cox transformation with $\lambda = 0$.

Figure 4 presents the brand 1 time series after the Box-Cox transformation. This series displays a clear seasonal pattern with a cyclic profile with a yearly, or half-yearly, period. A seasonal differencing with a lag of 52 weeks (1 year) or 26 weeks (half a year) should thus improve the time series. Indeed, the time series standard deviation decreases after applying either a 52-week or a 26-week differencing, which improves the series in both cases. A first differencing of the series increases the standard deviation, and a unit root test also indicates that no first differencing is needed; it is thus not expected to improve the modelling of the series.

The analysis of the ACF and PACF of the yearly differenced series indicates that a SARIMA model is a good candidate to reduce the data autocorrelations. This and other SARIMA models were tested and this was the model with the lowest AICc and BIC criteria. A more complex SARIMA model was favoured by the AIC criterion; however, according to the restrictions derived above, that model exceeds the threshold imposed by the size of the training series. Moreover, particularly when the sample size is small, the AIC criterion favours higher order (more complex) models as a result of overfitting the data. The AICc is a correction of the AIC and a more realistic criterion for smaller sample sizes, along with the BIC criterion. For these reasons, and since simpler models tend to be a better choice for accurately forecasting the general behaviour of time series, we chose the simpler model.

The half-year seasonally differenced series was also analysed and a best-fit SARIMA model was found for it. Its RMSE was lower than that of most SARIMA models with a 52-week period, which could indicate a better forecast, but its AIC, AICc and BIC were significantly higher, implying that it provides a worse fit to the time series. The Ljung-Box test of the residuals of these models also yielded lower p-values than for the 52-week period models, signaling that they are worse models. For these reasons, we consider the 52-week period model identified above to be the best SARIMA baseline model for brand 1.

Brand 1 had a major peak of sales in the first quarter of 2017 which did not repeat in 2018 and which was several times above the average of the subsequent peaks (see Figure 1). This sales peak was due to strong campaigning for an online collection launch. It was a one-time event, and the peak affects the SARIMA models by inflating their predictions for the same period of subsequent years. We performed the same analysis with this outlier peak removed and its values interpolated, and the time series forecasting RMSE was reduced by 22%. However, to maintain the method's automation capacity, we chose not to implement any outlier removal. Instead, external variables will be used to handle this type of event in a subsequent article.

Brand 2

The time series for brand 2 has a seasonal profile, which is also clear in the ACF and PACF at lag 52 (see Figure 5). Since the series shows variations which change with time, a transformation can improve the series. Similarly to brand 1, the Box-Cox transformation with $\lambda = 0$ improved the normalization of the time series over lambda="auto".

Fig. 5: Brand 2 original time series and respective ACF and PACF

Since brand 2 displays a clear seasonal profile, it is not stationary and a seasonal difference can improve the series. Even though the standard deviation of the series increases after seasonal differencing, the series becomes more normalized.

The series is also more stationary after a first difference, which decreases the standard deviation. A second difference would increase the standard deviation significantly, so no further differencing was applied to the series.

The ACF and PACF of the differenced series show a significant negative autocorrelation at lag 1, which suggests a mild overdifferencing. This can be counterbalanced with one order of a moving average model applied to the series, which indicates that a SARIMA model with a first-order moving average term is a good candidate for the data.

This and other models, including different differencing orders, were tested and the BIC criterion, which favours simpler models, was used to compare them. The BIC criterion also favoured this SARIMA model out of all models tested.

The residuals of the SARIMA model applied to the brand 2 time series show all autocorrelation lags below the significance threshold, apart from lag 16, which corresponds to roughly a trimester and is thus, most likely, related to the periodic behaviour of the brand. The fact that the residuals still display significant autocorrelations also translates into a relatively low LB p-value. As more data become available and are included in the time series, it will become clearer whether the data autocorrelation at lag 16 is real and, if so, it will be easier to model.

Global time series

A global time series is formed by the sum of the brand 1 and brand 2 time series. The same pre-processing and analysis performed on the brand 1 and 2 time series were applied to the global time series, with the same 4-week forecast horizon.

As expected, the global time series also exhibits a seasonal profile, with a cyclic behaviour and significant autocorrelations at lag 52. This entails a seasonal differencing, hence $D = 1$. The standard deviation of the time series increases when applying a first difference and the unit root test determined that the time series is stationary and needs no first differencing, both advocating for $d = 0$. The model that presented the lowest AIC, AICc and BIC criteria was chosen as the SARIMA baseline.

The residuals of this SARIMA model showed no significant autocorrelations at any lag, which indicates that it is a well-fit model for the global time series. Its residuals and the high LB p-value of 0.67 indicate a reasonable resemblance of the residuals to white noise, further supporting this model.

Fig. 6: Global time series (black) with SARIMA model fitting (blue) and 4-weeks forecasting (red).

The fitting and forecast of the global time series by SARIMA are shown in Figure 6.

Baseline models performance

The baseline models performance summary is presented in Table I.

Data    | AIC    | AICc   | BIC    | LB p-value | MASE | RMSE
Global  | 101.05 | 101.28 | 105.02 | 0.67       | 0.69 | 101.16
Brand 1 | 153.72 | 153.95 | 157.70 | 0.80       | 1.17 | 145.79
Brand 2 |  77.23 |  77.47 |  81.17 | 0.12       | 0.24 |  19.48

TABLE I: Baseline SARIMA models for brand 1, brand 2 and the global time series. The forecast horizon was 4 weeks.

VI-B Grouped Time Series Forecast Reconciliation

As explained in Section V, the five different product attributes selected generated 128 different time series to reconcile. The best possible model parameters and coefficients were determined for each of these individual series using the auto.arima() function [16, 17]. The best individual forecasts were used as input for the reconciliation process. Table II shows the performance metrics on the out-of-sample data: independent (base) forecasts, used as baseline, bottom-up, and the three approaches considered as the optimal reconciliation methods (OLS, WLS and MinT, proposed by [6] and [13]).

All approaches were tested using a horizon of 4 weeks, which was identified by HUUB as the most relevant period range to be forecasted, given the weekly data granularity. The results are presented for the three most aggregated time series, although the grouped time series yielded results for all the possible combinations. This is also a major advantage for the business, making it possible to drill down to more granular dimensions with the reassurance that the forecasts are consistent across them. The decision making process can be significantly empowered, for instance when defining purchase quantities of a specific product type or age group for the next collection.

Overall, the results show that the best performing method for each series can leverage nested information in the group structure and add consistency to the outcome. This is supported by the RMSE of WLS in brand 1, which has a very significant improvement when compared to the baseline. In terms of brand 2, WLS results were very close to the baseline, which can be explained by the simpler grouping structure for this brand, i.e. it is essentially concentrated in one product type, one gender, one age group, one season. For the global time series, the baseline model performed slightly better than the reconciled best model, with an RMSE 5% lower than WLS. Overall, the RMSE indicates that WLS is a robust method for the forecasting of seasonal time series when no further information is known about what causes the time series variations.

The best performing model in terms of MASE metric is the simpler OLS method for brand 1 and global time series. For brand 2, MASE is lower for the baseline model. The MASE error of the WLS is lower than that of the baseline for the global and brand 1 series, and very similar for brand 2.

Adding more brands and more attributes can, in principle, increase the accuracy of the proposed models relative to the baseline; indeed, preliminary work with a disaggregation over more attributes shows a significant improvement.

Among the proposed methods, the one that yielded the best results in terms of RMSE was WLS, outperforming not only the clearly less effective bottom-up approach but also the simpler OLS and the more recent MinT approach.

Data     Model      MASE   RMSE
Global   Baseline   0.69   101.16
Global   Bottom-up  1.16   180.93
Global   OLS        0.63   108.08
Global   WLS        0.67   105.96
Global   MinT       0.72   109.51
Brand 1  Baseline   1.17   145.79
Brand 1  Bottom-up  1.74   189.41
Brand 1  OLS        0.81   113.21
Brand 1  WLS        0.86   107.72
Brand 1  MinT       0.94   110.21
Brand 2  Baseline   0.24   19.48
Brand 2  Bottom-up  0.31   26.07
Brand 2  OLS        0.30   22.33
Brand 2  WLS        0.26   19.55
Brand 2  MinT       0.25   20.18
TABLE II: Model performance on the out-of-sample data for the baseline forecasts, the bottom-up approach, and the approaches considered as optimal reconciliation methods: OLS, WLS and MinT. The lowest RMSE is achieved by the baseline for the global and brand 2 series and by WLS for brand 1.

VII Conclusions and Future Work

This work explored the optimal forecasts method for hierarchical and grouped time series proposed by [5, 13] to forecast the weekly sales of two fashion brands under HUUB's monitoring. The data were disaggregated according to five selected attributes: brand, gender, whether the item is seasonal or permanent, age group and product type. This disaggregation resulted in a grouped structure with 128 individual time series. These series were individually fitted with the best SARIMA model found by the auto.arima() function and were then reconciled following the optimal forecasts method. Three different cases of the optimal forecasts method were explored: OLS, WLS and MinT, as well as the simple bottom-up approach.

The forecast performance of the three most aggregated series was evaluated and compared to the forecasts obtained with baseline SARIMA models. Each of these baseline SARIMA models was found by manually analysing the time series ACF and PACF and through stationarity and differencing tests, in order to determine the best model parameters and coefficients.
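The manual Box-Jenkins identification mentioned above starts from the sample ACF. The paper used standard R tooling for this, so the NumPy sketch below is purely illustrative of the quantity being read off the plots.

```python
import numpy as np

def sample_acf(x, nlags):
    """Sample autocorrelation function, the quantity inspected on an
    ACF plot during manual Box-Jenkins (SARIMA) identification."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([1.0] + [np.dot(x[:-k], x[k:]) / denom
                             for k in range(1, nlags + 1)])

# Tiny worked example: positive lag-1 autocorrelation for an
# increasing series.  A slowly decaying ACF would suggest
# differencing the series (np.diff) before choosing SARIMA orders.
acf = sample_acf([1, 2, 3, 4, 5], 2)  # [1.0, 0.4, -0.1]
```

For the weekly sales data in this study, pronounced ACF spikes around the seasonal lag would motivate the seasonal terms of the SARIMA models.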

The comparison between the optimal forecasts method and the baseline SARIMA models showed that WLS was the reconciliation method with the lowest RMSE for the two brand time series, improving substantially on the baseline for brand 1 and essentially matching it for brand 2. The global time series was slightly better forecasted by the baseline model. Overall, the RMSE indicates that WLS is a robust method for the forecasting of seasonal time series when no further information is known about what causes the time series variations. Preliminary work with a higher-level disaggregation using more attributes has shown a significant improvement in the RMSE of the global time series forecast; these findings will be presented in a subsequent article.

Out of the three explored reconciliation methods, WLS achieved the lowest RMSE, followed by MinT and then OLS. All of them performed better than the less efficient bottom-up method, as expected.

The proposed work was intentionally focused on univariate time series models, providing a strong baseline and framework on which to build more complex forecasting models. Future work will consider the addition of more brands and more attributes (not only product attributes but also others, such as sales market geography structures), increasing the capacity of the model to capture even more nested information in these granular series. One possibility is the use of hybrid models that combine statistical and machine learning approaches, which have been shown to outperform purely statistical ones [8].

Another aspect that our future work will explore is the use of multivariate models, since they can add significant information on the volatile and unexpected behaviour observed in the data.


  1. R. Lenort and P. Besta, “Hierarchical sales forecasting system for apparel companies and supply chains,” Fibres and Textiles in Eastern Europe, vol. 21, no. 6, pp. 7–11, 2013.
  2. C. Capistrán, C. Constandse, and M. Ramos-Francia, “Multi-horizon inflation forecasts using disaggregated data,” Economic Modelling, vol. 27, no. 3, pp. 666–677, 2010.
  3. G. Athanasopoulos, R. A. Ahmed, and R. J. Hyndman, “Hierarchical forecasts for australian domestic tourism,” International Journal of Forecasting, vol. 25, no. 1, pp. 146–166, 2009.
  4. H. L. Shang and S. Haberman, “Grouped multivariate and functional time series forecasting: An application to annuity pricing,” Insurance: Mathematics and Economics, vol. 75, pp. 166–179, 2017.
  5. R. J. Hyndman, R. A. Ahmed, G. Athanasopoulos, and H. L. Shang, “Optimal combination forecasts for hierarchical time series,” Computational Statistics & Data Analysis, vol. 55, no. 9, pp. 2579–2589, 2011.
  6. R. J. Hyndman, A. J. Lee, and E. Wang, “Fast computation of reconciled forecasts for hierarchical and grouped time series,” Computational Statistics & Data Analysis, vol. 97, pp. 16–32, 2016.
  7. S. Makridakis, E. Spiliotis, and V. Assimakopoulos, “Statistical and machine learning forecasting methods: Concerns and ways forward,” PLoS ONE, vol. 13, 03 2018.
  8. ——, “The M4 Competition: Results, findings, conclusion and way forward,” International Journal of Forecasting, vol. 34, no. 4, pp. 802–808, 2018.
  9. R. Adhikari and R. K. Agrawal, “An introductory study on time series modeling and forecasting,” CoRR, vol. abs/1302.6613, 2013.
  10. R. J. Hyndman and G. Athanasopoulos, Forecasting: principles and practice.   OTexts, 2018.
  11. G. M. Ljung and G. E. P. Box, “On a measure of lack of fit in time series models,” Biometrika, vol. 65, no. 2, pp. 297–303, 1978.
  12. G. Fliedner, “Hierarchical forecasting: issues and use guidelines,” Industrial Management & Data Systems, vol. 101, no. 1, pp. 5–12, 2001.
  13. S. L. Wickramasuriya, G. Athanasopoulos, and R. J. Hyndman, “Optimal forecast reconciliation for hierarchical and grouped time series through trace minimization,” Journal of the American Statistical Association, pp. 1–45, 2018.
  14. G. E. Box and G. M. Jenkins, Time series analysis: forecasting and control, revised ed.   Holden-Day, 1976.
  15. N. S. Arunraj, D. Ahrens, and M. Fernandes, “Application of SARIMAX model to forecast daily sales in food retail industry,” IJORIS, vol. 7, no. 2, pp. 1–21, 2016.
  16. R. J. Hyndman and Y. Khandakar, “Automatic time series forecasting: the forecast package for R,” Journal of Statistical Software, vol. 26, no. 3, pp. 1–22, 2008.
  17. R. Hyndman, G. Athanasopoulos, C. Bergmeir, G. Caceres, L. Chhay, M. O’Hara-Wild, F. Petropoulos, S. Razbash, E. Wang, and F. Yasmeen, forecast: Forecasting functions for time series and linear models, 2018, r package version 8.4.