A latent variable model to measure exposure diversification in the Austrian interbank market
We propose a statistical model for weighted temporal networks capable of measuring the level of heterogeneity in a financial system. Our model focuses on the level of diversification of financial institutions; that is, whether they are more inclined to distribute their assets equally among partners, or to concentrate their commitment towards a limited number of institutions. Crucially, a Markov property is introduced to capture time dependencies and to make our measures comparable across time. We apply the model to an original dataset of Austrian interbank exposures. The temporal span encompasses the onset and development of the financial crisis in 2008 as well as the beginning of the European sovereign debt crisis in 2011. Our analysis highlights an overall increasing trend for network homogeneity, whereby core banks have a tendency to distribute their market exposures more equally across their partners.
Keywords: Latent Variable Models, Dynamic Networks, Austrian Interbank Market, Systemic Risk, Bayesian Inference
During the past ten years, the EU was hit by two major financial crises. In 2008, the problems originated in the US subprime mortgage market and were partially caused by lax regulation and overly confident debt ratings. The source of the European sovereign debt crisis in 2011, however, was most likely private debt arising from a property bubble and the resulting government bailouts. The lack of a common fiscal union in the EU aggravated the situation, and the European Central Bank eventually provided cheap loans to maintain a steady flow of funds between EU banks. During these turbulent times, European banks faced high levels of uncertainty. It was not clear which counterparties would remain solvent in the foreseeable future, and even sovereign bonds were no longer considered the safest option. In the face of these unfavorable conditions, banks were forced to reconsider their interbank investments and re-adjust their portfolios to account for the changed economic situation.
In this paper, we study interbank exposures in Austria between the spring of 2008 and the autumn of 2011. Namely, we introduce a dynamic network model to analyze banks' exposure diversification patterns as well as the overall trend towards diversification in the Austrian interbank market. To accomplish this task, we create an original latent variable model that allows one to analyze weighted networks evolving over time. This approach provides us with a model-based measure of systemic risk, locally for each bank but also globally for the financial system as a whole. In our application, we show that our model-based measure provides a qualitatively different view when compared with basic descriptive statistics.
Our paper contributes to the literature on systemic risk and financial networks. This strand of literature has often focused on the stability of financial systems as well as the possibility of contagious defaults. One of the earliest papers on the topic is the work of Allen and Gale (2000), who showed that the structure of the interbank market is important for the evaluation of possible contagious bankruptcies. Later on, Gai and Kapadia (2010) extended their work from a simple model of four institutions to a financial network of arbitrary size. Other notable papers on systemic risk include, for example, Glasserman and Young (2016) or Acemoglu et al. (2015), while Upper (2011) provides an excellent survey of simulation studies on the subject. With respect to questions on diversification, we refer the reader to Elliott et al. (2014) and Frey and Hledik (2014), where a nontrivial relationship between diversification and contagious defaults is presented, or to Goncharenko et al. (2015), where banks endogenously choose their level of diversification in an equilibrium setting. Our paper relates to these works, since we study the structure of a financial network while exploring questions regarding diversification at the same time. We further add to these papers by introducing a new generative mechanism and a modelling framework in which the diversification and homogeneity of the system can be studied inter-temporally.
Another strand of literature we connect to consists of articles on latent variable modelling of network data. Prominent examples of such latent variable approaches include the latent position models of Hoff et al. (2002), later extended to the dynamic framework by Sarkar and Moore (2006), and the latent stochastic blockmodels (Nowicki and Snijders, 2001), extended to a dynamic framework by Yang et al. (2011), Xu and Hero (2014) and Matias and Miele (2017), among others. These latent variable models possess a number of desirable theoretical features, as illustrated in Rastelli et al. (2016) and Daudin et al. (2008), respectively.
Our approach has a number of similarities with other recent papers that apply a latent variable framework to model networks. These include Friel et al. (2016), where the authors introduce a dynamic latent position model to measure the financial instability of the Irish Stock Exchange; Sewell and Chen (2016), who introduce a modeling framework for dynamic weighted networks; but also McLaughlin and EmBree (2018), where the authors propose a framework to reconstruct a collaboration network. Further related works include Chakrabarti (2017), where the incentives of Twitter users are analyzed; Ji and Jin (2016), where a meta-analysis of citations in statistics papers is conducted; and Xin et al. (2017), where the compatibility of basketball players is analyzed via a network model. We connect our paper to these works by developing a latent variable statistical model for dynamic weighted networks, in which persistence (drift) is modeled in an intuitive fashion. Our model is specifically designed for instances where a network needs to be characterized by a single evolving variable, or when one is interested in obtaining a model-based quantitative measurement of the inter-temporal development of the homogeneity of a network.
Lastly, we contribute to the literature on the stability of the Austrian interbank market. Other works in this area include Elsinger et al. (2006) and Boss et al. (2004), who have looked at possible contagious effects and descriptive statistics of the Austrian financial network. We extend their work by creating a statistical model of network evolution applied to the same dataset.
1 A Motivating Example
It is important to understand that a change in a financial network's structure can have far-reaching and non-trivial consequences. To illustrate this fact, consider a hypothetical financial network of four institutions (banks) represented by nodes, with their mutual financial exposures (debt) represented by edges. In this simple example, connections are symmetric and every bank splits its investment equally among its neighbors. Furthermore, banks are required by a regulator to always keep a capital buffer to account for unexpected withdrawals, unfavorable economic conditions and other factors. Therefore, we assume that an institution remains safe unless it loses at least half of its investment. If that happens, the institution goes bankrupt, which might further negatively affect other banks in the network. To see how network structure affects overall stability, consider a case where one of these four banks is hit by an exogenous shock such that it has to declare bankruptcy. In such a case, its neighbors will not recover their respective investments and might suffer the same fate, putting their own neighbors in danger. This contagious behavior depends on how the banks are linked together, which illustrates the importance of structure when addressing questions of systemic importance and financial stability.
For the hypothetical case of four banks, there are 11 different network structures that can possibly occur: a subset of these are shown in Figure 1.
In the case shown in Figure 1(a), there is no danger of contagion since there are no edges to propagate shocks. An analogous result follows from the network shown in Figure 1(c), where the failure of one node is not sufficient to take down the rest because every institution is well diversified (recall that we have assumed a bank needs to lose at least half of its investment to fail, while in this case the neighbors of the bankrupt institution only lose one third). Problems arise in intermediately connected systems such as the one in Figure 1(b), where an initial shock wipes out the whole system. This basic example hints at a much more complex issue of network stability that has been extensively studied by financial regulators in the past two decades. More importantly, it highlights that the level of diversification in a system may play a crucial role in determining its stability, and that assessing this trait for observed networks can prove challenging. In this paper, we address this impasse by introducing a statistical model which is specifically designed to measure the diversification of a financial system, hence obtaining a measure for one of the facets of systemic risk.
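The four-bank thought experiment can be written as a short default-cascade simulation. The ring and complete networks below are our reading of the intermediately connected and fully diversified cases; the one-half loss threshold and the equal splitting of investments follow the assumptions stated above, and all structures are illustrative:

```python
# Default-cascade sketch for the four-bank example: each bank splits its
# lending equally among its neighbours and fails once it loses at least
# half of its interbank investment.

def cascade(neighbours, initially_failed):
    """Propagate failures until no new bank defaults; return the failed set."""
    failed = set(initially_failed)
    changed = True
    while changed:
        changed = False
        for bank, nbrs in neighbours.items():
            if bank in failed or not nbrs:
                continue
            # Equal split: each failed neighbour costs 1/len(nbrs) of assets.
            lost = sum(1 for n in nbrs if n in failed) / len(nbrs)
            if lost >= 0.5:
                failed.add(bank)
                changed = True
    return failed

# Ring of four banks: two neighbours each, so one failure makes each
# neighbour lose one half and the whole ring collapses.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
# Complete network: three neighbours each, so a single failure only
# costs one third and no contagion occurs.
complete = {i: [j for j in range(4) if j != i] for i in range(4)}

ring_failed = cascade(ring, {0})
complete_failed = cascade(complete, {0})
print(len(ring_failed), len(complete_failed))  # → 4 1
```

The same routine can be reused for any small topology, which makes the dependence of contagion on structure easy to see.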
2 Networks of Interbank Exposures
A dynamic network of interbank exposures is a sequence of graphs where, for each time frame, the nodes correspond to banks and the edges correspond to the connections between them. In particular, the edges are directed and carry positive values indicating the claim of one bank on another. We note that an observed network of interbank exposures between $N$ banks over $T$ time frames may be represented as a collection of $T$ adjacency matrices of the same size $N \times N$, as in the following definition:
A sequence of absolute exposures $\{X_t\}_{t=1}^{T}$ defined on the set of nodes $\{1, \dots, N\}$ over the timespan $\{1, \dots, T\}$ consists of adjacency matrices with elements $X_{ijt} \geq 0$ for $i, j \in \{1, \dots, N\}$, $i \neq j$, and $t \in \{1, \dots, T\}$, where $X_{ijt}$ corresponds to the financial exposure of bank $i$ towards bank $j$ in period $t$.
In this paper, we study the Austrian interbank market, where an adjacency matrix contains all of the mutual claims between any two of the 800 Austrian banks in our sample at the corresponding time frame.
A sequence of relative exposures $\{Y_t\}_{t=1}^{T}$ on the set of nodes $\{1, \dots, N\}$ over the timespan $\{1, \dots, T\}$ has elements defined as follows:
$$Y_{ijt} = \frac{X_{ijt}}{\sum_{k \neq i} X_{ikt}}.$$
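In code, the relative exposures amount to normalising each row of the absolute exposure matrix by the bank's total interbank assets. A minimal sketch; the toy matrix is illustrative, not OeNB data:

```python
# Convert absolute exposures X[i][j] into relative exposures
# Y[i][j] = X[i][j] / sum_k X[i][k], row by row.

def relative_exposures(X):
    n = len(X)
    Y = [[0.0] * n for _ in range(n)]
    for i in range(n):
        row_total = sum(X[i])          # bank i's total interbank assets
        if row_total > 0:              # banks with no lending keep a zero row
            for j in range(n):
                Y[i][j] = X[i][j] / row_total
    return Y

X = [[0, 4, 4, 2],
     [5, 0, 0, 0],
     [1, 1, 0, 2],
     [0, 0, 3, 0]]
Y = relative_exposures(X)
print(Y[0])  # → [0.0, 0.4, 0.4, 0.2]
```

Each non-empty row of the resulting matrix sums to one, which is the property exploited by the Dirichlet model later on.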
This data is observed quarterly, from the spring of 2008 to the autumn of 2011; hence, $T$ adjacency matrices are available in total (see Appendix A for further details on the data structure). The sequence of relative exposures is important both in the exploratory analysis we have conducted and in our main model.
To model the dynamic evolution of a network, we assume discrete time steps in order to accommodate for our quarterly-observed data. A continuous time model in the spirit of Koskinen and Edling (2012) would constitute a possible extension of our model.
Throughout the paper, we deal with different probability distributions. The normal distribution with mean $\mu$ and variance $\sigma^2$ shall be denoted by $\mathcal{N}(\mu, \sigma^2)$, the Gamma distribution with shape parameter $a$ and scale parameter $b$ shall be referred to as Gamma$(a, b)$, and the Dirichlet distribution parametrized by a vector $\boldsymbol{\lambda}$ shall be referred to as Dir$(\boldsymbol{\lambda})$.
3 Data and Exploratory Analysis
In order to study the viability and implications of our model in practice, we use a unique dataset obtained from the Austrian National Bank (OeNB). As mentioned above, it contains credit relationships between 800 Austrian financial institutions in the time period spanning from the spring of 2008 until the autumn of 2011. We create a subsample of “core banks” to see how the implications of our model are affected by bank size. In order to do so, we look at each bank’s relevance:
The relevance of bank $i$ in time period $t$ is defined as:
$$R_{it} = \sum_{j \neq i} \left( X_{ijt} + X_{jit} \right).$$
In other words, we define relevance simply as the bank’s overall sum of its interbank assets and liabilities.
With a clear measure of systemic importance, we can now select a subsample of banks with the highest aggregated relevance $R_i = \sum_{t=1}^{T} R_{it}$. This allows us to focus on the interactions of systemically important banks and observe whether there are unique patterns. We use the aggregated relevance measure to create a new, smaller dataset consisting of the 100 most systemically relevant institutions and the connections between them. From now on, we shall refer to the full dataset and the reduced dataset as OeNB 800 and OeNB 100, respectively. The validity of the OeNB 100 subset can be justified further by examining the overall exposure of the top 100 institutions: it turns out that the 100 most systemically relevant banks account for more than 95% of all edge weights in any given time frame. In other words, the 100 most systemically relevant banks are the ones responsible for the vast majority of all exposures within the system, which makes their closer examination interesting. We plot the evolution of the average bank relevance in Figure 2. As expected from its definition, we see a sharp drop in the second half of 2008 as a direct effect of the financial crisis.
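The relevance-based subsampling can be sketched as follows: relevance is the sum of a bank's interbank assets and liabilities in a period, aggregated over time, and the most relevant banks form the reduced sample. The toy matrices are illustrative, not OeNB data:

```python
# Relevance of bank i at time t: its interbank assets (row i) plus its
# interbank liabilities (column i). Banks are ranked by the aggregate
# over all time frames and the top k are kept.

def relevance(X_t, i):
    return sum(X_t[i]) + sum(row[i] for row in X_t)

def top_banks(X_series, k):
    n = len(X_series[0])
    aggregated = [sum(relevance(X_t, i) for X_t in X_series) for i in range(n)]
    return sorted(range(n), key=lambda i: aggregated[i], reverse=True)[:k]

X1 = [[0, 9, 1], [2, 0, 0], [0, 1, 0]]
X2 = [[0, 8, 0], [3, 0, 1], [0, 0, 0]]
print(top_banks([X1, X2], 2))  # → [1, 0]
```

On the real data, the same ranking with k = 100 yields the OeNB 100 subsample.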
In order to get a better picture of the data, we have conducted a brief exploratory analysis of our dataset. In particular, it is interesting to see the evolution of connections in the sample. Table 1 and Figure 3 contain brief descriptive statistics, showing the number as well as the magnitude of connections as a function of time. The number of connections counts the edges in the network at time $t$, while the relative size depicts the overall cash flow in the market, scaled by the first observation. We would like to highlight the second and third quarters of 2008, where a drop in the overall magnitude of cash flow in the economy can be observed. This period corresponds to the financial crisis associated with the failure of Lehman Brothers in the US and the problems stemming from the housing market. Interestingly, in the Austrian interbank market, the overall number of connections does not seem to have been affected by these events as much as their size. This shows that, although Austrian banks reduced their mutual exposures significantly, they were rarely completely cut off. Another important period covers the second and third quarters of 2011, which is roughly when the European sovereign debt crisis started. At first glance, there does not seem to be much related to this event in our data. However, as we shall see later, our main model provides further insight regarding the trend in diversification during this period.
[Table 1: number of connections and relative size of exposures for each time period.]
In interbank markets, it is common to observe disassortative properties in the system, which roughly means that nodes with few neighbors tend to be connected to nodes with many neighbors and vice versa (see Hurd (2016)). This property is quite common in financial networks and differs from social networks, where individuals with a high number of “friends” tend to create “hubs” in the network; see for instance Li et al. (2014). Financial systems also tend to be very sparse. We observe the same patterns in the Austrian interbank market, as can be seen from Figure 4.
We have observed several interesting patterns in the data which suggest that a more involved model could indeed produce new insights into the evolution of bank diversification. Since the main interest of our research lies in the diversification of agents in an interbank market, we have also looked at the evolution of bank entropy over time. For this purpose, we use the standard definition of entropy:
The entropy of node $i$ at time $t$ is defined as:
$$E_{it} = -\sum_{j \neq i} Y_{ijt} \log Y_{ijt},$$
with the convention $0 \log 0 = 0$.
Speaking more plainly, this quantity describes how an institution distributes its assets among counterparties. A bank with only one debtor would have entropy equal to zero, since its relative exposure is one for that debtor and zero for all the other banks. With an increased number of debtors with equal exposures, a node’s entropy is increased and, for a fixed number of debtors, the entropy of a node is maximized when its assets are distributed evenly among neighbors. Ergo, if two nodes have the same number of outgoing connections, one may view the one with a higher entropy as better diversified.
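As a quick illustration of the entropy measure described above (the standard formula $-\sum_j Y_{ijt} \log Y_{ijt}$, with $0 \log 0$ taken as zero), the toy rows below show the two extreme cases from the text:

```python
import math

# Entropy of a node's relative exposures. A single debtor gives entropy
# zero; an even split over m debtors gives the maximum, log(m).

def node_entropy(relative_row):
    return sum(-y * math.log(y) for y in relative_row if y > 0)

single = [1.0, 0.0, 0.0]        # one debtor: entropy 0
even   = [1/3, 1/3, 1/3]        # even split over three debtors: log(3)
print(node_entropy(single), round(node_entropy(even), 4))  # → 0.0 1.0986
```

The same function applied to every row of the relative exposure matrices reproduces the per-bank entropies used in Figure 5.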
In Figure 5, we plot the change in the nodes’ entropies in consecutive periods (the difference in entropy between periods $t$ and $t+1$). One can observe an increase in both mean and variance from the 13th to the 14th period, which corresponds to the sovereign debt crisis in Europe. At that point, future bailouts of several EU countries were uncertain, which might have added to the volatility in the market. Interestingly, no similar effect can be seen during the 2008 crisis. In our model, we take all information from this exploratory analysis into account and focus further on the diversification part of the story. In addition to the analysis of bank entropy, we extend this line of reasoning and create a measure of the overall trend in diversification. There are other ways of assessing the temporal evolution of node exposure homogeneity. We have chosen entropy for our exploratory analysis, as it constitutes a simple, clean and easily tractable approach, but one could easily turn to other measures, e.g. the Herfindahl index, as is common practice in the economics literature.
4 The Model
We use the relative interbank exposures from Definition 2.2, assuming that there are no self-connections, i.e. $Y_{iit} = 0$; when not stated otherwise, we always work with $i, j \in \{1, \dots, N\}$, $i \neq j$, and $t \in \{1, \dots, T\}$. As these are relative exposures, it follows from the definition that they satisfy:
$$\sum_{j \neq i} Y_{ijt} = 1 \qquad \text{for every } i \text{ and } t.$$
We propose to model the vector $Y_{it} = (Y_{i1t}, \dots, Y_{iNt})$ as a Dirichlet random vector characterized by the parameters $\boldsymbol{\lambda}_{it} = (\lambda_{i1t}, \dots, \lambda_{iNt})$, where $\lambda_{ijt} > 0$. Following the established standard in latent variable models, the data are assumed to be conditionally independent given the latent parameters $\boldsymbol{\lambda}$. Hence, the model likelihood reads as follows:
$$p\left(Y \mid \boldsymbol{\lambda}\right) = \prod_{t=1}^{T} \prod_{i=1}^{N} \left[ \frac{\Gamma\!\left(\sum_{j \neq i} \lambda_{ijt}\right)}{\prod_{j \neq i} \Gamma\left(\lambda_{ijt}\right)} \prod_{j \neq i} Y_{ijt}^{\lambda_{ijt}-1} \right],$$
where, again, $j$ varies in $\{1, \dots, N\}$ and is different from $i$, and $\Gamma(\cdot)$ denotes the gamma function.
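The Dirichlet likelihood can be evaluated numerically with log-gamma functions. A minimal sketch of the log-density of a single bank-period observation (the vector `y` and the parameters below are toy values, not fitted quantities; the density is the standard Dirichlet density):

```python
import math

# Log-density of a Dirichlet observation y (one bank's relative exposures
# in one period) with concentration parameters lam, using math.lgamma for
# numerical stability.

def dirichlet_logpdf(y, lam):
    assert abs(sum(y) - 1.0) < 1e-9          # y must lie on the simplex
    log_norm = math.lgamma(sum(lam)) - sum(math.lgamma(l) for l in lam)
    return log_norm + sum((l - 1.0) * math.log(v) for l, v in zip(lam, y))

y = [0.2, 0.3, 0.5]
# With all parameters equal to 1 the density is uniform on the 2-simplex,
# whose value is 2, so the log-density is log(2).
print(round(dirichlet_logpdf(y, [1.0, 1.0, 1.0]), 4))  # → 0.6931
```

Summing such terms over all banks and time frames gives the full log-likelihood used in the sampler.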
As concerns the parameters, we separate a trend component from the sender and receiver random effects through the following deterministic representation:
$$\lambda_{ijt} = \exp\left(\mu_t + \theta_i + \gamma_j\right).$$
With this formulation, the model parameters $\mu_t$, $\theta_i$ and $\gamma_j$ possess a straightforward interpretation.
4.1 Interpretation of model parameters
Before we move to parameter interpretation, we would like to note the effect of the concentration parameter $\lambda$ on a symmetric Dirichlet random vector $Y \sim \text{Dir}(\lambda, \dots, \lambda)$. Namely, it is important to see that the variance of the components of $Y$ decreases as $\lambda$ increases. Since the values generated from a Dirichlet distribution lie in a simplex, low variance translates to components that are all more or less equal. High variance, however, implies that one of the components turns out to be close to one while all the others are close to zero. This mechanic closely mimics the high-entropy homogeneous regime and the low-entropy heterogeneous regime introduced in Section 3, respectively.
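This variance argument is easy to check by simulation: a symmetric Dirichlet vector can be sampled by normalising independent Gamma draws (a standard construction, here with the stdlib `random.gammavariate`; the concentration values 0.2 and 20 are arbitrary toy choices):

```python
import random

# A symmetric Dirichlet(lam, ..., lam) draw is obtained by normalising
# independent Gamma(lam, 1) variates. Larger lam -> lower component
# variance -> more homogeneous (diversified) exposure vectors.

def sample_symmetric_dirichlet(lam, n, rng):
    g = [rng.gammavariate(lam, 1.0) for _ in range(n)]
    total = sum(g)
    return [x / total for x in g]

def first_component_variance(lam, n=4, draws=20000, seed=1):
    rng = random.Random(seed)
    xs = [sample_symmetric_dirichlet(lam, n, rng)[0] for _ in range(draws)]
    mean = sum(xs) / draws
    return sum((x - mean) ** 2 for x in xs) / draws

var_low = first_component_variance(0.2)    # small lam: heterogeneous regime
var_high = first_component_variance(20.0)  # large lam: homogeneous regime
print(var_low > var_high)  # → True
```

The Monte Carlo estimates track the closed-form component variance $p(1-p)/(N\lambda + 1)$ with $p = 1/N$, which shrinks as $\lambda$ grows.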
In fact, in our formulation, the contribution given by $\mu_t + \theta_i$ affects all of the components of $\boldsymbol{\lambda}_{it}$ in a symmetric fashion. Hence, we are essentially capturing the level of homogeneity in the network through a homogeneity trend parameter $\mu_t$ and a node-specific homogeneity random effect $\theta_i$. In other words, an increase in $\mu_t + \theta_i$ corresponds to higher diversification of exposures for bank $i$ at time $t$, resulting in a more homogeneous network structure. Vice versa, a decrease is linked with less diversification, which in turn results in a more heterogeneous network structure.
The interpretation of $\gamma_j$ is similar. To see this, consider a non-symmetric random vector $Y \sim \text{Dir}(\lambda_1, \dots, \lambda_N)$. In this case, an increase in a single parameter component $\lambda_j$ determines a higher expected value of $Y_j$, at the expense of the other elements of $Y$. In our context, an increase in $\gamma_j$ tends to increase the weight of all edges that $j$ receives from its counterparties. Equivalently, one can say that in such a case the bank becomes more attractive, in the sense that other banks concentrate their exposures more towards $j$.
the parameter $\mu_t$ indicates the global homogeneity level at time frame $t$;
the parameter $\theta_i$ characterizes the individual homogeneity level of bank $i$ as a random effect;
the parameter $\gamma_j$ represents bank $j$'s attractiveness.
4.2 Bayesian hierarchical structure
We complete our model by introducing the following Bayesian hierarchical structure on the parameters we have mentioned earlier.
We assume a random walk process prior on the drift parameters $\mu_1, \dots, \mu_T$ as follows:
$$\mu_1 \sim \mathcal{N}\!\left(0, \tau_0^{-1}\right), \qquad \mu_t \mid \mu_{t-1} \sim \mathcal{N}\!\left(\mu_{t-1}, \tau_\mu^{-1}\right) \quad \text{for } t = 2, \dots, T,$$
where $\tau_\mu \sim \text{Gamma}(a_\mu, b_\mu)$. The hyperparameter $\tau_0$ is user-defined and set to a small value to support a wide range of initial conditions. The hyperparameters $a_\mu$ and $b_\mu$ are also user-defined and set to small values for a non-informative setting.
The parameters $\theta_i$ and $\gamma_j$ are assumed to be i.i.d. Gaussian variables with:
$$\theta_i \sim \mathcal{N}\!\left(0, \tau_\theta^{-1}\right), \qquad \gamma_j \sim \mathcal{N}\!\left(0, \tau_\gamma^{-1}\right),$$
where $\tau_\theta \sim \text{Gamma}(a_\theta, b_\theta)$ and $\tau_\gamma \sim \text{Gamma}(a_\gamma, b_\gamma)$. As with the other hyperparameters, $a_\theta$, $b_\theta$, $a_\gamma$ and $b_\gamma$ are also set to small values to keep a non-informative setting.
The arrangement of parameters in Figure 6 summarizes the dependencies in our model graphically.
5 Parameter estimation
Our proposed model has $T$ drift parameters ($\mu_t$), $N$ diversification parameters ($\theta_i$), $N$ attractiveness parameters ($\gamma_j$), and three precision parameters ($\tau_\mu$, $\tau_\theta$, $\tau_\gamma$). We use this section to describe their estimation procedure.
5.1 Identifiability
The additive structure in the representation of $\lambda_{ijt}$ yields a non-identifiable likelihood model. For example, one may define $\mu_t' = \mu_t + c$ and $\theta_i' = \theta_i - c$ for some constant $c$, and the likelihood value would be the same for the two configurations, i.e. $p(Y \mid \boldsymbol{\mu}', \boldsymbol{\theta}', \boldsymbol{\gamma}) = p(Y \mid \boldsymbol{\mu}, \boldsymbol{\theta}, \boldsymbol{\gamma})$. One way to deal with such an identifiability problem would be to include a penalization through the priors on $\boldsymbol{\theta}$ and $\boldsymbol{\gamma}$. One could specify more informative Gaussian priors centered at zero, which would in turn shrink the parameters to be distributed around zero.
However, such an approach may also interfere with the results, since the model would not be able to capture the presence of outliers. Hence, we opt for a more commonly accepted method, and impose the random effects to sum to zero. This is expressed through the following constraint:
$$\sum_{i=1}^{N} \theta_i = 0, \qquad \sum_{j=1}^{N} \gamma_j = 0.$$
This new, constrained model is identifiable.
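Assuming a log-linear additive form for the Dirichlet parameters (our reading of the additive structure described above), the sum-to-zero identification amounts to centring the random effects and absorbing their mean into the trend, which leaves every $\lambda_{ijt}$ unchanged. A sketch for the $\theta$ effects:

```python
# Centre the node effects theta and absorb their mean into the trend mu.
# Because the representation is additive on the log scale, mu_t + theta_i
# (and hence every lambda) is preserved by this reparameterisation.

def identify(mu, theta):
    mean_theta = sum(theta) / len(theta)
    theta_c = [th - mean_theta for th in theta]   # now sums to zero
    mu_c = [m + mean_theta for m in mu]           # shift absorbed by trend
    return mu_c, theta_c

mu, theta = [0.5, 0.7], [1.0, 2.0, 3.0]           # toy values
mu_c, theta_c = identify(mu, theta)
print(sum(theta_c), mu_c[0] + theta_c[0])  # → 0.0 1.5
```

The same centring applied to the $\gamma$ effects removes the remaining shift invariance.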
5.2 Markov chain Monte Carlo
The posterior distribution associated to our model factorizes as follows:
$$p\left(\boldsymbol{\mu}, \boldsymbol{\theta}, \boldsymbol{\gamma}, \tau_\mu, \tau_\theta, \tau_\gamma \mid Y\right) \propto p\left(Y \mid \boldsymbol{\mu}, \boldsymbol{\theta}, \boldsymbol{\gamma}\right) p\left(\boldsymbol{\mu} \mid \tau_\mu\right) p\left(\boldsymbol{\theta} \mid \tau_\theta\right) p\left(\boldsymbol{\gamma} \mid \tau_\gamma\right) p\left(\tau_\mu\right) p\left(\tau_\theta\right) p\left(\tau_\gamma\right).$$
We adopt a fully Bayesian approach, relying on Markov chain Monte Carlo to obtain a random sample from this posterior distribution. Note that, in the following steps, the indices $i$ and $j$ vary in $\{1, \dots, N\}$ and $t$ varies in $\{1, \dots, T\}$, with the only restriction that $j$ is always different from $i$. Also, an indicator $\mathbb{1}\{A\}$ is equal to one if the event $A$ is true and zero otherwise. We use a Metropolis-within-Gibbs sampler that alternates the following steps:
Sample $\mu_t$, for all $t \in \{1, \dots, T\}$, from its full-conditional distribution using Metropolis-Hastings with a Gaussian proposal;
Sample $\theta_i$, for all $i \in \{1, \dots, N\}$, from its full-conditional distribution using Metropolis-Hastings with a Gaussian proposal;
Sample $\gamma_j$, for all $j \in \{1, \dots, N\}$, from its full-conditional distribution using Metropolis-Hastings with a Gaussian proposal;
Sample $\tau_\mu$ from its conjugate Gamma full-conditional;
Sample $\tau_\theta$ from its conjugate Gamma full-conditional;
Sample $\tau_\gamma$ from its conjugate Gamma full-conditional.
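Each of the Metropolis-Hastings steps above shares the same random-walk structure: propose from a Gaussian centred at the current value, then accept with the usual ratio of full-conditional densities. The sketch below shows this generic building block on a stand-in log-density (a standard normal toy target, not one of the actual full-conditionals):

```python
import math
import random

# Generic random-walk Metropolis-Hastings update, the building block of the
# Metropolis-within-Gibbs steps above. `log_target` stands in for any
# (unnormalised) full-conditional log-density.

def mh_step(current, log_target, proposal_sd, rng):
    proposal = rng.gauss(current, proposal_sd)
    log_ratio = log_target(proposal) - log_target(current)
    if rng.random() < math.exp(min(0.0, log_ratio)):
        return proposal, True       # accept
    return current, False           # reject

def run_chain(log_target, n_iter=50000, proposal_sd=2.0, seed=3):
    rng = random.Random(seed)
    x, draws, accepted = 0.0, [], 0
    for _ in range(n_iter):
        x, ok = mh_step(x, log_target, proposal_sd, rng)
        accepted += ok
        draws.append(x)
    return draws, accepted / n_iter

draws, rate = run_chain(lambda x: -0.5 * x * x)   # standard normal toy target
mean = sum(draws) / len(draws)
print(0.1 < rate < 0.9, abs(mean) < 0.15)
```

In the actual sampler, the proposal standard deviation is tuned per parameter so that the acceptance rate stays in a reasonable range, exactly as this generic step allows.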
The random draws obtained for the model parameters are then used to characterize their posterior distribution given the data.
5.3 Additional details
We ran our Metropolis-within-Gibbs sampler on both datasets, OeNB 800 and OeNB 100. For both datasets, an initial portion of the chain was discarded as burn-in, and the remaining sample was thinned by saving only every few draws to produce the final results. In summary, we obtained a final sample of posterior draws for each model parameter.
For the parameters $\mu_t$, $\theta_i$ and $\gamma_j$, Metropolis-Hastings updates were used: the Gaussian proposal variances were tuned individually for each parameter to keep all of the acceptance rates within a suitable range. The trace plots and convergence diagnostic tests all showed very good convergence of the Markov chain.
Similarly to many other latent variable models for networks, the computational cost required by our sampler grows quadratically in the number of nodes. We implemented the algorithm in C++ and used parallel computing via the OpenMPI library to speed up the procedure. We note that, for the full dataset, an iteration required a matter of seconds on average on a multi-core Debian machine. The code is available from the authors upon request.
6 Analysis of the results
First, we study the diversification of the banks, which translates into changes in network homogeneity. The drift parameter $\mu_t$, shown in Figure 7, exhibits an upward trend for both datasets.
This trend is in both cases more pronounced during the onset of the 2011 sovereign debt crisis. Furthermore, we observe a sharper increase in OeNB 100 during this time period. This signals that the larger, systemically relevant banks were the ones with the stronger reaction to the crisis, revealing not only a change in their diversification policy but also a difference in risk appetite between the two classes of agents in the network. Interestingly, we do not observe similar behavior during the crisis in 2008.
In the exploratory analysis conducted earlier, we saw a substantial drop in the overall size of exposures in 2008 and almost no such effect in 2011. Paradoxically, 2011 is when we observe a large upward shift in diversification, while the same effect in 2008 is limited at best. One takeaway is that Austrian banks perceived the sovereign debt crisis as a bigger threat than the 2008 crisis stemming from the US housing market. Furthermore, the effect is more pronounced in the OeNB 100 sample. This suggests that bigger banks tend to react more strongly in the face of adverse conditions by increasing their level of diversification, while less relevant banks tend to keep their exposures less diversified.
Besides the overall development of diversification in the system, we also study the local interactions of banks in the sample. This can be achieved by observing the parameters $\theta_i$, which characterize each bank's individual diversification appetite.
First, we analyze point estimates of these parameters: Figure 8 shows the distribution of the posterior means of $\theta_i$.
For both OeNB 800 and OeNB 100, the distribution seems to be rather heavy-tailed. This translates to a system where the majority of banks exhibit low diversification, but a fairly large number of banks still diversify much more. In fact, Figure 9 highlights that more relevant banks tend to have a more pronounced diversification, whereas small banks do not diversify as much.
This observation further supports the stylized view of financial networks, in which disassortative behavior is very common.
A similarly heavy-tailed distribution can be observed for the attractiveness parameters $\gamma_j$ (see Figure 10 for the distribution of the point estimates).
In addition, Figure 11 shows that, generally, $\theta_i$ and $\gamma_i$ are closely related in both datasets.
This figure highlights that larger banks tend to be simultaneously more diversified and more attractive, while, vice versa, small banks often play a role in the periphery of the network as offshoots of a larger bank. A similar observation of heavy-tailedness in the degree distribution has also been reported by Boss et al. (2004).
As concerns the uncertainty around the point estimates, Figure 12 compares the posterior variances of the estimates with the corresponding parameters. We note that there seems to be no obvious pattern and no apparent relation with the relevance of the corresponding banks. We point out, however, that the two plots use different scales on both axes; this is expected, since much more data is available for inference in the OeNB 800 dataset, hence yielding more reliable estimates.
Finally, we also show the posterior densities for the precision parameters $\tau_\mu$, $\tau_\theta$ and $\tau_\gamma$ in Figure 13.
For both datasets, these plots confirm that the drift parameter is rather stable over time, and that the diversification and attractiveness are not particularly diverse across banks, overall.
7 Conclusions
This paper’s main contribution is to propose a new framework to model the evolution of dynamic weighted networks, and to capture the systematic parts of their development. Our application to the Austrian interbank market serves as an example of how such a model can be used in practice as a means to measure exposure diversification and, hence, one aspect of systemic risk. In our analysis we have shown that the Austrian market exhibited a sustained increase in banks’ diversification, possibly as a reaction to the 2008 financial crisis. In particular, differently from a descriptive analysis, our model captured a distinct upward shift in network homogeneity as a response to the sovereign debt crisis of 2011. These findings may be of particular use to regulators and central banks in assessing and designing future policy.
Our results also showed that the roles played by different banks can be vastly different, particularly in the context of exposure diversification. Our findings emphasize that larger banks, which are generally more susceptible to systemic risk, tend to use more conservative strategies and to spread their credit risks out evenly.
One limitation of our modeling framework is that it only focuses on the relative exposures, hence discarding the real magnitudes of the claims. Future extensions of this work may consider a joint modeling of the exposure values and how they are diversified among neighbors.
Another possible extension of our framework would include a more sophisticated prior structure on the model parameters. For example, one may define a clustering problem on the banks, where different clusters are characterized by different network homogeneity drifts $\mu_t$.
Finally, we would like to remark that the Dirichlet likelihood specification is not the only possible one. Besides, the Dirichlet distribution is known to exhibit very little flexibility, since, when the variance is small, it tends to assign most of the probability density to the highest-entropy configurations. This does not necessarily reflect the features exhibited by the data. However, we argue that in our application the Dirichlet assumption is very reasonable, and, more importantly, it provides a convenient framework with a straightforward interpretation of the model parameters.
References
- Acemoglu et al. (2015) Acemoglu, D., A. Ozdaglar, and A. Tahbaz-Salehi, 2015, Systemic risk and stability in financial networks, American Economic Review 105, 564–608.
- Allen and Gale (2000) Allen, F., and D. Gale, 2000, Financial contagion, Journal of Political Economy 108, 1–33.
- Boss et al. (2004) Boss, M., H. Elsinger, M. Summer, and S. Thurner, 2004, Network topology of the interbank market, Quantitative Finance 4, 677–684.
- Chakrabarti (2017) Chakrabarti, D., 2017, Modeling node incentives in directed networks, The Annals of Applied Statistics 11, 2298–2331.
- Daudin et al. (2008) Daudin, J.-J., F. Picard, and S. Robin, 2008, A mixture model for random graphs, Statistics and Computing 18, 173–183.
- Elliott et al. (2014) Elliott, M., B. Golub, and M. O. Jackson, 2014, Financial networks and contagion, American Economic Review 104, 3115–53.
- Elsinger et al. (2006) Elsinger, H., A. Lehar, and M. Summer, 2006, Risk assessment for banking systems, Management Science 52, 1301–1314.
- Frey and Hledik (2014) Frey, R., and J. Hledik, 2014, Correlation and contagion as sources of systemic risk.
- Friel et al. (2016) Friel, N., R. Rastelli, J. Wyse, and A. E. Raftery, 2016, Interlocking directorates in Irish companies using a latent space model for bipartite networks, Proceedings of the National Academy of Sciences 113, 6629–6634.
- Gai and Kapadia (2010) Gai, P., and S. Kapadia, 2010, Contagion in financial networks, in Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, The Royal Society.
- Glasserman and Young (2016) Glasserman, P., and H. P. Young, 2016, Contagion in financial networks, Journal of Economic Literature 54, 779–831.
- Goncharenko et al. (2015) Goncharenko, R., J. Hledik, and R. Pinto, 2015, The dark side of stress test: Negative effects of information disclosure.
- Hoff et al. (2002) Hoff, P. D., A. E. Raftery, and M. S. Handcock, 2002, Latent space approaches to social network analysis, Journal of the American Statistical Association 97, 1090–1098.
- Hurd (2016) Hurd, T. R., 2016, Contagion! Systemic Risk in Financial Networks (Springer).
- Ji and Jin (2016) Ji, P., and J. Jin, 2016, Coauthorship and citation networks for statisticians, The Annals of Applied Statistics 10, 1779–1812.
- Koskinen and Edling (2012) Koskinen, J., and C. Edling, 2012, Modelling the evolution of a bipartite network peer referral in interlocking directorates, Social Networks 34, 309–322.
- Li et al. (2014) Li, M., S. Guan, C. Wu, X. Gong, K. Li, J. Wu, Z. Di, and C. H. Lai, 2014, From sparse to dense and from assortative to disassortative in online social networks, Scientific reports 4, 4861.
- Matias and Miele (2017) Matias, C., and V. Miele, 2017, Statistical clustering of temporal networks through a dynamic stochastic block model, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 79, 1119–1141.
- McLaughlin and EmBree (2018) McLaughlin, K. R., and J. D. EmBree, 2018, Empirical assessment of programs to promote collaboration: A network model approach, The Annals of Applied Statistics 12, 654–682.
- Nowicki and Snijders (2001) Nowicki, K., and T. A. B. Snijders, 2001, Estimation and prediction for stochastic blockstructures, Journal of the American Statistical Association 96, 1077–1087.
- Rastelli et al. (2016) Rastelli, R., N. Friel, and A. E. Raftery, 2016, Properties of latent variable network models, Network Science 4, 407–432.
- Sarkar and Moore (2006) Sarkar, P., and A. W. Moore, 2006, Dynamic social network analysis using latent space models, in Advances in Neural Information Processing Systems, 1145–1152.
- Sewell and Chen (2016) Sewell, D. K., and Y. Chen, 2016, Latent space models for dynamic networks with weighted edges, Social Networks 44, 105–116.
- Upper (2011) Upper, C., 2011, Simulation methods to assess the danger of contagion in interbank markets, Journal of Financial Stability 7, 111–125.
- Xin et al. (2017) Xin, L., M. Zhu, and H. Chipman, 2017, A continuous-time stochastic block model for basketball networks, The Annals of Applied Statistics 11, 553–597.
- Xu and Hero (2014) Xu, K. S., and A. O. Hero, 2014, Dynamic stochastic blockmodels for time-evolving social networks, IEEE Journal of Selected Topics in Signal Processing 8, 552–562.
- Yang et al. (2011) Yang, T., Y. Chi, S. Zhu, Y. Gong, and R. Jin, 2011, Detecting communities and their evolutions in dynamic social networks – a bayesian approach, Machine learning 82, 157–189.
Appendix A Data Transformation
The source data from the Austrian National Bank consist of four variables: a timestamp, the ID of the lending bank, the ID of the borrowing bank, and the relative exposure of the former towards the latter. We use the term relative because the largest exposure in each time period is assumed to be of size 1, and all other exposures in that period are scaled accordingly, so that their relative sizes remain unchanged. As a result, in each time period all exposures lie in an interval whose upper bound of 1 is attained by the highest exposure. Formally, making use of Definition 2.1, the observable data in our sample can be viewed as a dynamic adjacency matrix:
A sequence of observable exposures on the set of nodes over the timespan is defined as follows:
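The symbols of the original definition were lost in extraction; a minimal formalization with illustrative notation (the labels $\mathcal{Y}$, $\mathcal{N}$, and $T$ are ours, not necessarily the authors') would read:

```latex
\mathcal{Y} = \left\{ Y^{(t)} \right\}_{t = 1, \dots, T},
\qquad
Y^{(t)} = \left( y_{ij}^{(t)} \right)_{i, j \in \mathcal{N}},
\qquad
y_{ij}^{(t)} \in [0, 1],
\qquad
\max_{i, j} \, y_{ij}^{(t)} = 1 ,
```

where $y_{ij}^{(t)}$ denotes the relative exposure of lender $i$ towards borrower $j$ in time period $t$, and the maximum condition expresses the per-period normalization described above.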
It is not possible to carry out an inter-temporal analysis of changes in exposures while working directly with this sequence, because every exposure is scaled against the highest exposure in its own time period. In order to circumvent this issue and obtain information that is comparable across time, we devised the following procedure.
We make an assumption about the stability of the Austrian market: when looking at the change of a particular edge value between two consecutive periods, the ratio with the highest likelihood of occurrence in the sample corresponds to banks keeping the absolute value of their exposures unchanged. Indeed, after examining this ratio across all pairs of consecutive periods, we observe that the most frequent value is situated in the middle of the sample and is always a clear outlier in terms of likelihood of occurrence. In most cases this value is close to 1, which suggests that the largest exposure in the network is mostly stable. An exception arises between dates 2 and 3, corresponding to the second and third quarters of 2008. As this is exactly the height of the US subprime mortgage crisis, we believe that the “big players” in our dataset were affected by these events, which changed their exposures and led to a substantial rescaling of the whole system. According to our methodology, the largest exposure in the network dropped to almost one third of its value within two quarters, but it gradually returned to its former level.
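As a sketch of how this rescaling could be implemented (the function name, data layout, and the histogram-based mode estimate are our own assumptions, not the authors' code): the modal ratio of persistent edge values between consecutive periods is taken to correspond to unchanged absolute exposures, and dividing by it undoes the per-period renormalization.

```python
import numpy as np

def rescale_exposures(snapshots, bins=200):
    """Make per-period relative exposures comparable across time.

    snapshots: list of dicts mapping (lender, borrower) -> relative exposure,
               each period normalized so that its largest exposure equals 1.
    Returns a new list in which every period after the first is multiplied by
    the cumulative product of inverse modal edge ratios between consecutive
    periods, so that the modal edge keeps a constant value over time.
    """
    rescaled = [dict(snapshots[0])]
    scale = 1.0
    for prev, curr in zip(snapshots, snapshots[1:]):
        common = set(prev) & set(curr)              # edges present in both periods
        ratios = np.array([curr[e] / prev[e] for e in common])
        # Estimate the modal ratio as the centre of the most populated
        # histogram bin (a simple mode estimate for continuous data).
        counts, edges = np.histogram(ratios, bins=bins)
        k = counts.argmax()
        modal_ratio = 0.5 * (edges[k] + edges[k + 1])
        # Assumption from the text: the modal ratio reflects unchanged
        # absolute exposures, i.e. it equals the ratio of the two periods'
        # (unobserved) normalizing maxima; dividing removes that effect.
        scale /= modal_ratio
        rescaled.append({e: v * scale for e, v in curr.items()})
    return rescaled
```

On a toy example where the largest exposure doubles between two periods while all other absolute exposures stay fixed, the procedure restores the unchanged edges to their original values and reports the doubled edge at roughly twice its old level.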
This procedure makes it straightforward to rescale the whole dataset. Although we still cannot observe the actual levels of exposures between banks in our sample, we are now able to compare them inter-temporally, which is an extremely useful property.