Learning Queuing Networks by Recurrent Neural Networks
It is well known that building analytical performance models in practice is difficult because it requires a considerable degree of proficiency in the underlying mathematics. In this paper, we propose a machine-learning approach to derive performance models from data. We focus on queuing networks, and crucially exploit a deterministic approximation of their average dynamics in terms of a compact system of ordinary differential equations. We encode these equations into a recurrent neural network whose weights can be directly related to model parameters. This allows for an interpretable structure of the neural network, which can be trained from system measurements to yield a white-box parameterized model that can be used for prediction purposes such as what-if analyses and capacity planning. Using synthetic models as well as a real case study of a load-balancing system, we show the effectiveness of our technique in yielding models with high predictive power.
Keywords: software performance, queuing networks, recurrent neural networks
Performance metrics such as throughput and response time are important factors that impact the quality of a software system as perceived by its users. They indicate how well the software behaves, thus complementing functional properties that concern what the software does. A traditional way of reasoning about the performance of a software system is profiling. A tool such as Gprof executes the program and identifies the program locations that are most performance-sensitive . The main limitation is that this information is valid only for the specific run with which the program is exercised; different inputs lead, in general, to different performance profiles. Thus, while profiling can detect the presence of performance anomalies, it lacks generalizing and predictive power (see also ).
As with all scientific and engineering disciplines, predictions can be made with models. Software performance models are mathematical abstractions whose analysis provides quantitative insights into real systems under consideration . Typically, these are stochastic models based on Markov chains and other higher-level formalisms such as queueing networks, stochastic process algebra, and stochastic Petri nets (see, e.g.,  for a detailed account). Although they have proved effective in describing and predicting the performance behavior of complex software systems (e.g., [8, 50]), a pressing limitation is that the current state of the art hinges on considerable craftsmanship to distill the appropriate abstraction level from a concrete software system, and relevant mathematical skills to develop, analyze, and validate the model. Indeed, the amount of knowledge required in both the problem domain and in the modeling techniques necessarily hinders their use in practice .
Despite the promises that analytical performance modeling holds, we are confronted with a high adoption barrier. A possible solution might be to derive the model automatically. There has been much research into extending higher-level descriptions such as UML diagrams with performance annotations (using for example appropriate profiles such as MARTE ) from which both software artifacts and associated performance models are generated (see the surveys [6, 36]). However, since systems are typically subjected to further modifications, the hard problem of keeping the model synchronized with the code arises . This makes such model-driven approaches particularly difficult to use in general, especially in the context of fast-paced software processes characterized by continuous integration and development.
In this paper we propose a novel methodology where analytical performance models are automatically learned from a running system using execution traces. We focus on queueing networks (QNs), a formalism that has enjoyed considerable attention in the software performance engineering community, since it has been shown to be able to capture main performance-related phenomena in software architectures , annotated UML diagrams , component-based systems , web services , and adaptive systems [3, 29].
A QN is characterized by a number of parameters that define the following quantities: i) the behavior of each shared resource, such as its service demand and the concurrency level, which describe the amount of time that a client spends at the resource and the number of independent entities that can provide the service (e.g., number of threads in the pool or number of CPU cores), respectively; ii) the behavior of clients in terms of their operational profile, i.e., how they traverse the resources.
Some of these parameters can be assumed to be known. For instance, the number of CPU cores is available from the hardware specification (or from the virtual-machine settings in a virtualized environment); the number of worker threads is a configuration parameter in most servers. Other parameters are more difficult to identify: the service demands, which depend on the execution behavior of the program that requests access to a shared resource; and the routing matrix, which defines how clients (probabilistically) move between queuing stations.
In our approach, the input is the set of shared resources and their concurrency level. The objective is to discover the QN model, i.e., the topology of the network and the service demands.
Obviously, the problem of learning a mathematical model from data is not new. In the specific case of identifying the parameters of a QN, a substantial amount of research has gone into estimating service demands only (see Section 5.3 for a more detailed account of related work). In contrast, we are not aware of approaches that estimate both the service demands and the topology. This setting is rather difficult from a mathematical viewpoint because, as will be formalized later, routing probabilities and service demands appear as multiplicative factors in the dynamical equations that describe the evolution of a QN . Since learning a QN can be understood as fitting the parameters to match these equations by some form of optimization, using both routing probabilities and service demands as decision variables induces a nonlinear problem, which is very difficult to handle in general. A further problem, besides nonlinearity, is scalability: the exact dynamical equations of a QN incur the well-known state explosion problem, because the number of discrete states to keep track of grows combinatorially with the number of clients and queuing stations.
Learning method: recurrent neural networks.
To cope with both issues, we propose a learning method based on recurrent neural networks (RNNs), owing to their ability to fit nonlinear systems . In particular, we develop a new RNN architecture which encodes the QN dynamics in an interpretable fashion, i.e., by associating the weights of the RNN with QN parameters such as concurrency levels, routing probabilities, and service rates. A key instrument is the use of a compact system of approximate (but still nonlinear) equations of the QN dynamics instead of the combinatorially large, but exact, original system of equations. This approximation—called fluid or mean-field—consists of only one ordinary differential equation (ODE) for each station, describing the time evolution of its queue length, i.e., the number of clients contending for that resource. In practice, the fluid approximation provides an estimate of the average queue length of the underlying stochastic process. The approximation procedure is based on a fundamental result by Kurtz  and is well known in the literature, e.g., . In the field of software performance, it has been used for the analysis of variability-intensive software systems [34, 35] and for model-based runtime software adaptation using online optimization  or a satisfiability-modulo-theories approach . This formulation has also been recently adopted for learning, but for service demands only , thus casting the problem into a (considerably) simpler quadratic programming one.
The connection between RNNs and ODEs is not new in the literature. In , the authors show that recurrent neural networks can be thought of as discretizations of continuous dynamical systems, while a specialized training algorithm for ODEs has recently been proposed in . However, despite the proliferation of work along this research direction, there is still no clear understanding of how to employ such artificial-intelligence/machine-learning techniques to support performance-engineering tasks such as modeling, estimation, and optimization .
The main technical contribution of this paper is to show that there is a direct association between the structure of the QN fluid approximation and standard activation functions and layers of an RNN. To the best of our knowledge, this is the first approach that formally unifies the expressiveness of analytical performance models with the learning capability of machine learning, contributing to positively answering the question whether “AI will be at the core of performance engineering” .
The RNN is trained using time series of measured queue lengths at each service station. Its learned weights can be interpreted back as a QN with learned parameters, which can be used for predictive purposes. It is worth remarking that, in principle, one could learn a QN model by relying on a standard, black-box RNN architecture, treating all the QN parameters (i.e., initial population, service demands, number of servers, and routing probabilities) as input features of the learning algorithm. Unfortunately, this straightforward approach would require a considerable amount of input traces, since the learning algorithm could not exploit the structural information about the problem. For instance, it would not be possible to perform accurate what-if analyses by varying the value of a parameter if the network had not been trained with input configurations covering at least some variations of that parameter. Moreover, in such a setting, it would not even be clear which weights must be altered, and how, to reflect the changes in the model.
Instead, here we report on the effectiveness and the generalizing power of our method by considering both synthetic benchmarks on randomly generated QNs and a real web application deployed according to the load-balancing architectural style. In both cases, we evaluate the predictive power of the learned model in matching the transient as well as steady-state dynamics of unseen configurations (i.e., obtained by varying the system workload, number of servers, and routing probabilities), reporting prediction errors below 10% across a validation set of 2000 instances.
We provide some background about QNs in Section 2. The learning methodology is presented in Section 3, which discusses how to encode a time-discretized version of the fluid approximation into an RNN where the weights represent the model parameters to identify. Section 4 presents the numerical evaluation on both the synthetic benchmarks and the real case study, providing implementation details on the RNN and on the used benchmark application. Section 5 discusses further related work. Section 6 concludes.
To make the paper self-contained, we present some background on QNs with the objective of motivating the fluid approximation as a deterministic estimator of average queue lengths, which will be used for the RNN encoding.
2.1 Queuing Networks
We assume closed QNs, where clients keep circulating between queuing stations. A closed QN is formally defined by the following:
: the number of clients in the network;
: the number of queuing stations;
: the vector of concurrency levels, where gives the number of independent servers at station , with ;
: the vector of service rates, i.e., is the mean service demand at station , with ;
: the routing probability matrix, where each element gives the probability that a client goes to station upon completion at station ;
: the initial condition, i.e., is the number of clients at station at time 0.
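For concreteness, these parameters can be collected in a small data structure. The following sketch uses conventional names of our own choosing (`K` for concurrency levels, `mu` for service rates, `P` for routing probabilities, `x0` for the initial condition); they are illustrative and not the paper's notation:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class ClosedQN:
    """Container for the closed-QN parameters listed above (names assumed)."""
    K: np.ndarray    # concurrency level of each station
    mu: np.ndarray   # mean service rate of each station
    P: np.ndarray    # routing probability matrix (row-stochastic)
    x0: np.ndarray   # initial queue length at each station

    @property
    def n_clients(self) -> int:
        # in a closed network, the total population is conserved
        return int(self.x0.sum())

    def validate(self) -> None:
        assert np.allclose(self.P.sum(axis=1), 1.0), "P must be row-stochastic"
        assert (self.mu > 0).all() and (self.K >= 1).all()
```

For instance, the three-station load balancer of the running example (with made-up numeric values) would set one row of `P` splitting traffic between the two compute servers and two rows routing clients back to the reference station.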
In a closed QN, the routing probability matrix is a stochastic matrix, meaning that each of its rows sums to one.
In the remainder of this section we use the QN in Fig. 1 as a running example. Depicted using the customary graphical representation, it represents a simple load-balancing system with three stations. Requests from reference station 1 are routed to two compute-server stations 2 and 3 with probabilities and , respectively. Upon service, a client returns back to station 1. An instantiation of this abstract model is discussed in Section 4. ∎
Markov chain semantics
The stochastic behavior of a QN is represented by a continuous-time Markov chain (CTMC) that tracks the probability of the QN having a given configuration of the queue lengths at each station. Informally, the CTMC is constructed as follows. A discrete CTMC state is a vector of queue lengths . At each station , if the number of clients is less than or equal to the number of servers , then they all proceed in parallel, each at rate . Otherwise, only as many clients as there are servers are processed in parallel, while the remaining clients queue for service. When a client is serviced at station , with probability it goes to station to receive further service. This can be formalized by considering the well-known model of Markov population processes, whereby the CTMC transitions are described by jump vectors and associated transition functions from a generic state .
We define the jump vectors to be the state updates due to clients moving to station upon service at , and the transition rate from state to state , where
In other words, with the jump vector we have that the number of clients at station is decreased by one, and, correspondingly, the number of clients at station is increased by one. Then, the CTMC is defined by:
In our running example, we have the jump vectors
where the first row describes the updates due to a client being assigned to each compute server and the second row defines the client returning to the load balancer after service. For completeness we give the corresponding transitions:
It is well known that a CTMC is completely characterized by the transitions (1) together with the initial condition . This formulation in terms of jump vectors allows for the efficient stochastic simulation of CTMCs ; indeed, we will use this technique to generate sample paths for the evaluation of our learning method on synthetic benchmarks in Section 4. For our purposes, the main limitation of this CTMC representation is that the exact equations to analyze the probability distribution grow combinatorially with the number of clients and stations, as one needs to keep track of each possible discrete configuration of the queue lengths.
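The jump-vector formulation indeed lends itself directly to stochastic simulation. Below is a minimal sketch of Gillespie's algorithm for this CTMC, under the assumption (consistent with the construction above) that the transition moving a client from station i to station j fires at rate `P[i, j] * mu[i] * min(x[i], K[i])`; all numeric values are illustrative:

```python
import numpy as np


def gillespie(x0, K, mu, P, t_end, rng):
    """Stochastic simulation of the closed-QN CTMC (illustrative sketch)."""
    x = np.array(x0, dtype=int)
    M = len(x)
    t, times, states = 0.0, [0.0], [x.copy()]
    while t < t_end:
        served = mu * np.minimum(x, K)   # departure rate at each station
        rates = P * served[:, None]      # rate of each i -> j transition
        total = rates.sum()
        if total == 0:                   # no client in service: absorbing state
            break
        t += rng.exponential(1.0 / total)
        probs = (rates / total).ravel()
        # pick a transition (i, j) proportionally to its rate
        i, j = divmod(rng.choice(M * M, p=probs / probs.sum()), M)
        x[i] -= 1                        # one client completes service at i ...
        x[j] += 1                        # ... and joins the queue at j
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)
```

Averaging many such sample paths over a fixed time grid yields the queue-length traces used later as training data.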
2.2 Fluid Approximation
The fluid approximation of a QN consists of an ODE system whose size equals the number of stations , independently of the number of clients in the system. Informally, the ODE system can be built by considering the average impact that each transition has on the queue length at each station . This is obtained by multiplying the -th coordinate of each jump vector, , by the function associated with the corresponding transition rate . Denoting by the variables of the fluid approximation, the ODE system is given by:
The solution for each coordinate, , can be interpreted as an approximation of the average queue length at time as given by the CTMC semantics . The theorems in  provide a result of asymptotic exactness of the fluid approximation, in the sense that the ODE solution and the expectation of the stochastic process become indistinguishable when the number of clients and servers is large enough.
Using (1), the equations can be written as follows:
where we have singled out the rates due to self loops .
The fluid approximation for the load balancer is:
Based on the solution to Eq. 3, which directly provides queue-length estimates, one can derive other important performance metrics such as throughput, utilization, and response time. See, for instance [57, 56] for a study of these results in a process algebra , and [58, 54, 55] for applications to layered queueing networks .
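The fluid ODE can be integrated numerically with a simple forward-Euler scheme. The sketch below assumes the standard mean-field form for this class of QNs, in which the completion rate of station i is `mu[i] * min(x[i], K[i])` and each station loses its own completions while receiving those of the others, weighted by the routing probabilities:

```python
import numpy as np


def fluid_step(x, K, mu, P, dt):
    """One forward-Euler step of the fluid approximation (sketch)."""
    served = mu * np.minimum(x, K)          # completion rate per station
    return x + dt * (P.T @ served - served)  # inflow minus outflow


def fluid_trajectory(x0, K, mu, P, dt, steps):
    """Deterministic estimate of the average queue lengths over time."""
    xs = [np.array(x0, dtype=float)]
    for _ in range(steps):
        xs.append(fluid_step(xs[-1], K, mu, P, dt))
    return np.array(xs)
```

Because the routing matrix is row-stochastic, each step conserves the total population, mirroring the closed-network property of the CTMC.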
In the remainder of this paper, we shall focus on QNs that do not have self loops, i.e., where a client served at a queue cannot immediately re-enter the same queue. This is because we can show that, in the fluid approximation, the self-loop probabilities can be chosen freely as long as the service rates and the remaining routing probabilities are adjusted accordingly. More formally, we can prove the following theorem.
For each , stochastic matrix and where (3) holds, there exist and such that for each :
Available in Appendix A. ∎
Thus, under the fluid approximation, for each network with self loops there is another one without them that cannot be distinguished from it. To identify a specific network among these, the self-loop values must be known.
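This equivalence can be checked numerically. One standard construction (our assumption, not necessarily the one used in Appendix A) scales each service rate by one minus its self-loop probability and renormalizes the off-diagonal routing probabilities; the fluid dynamics are then unchanged. A small sketch with made-up parameters:

```python
import numpy as np


def fluid(x0, K, mu, P, dt, steps):
    # forward-Euler integration of the fluid approximation
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        served = mu * np.minimum(x, K)
        x = x + dt * (P.T @ served - served)
    return x


# two-station network WITH a self loop at station 0 (illustrative numbers)
K = np.array([4, 4])
mu = np.array([1.0, 2.0])
P = np.array([[0.5, 0.5],
              [1.0, 0.0]])

# equivalent network WITHOUT self loops:
#   mu'_i = mu_i * (1 - p_ii),   p'_ij = p_ij / (1 - p_ii) for i != j
mu2 = mu * (1.0 - np.diag(P))
P2 = P / (1.0 - np.diag(P))[:, None]
np.fill_diagonal(P2, 0.0)

x_with = fluid([10, 0], K, mu, P, 0.01, 1000)
x_without = fluid([10, 0], K, mu2, P2, 0.01, 1000)
assert np.allclose(x_with, x_without)   # indistinguishable fluid dynamics
```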
3 Learning methodology
As apparent in both Equations (1) and (3), a QN features routing probabilities and service demands as multiplicative factors in the defining dynamical equations. If we wish to learn a QN by assuming that both quantities are unknown, we are faced with a nonlinear (i.e., polynomial) optimization problem. Here we propose an RNN in order to estimate these parameters. We develop an RNN architecture which encodes the QN dynamics in an interpretable fashion, i.e., by associating the weights of the RNN with QN parameters such as concurrency levels, routing probabilities, and service rates.
3.1 ODE Discretization
We first obtain a time-discrete representation of the fluid approximation such that each time step is associated with a layer of the RNN. In matrix notation, for an arbitrary QN the fluid approximation is given by:
where is the -dimensional vector of queue lengths at time . We consider a finite-step approximation of the above ODE for a small , obtaining:
Finally, this can be rewritten as
where , is the identity matrix of appropriate dimension, and is the operator where if , then .
3.2 RNN Encoding
The discretization (4) of the fluid approximation of the QN admits a direct encoding as an RNN. It consists of an -dimensional input layer that corresponds to the initial condition of the QN. The RNN has cells, with the -th cell computing the estimate of the queue length at time , denoted by (see Fig. 2). That is, the -th cell computes the quantity , where, according to (4), estimates as .
With this setup, we have to learn the matrix (made of weights, since the diagonal is empty) and the vector (made of weights).
The main goal of this methodology is to learn the actual parameters of the network. Therefore, we enforce some feasibility constraints: we require that the rows of the routing matrix sum to one (such that it is a stochastic matrix), the absence of self loops, and non-negative weights (such that the speed of the stations is non-negative). Non-negativity of the weights is enforced in the framework by clamping the candidate values to a non-negative range; stochasticity of the routing matrix is guaranteed by dividing each weight by the sum of the weights in the corresponding row; the absence of self loops is achieved by keeping the diagonal constant. This approach places our work in the explainable machine learning research area , and it allows us to link each learned parameter with its role in the system. This link lets us predict the behavior of the system under new conditions (what-if analysis). In contrast, a traditional approach to neural networks would impose neither a model nor constraints on the parameters, hence yielding a model that cannot be clearly interpreted. Indeed, without a direct association between parameters and physical quantities, we cannot study the system under new conditions without learning a new model.
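The feasibility constraints on the weights can be sketched as a projection applied to the unconstrained candidate values; the function name and the NumPy formulation are our own, illustrating the clamping, diagonal zeroing, and row normalization described above:

```python
import numpy as np


def project_weights(W_raw, s_raw):
    """Map unconstrained candidate weights to feasible QN parameters (sketch)."""
    W = np.clip(W_raw, 0.0, None)             # non-negative weights
    np.fill_diagonal(W, 0.0)                  # no self loops
    row_sums = W.sum(axis=1, keepdims=True)
    # row-stochastic routing matrix (all-zero rows are left as zeros)
    P = np.divide(W, row_sums, out=np.zeros_like(W), where=row_sums > 0)
    mu = np.clip(s_raw, 0.0, None)            # non-negative service rates
    return P, mu
```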
The RNN encoding for the -th cell (i.e., the queue length transient evolution at time ) of our running example is:
3.3 Input data
The RNN is trained over a set of traces. Each trace is made of vectors, indicated as . The -th component of each vector represents a sample of the queue length of each station at time . Since, as discussed, the fluid approximation can be interpreted as an estimator of the average queue lengths, each trace used in the learning process consists of measurements averaged over a number of independent executions started with the same initial condition; different traces rely on different initial conditions to exercise distinct behaviors of the system.
3.4 Learning function
The learning error function, denoted by , aims to minimize the difference between the queue lengths estimated by the RNN, , and the measurements . It is defined as follows:
where indicates the L1 norm. Essentially, it is a maximum relative error: since we are studying closed QNs with a fixed number of circulating clients, the quantity intuitively measures the proportion of clients (relative to their total number ) that are "misplaced" (i.e., allocated to a different station) at each time step. Since a misplaced client is counted twice (once where it is missing from a queue and once where it is extra in another queue), we divide the norm by 2. The overall error then takes the maximum of such misplacements across all time steps.
Let us consider our running example by fixing ground-truth parameters as follows. During the learning phase, we studied the system with and predicted the behavior with , while we kept and unchanged at
Using the experimental setup that will be discussed in the next section, we generated the training dataset by collecting 50 traces, each with a different randomly generated initial population vector. Each trace was the average of 500 independent simulations recording the transient evolution of the queue lengths. Figure 3 compares the queue lengths of the RNN-learned QN and the ground-truth one, showing very good accuracy on an instance of the training set (Figure 2(a)) as well as high predictive power under unseen initial populations and concurrency levels, which cause a bottleneck shift and considerably longer transient dynamics (Figure 2(b)).
4 Numerical Evaluation
In this section we evaluate the effectiveness of the proposed approach by considering both synthetic benchmarks and a real case study. For all our tests, the RNNs were implemented using the Keras framework  with the TensorFlow backend . Learning was performed on a machine running the 4.15.0-55-generic Linux kernel, equipped with an Intel(R) Xeon(R) CPU E7-4830 v4 at 2.00GHz and 500 GB of RAM.
4.1 Synthetic case studies
For our synthetic tests we considered randomly generated networks of size and . For each case, we generated 5 QNs by uniformly sampling at random the entries of the routing probability matrices, the service rates in the interval , and the concurrency levels in the interval . For the training of each QN, we generated 100 traces, each being the average over 500 independent stochastic simulations (generated using Gillespie’s algorithm ). Each trace exercised the model with a distinct initial population vector such that the number of clients at each station was drawn uniformly at random from ; as a result, the total number of clients in the network varies across traces. For each network, learning was performed by equally splitting the 100 traces for training and validation, iterating Adam  with learning rate equal to , until the error computed on the validation set did not improve by at least in the last 50 iterations. On average, the learning took 74 minutes and 86 minutes for the cases and , respectively.
Two important parameters are the length of the trace, i.e., the time horizon of the stochastic simulations, and the choice of the discretization interval ; these are related to the number of cells in the RNN by . Longer time horizons lead to larger simulation (hence, training) runtimes, while traces that are too short might not expose the full dynamics of the system. Further, following basic facts about ODE discretization , the interval should be chosen small enough that no important dynamics is lost between two successive time steps; thus, longer time horizons might need more time steps, hence more cells in the RNN. It is worth remarking that these considerations are model-specific: the choice of such hyper-parameters must be made carefully depending on the specific QN under study.
For the synthetic case studies, we set and , hence .
We evaluate the predictive power of the learned QNs by performing two distinct “what-if” analyses under unseen configurations, by changing populations of clients and the concurrency levels of the stations, respectively.
What-if analysis over client population
We tested each of the randomly generated QNs with 100 new initial population vectors that were not used in the learning phase. We compared the averages (over 500 stochastic simulations) of the ground-truth queue-length dynamics with those produced by the RNN-learned QN under those unseen initial conditions.
Figure 3(a) shows a scatter plot of the prediction error with respect to the total number of clients circulating in the system, reporting errors less than in all cases. The box plots in Figure 3(b) show that there is no statistically significant difference between the errors for the different-sized models. Figure 5 compares the predicted and ground-truth queue lengths for the instance with the maximum prediction error, showing a very good generalizing power for the queue-length dynamics at all stations.
What-if over concurrency levels.
To validate the predictive power under varying concurrency levels, for each generated QN we found the station with the highest ratio between its steady-state queue length and its number of servers (the bottleneck), and added servers to this station in steps of 20 until it was no longer the bottleneck. Then we compared the dynamics of the ground-truth model (i.e., simulated with the original parameters but with the new concurrency levels) against those obtained by simulating the learned model with the same new concurrency levels. We used the notion of prediction error shown in Equation (5).
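The bottleneck-selection heuristic just described can be sketched in a few lines (the function name is our own):

```python
import numpy as np


def find_bottleneck(x_steady, K):
    """Index of the station with the highest queue-length-to-servers ratio."""
    ratios = np.asarray(x_steady, dtype=float) / np.asarray(K, dtype=float)
    return int(np.argmax(ratios))
```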
Figure 5(a) shows the results of this what-if study, reporting a prediction error less than across all instances. Also in this case, there is no statistically significant difference in the error statistics depending on the network size (see Figure 5(b)). Figure 7 plots the comparison of the queue-length dynamics of the what-if instance (i.e., with an unseen server concurrency level) that reported the maximum prediction error (i.e., ) against the original ones (i.e., prediction error of ). We can appreciate that the unseen concurrency levels do change the QN behavior dramatically, effectively switching the bottleneck from station 3 to station 2.
This result does support the combination of machine learning and white-box performance models by showing that, once learned, the QN can be used for evaluating the behavior of the model under execution scenarios for which the QN has not been trained.
4.2 Real case study
The benchmark used in this evaluation is based on an in-house web application that serves user requests with an input-dependent load. We deployed the target application as a NodeJs  load-balancing system with three replicas. Figure 8 (left) depicts the system architecture. Component W represents the reference station, where clients enter the system by issuing requests to the load balancer LB, which redistributes them uniformly across the web servers. In the real system, such uniform assignment is achieved by assigning equal weights to the target nodes. Components C1, C2, and C3 represent the three web-server instances devoted to the actual processing of user requests (e.g., producing an HTML page). Each node in Figure 8 is annotated with its concurrency level (i.e., the number of available processes), which we considered a fixed parameter.
Specifically, we implemented W as a multi-threaded Python program. Each thread runs an independent concurrent user (i.e., one of the processes) that iteratively accesses the system, sleeping for an exponentially distributed delay between subsequent requests; LB is a single-threaded NodeJs web server which acts as a randomized load balancer. Finally, C1, C2, and C3 are multi-threaded NodeJs Clusters.
Similarly to , we collected the queue length traces used as input of the learning process (see Section 3) by parsing the access logs generated by each component of the system. However, other monitoring solutions could be used, based for instance on recording the TCP backlog . With this set-up, we were able to sample data with a measurement step s, which turned out to be sufficient for observing the transient dynamics of each component without altering the application behavior. The replication package for this evaluation is publicly available at https://zenodo.org/record/3679251.
Model Learning: We built the training dataset as a collection of queue length traces produced by the target application under 50 different initial population vectors where each station had a number of clients drawn uniformly at random between 0 and 30. For each such initial population vector the trace consisted of the average queue-length dynamics over 500 independent executions.
The target model of the learning process is reported in the right side of Figure 8. In particular, components C1, C2, C3 are modeled by queuing stations , , , while both the workload generator W and the load balancer LB are abstracted by the same station , since the delay introduced by LB is negligible with respect to the other components of the network. All the parameters of the resulting QN were considered parameters to be learned by the RNN.
Similarly to the synthetic case, the collected traces were split into two halves for training and validation, respectively. We used Adam  as the learning algorithm with learning rate equal to and iterated until the error computed on the validation set did not improve at least in the last 50 iterations. With this, the system parameters were learned in 27 minutes on average, with a validation error of 3.89%.
What-if analysis: In the following we evaluate the predictive power of the RNN-learned QN under an unseen number of clients, concurrency levels, and routing probabilities. Differently from the synthetic case, here we emulate a concrete usage scenario in which an initially hidden performance bottleneck is discovered and removed relying only on the insights given by the learned model. To do so, we exercised both the QN model and the real system under a number of clients increased by a factor with respect to an initial population of 26 circulating clients (here each prediction was averaged over 300 simulation runs instead of 500, since fewer runs are needed for the what-if analysis). Figure 9 reports the numerical results of this evaluation, showing a trend that induces a saturation condition in station . Overall, the prediction error of the RNN is less than 10% across all instances.
In Figure 10 we report two different strategies that can be used to remove the bottleneck: we re-evaluated both the learned model and the real system starting from the case (see Figure 8(c)), varying either the number of servers or the load-balancing weights/routing probabilities. Figure 9(a) shows the dynamics of the system when the number of servers of is increased from 5 to 8, while Figure 9(b) reports the what-if scenario in which we change the load-distribution strategy from a uniform probability distribution to one where stations , , and are targeted with probabilities , and , respectively. Consistently with intuition, both what-if instances show a lighter pressure (i.e., a smaller queue length) at . Furthermore, both situations are well predicted by the RNN, with an error of ca. 6% with respect to the real system dynamics.
5 Related work
In this section we discuss work related to the following lines of research: performance prediction from programs, generation of performance models from programs, and estimation of parameters in QNs.
5.1 Program-driven Performance Prediction
A line of work focuses on deriving performance predictions from code analysis. PerfPlotter uses program analysis (specifically, probabilistic symbolic execution ) to generate a performance distribution, i.e., the probability distribution function of a performance metric such as response time . Thus, the result of the overall analysis is a quantitative model, but it is not predictive. Furthermore, the approach applies only to single-threaded applications, hence important performance-influencing factors such as thread contention cannot be captured.
Other related approaches predict performance using black-box methods. These are particularly relevant for variability-intensive systems, where they relate the configuration settings of a software system with their performance impact [48, 47]. Machine-learning techniques have also been used in this setting to build the predictive model [24, 47, 59, 30]. For instance, in  the system model is assumed to be a linear combination of binary variables (e.g., tree-structured models), each denoting the presence or absence of a feature. The performance model is then computed by linear regression over pairs of configurations and measured performance indices. The influence of possible feature interactions is embedded in the model by introducing fresh variables, so as to preserve its linear structure. As discussed in , these black-box approaches can be seen as complementary to ours, which provides a reliable mathematical abstraction whereby performance can be explicitly associated with software components, thus increasing the explanatory power of the prediction.
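The linear-regression scheme just described can be sketched concisely: encode each configuration as a vector of binary feature indicators, add fresh product variables for pairwise interactions, and solve by least squares. This is our own illustrative sketch of the general idea, not the code of any of the cited tools.

```python
import numpy as np
from itertools import combinations

def fit_perf_model(configs, perf):
    """Least-squares performance-influence model: performance is modeled
    as a linear combination of binary feature indicators plus pairwise
    interaction terms (fresh product variables preserving linearity)."""
    configs = np.asarray(configs, dtype=float)
    n, k = configs.shape
    pairs = list(combinations(range(k), 2))
    inter = (np.column_stack([configs[:, i] * configs[:, j] for i, j in pairs])
             if pairs else np.empty((n, 0)))
    # design matrix: intercept | features | pairwise interactions
    X = np.column_stack([np.ones(n), configs, inter])
    coef, *_ = np.linalg.lstsq(X, np.asarray(perf, dtype=float), rcond=None)
    def predict(c):
        c = np.asarray(c, dtype=float)
        row = np.concatenate(([1.0], c, [c[i] * c[j] for i, j in pairs]))
        return float(row @ coef)
    return predict
```

Higher-order interactions are handled analogously by adding further product columns, at the cost of a larger design matrix.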
5.2 Program-driven Generation of Performance Models
While model-driven approaches to software performance have been researched quite intensively , program-driven generation of performance models has been less explored and has been concerned with specific kinds of applications. The early approach by Hrischuk et al. generates software performance models (specifically, layered queuing networks ) from a class of distributed applications whose components communicate solely by remote procedure calls . Brosig et al. derive a component-based performance model from applications running on the Java EE platform [11, 10]. Tarvo and Reiss develop a technique for extracting discrete-event simulation models from a class of multi-threaded programs covering task-oriented applications, in which the business logic consists of assigning a given workload (i.e., a task) to a number of worker threads from a pool . Their use of a simulation model rather than an analytical one is justified by the difficulty of building the latter, especially to capture such diverse performance-related phenomena as queuing effects, inter-thread synchronization, and hardware contention. This is precisely the limitation that we aim to overcome with our approach, by building the analytical model automatically from measurements.
5.3 Estimation of Service Demands in Queuing Networks
Most of the literature on the estimation of QN parameters focuses on service demands. In particular, it considers the system at steady state, i.e., after a sufficiently long time has passed that its behavior no longer depends on the initial conditions . Mathematically, the steady-state assumption makes it possible to leverage a wealth of analytical results for QNs . On these results are based several estimation methods, using techniques such as linear regression , quadratic programming , non-linear optimization [40, 5], clustering regression , independent component analysis , pattern matching , Gibbs sampling [60, 51], and maximum likelihood .
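As a concrete instance of the regression-based estimators mentioned above, the steady-state utilization law U_i = Σ_c X_c · D_{i,c} lets one recover per-class service demands at a resource by least squares from measured throughputs and utilizations. This is a textbook sketch of the general idea, not the implementation of any cited method.

```python
import numpy as np

def estimate_demands(throughputs, utilizations):
    """Utilization-law regression for service demands at one resource:
    each utilization sample satisfies U = sum_c X_c * D_c at steady state,
    so the demand vector D is the least-squares solution of the measured
    per-class throughputs against the utilization samples."""
    X = np.asarray(throughputs, dtype=float)    # samples x classes
    U = np.asarray(utilizations, dtype=float)   # one utilization per sample
    D, *_ = np.linalg.lstsq(X, U, rcond=None)
    return D                                    # per-class service demands
```

Note that this estimator presupposes exactly the measurements (utilizations, steady-state operation) whose availability is questioned below.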
The main advancement of our approach with respect to the state of the art is the ability to learn the whole model, i.e., both the service demands and the QN topology (via the routing probabilities). In addition, since it uses an ODE representation, it makes no stationarity assumptions; in fact, we train our RNN using traces that include the transient dynamics. Indeed, our approach uses the same QN model as the service-demand estimation method recently proposed in , which is also based on fluid approximation.
Another difference with practical implications regards the type of data used for the estimation. Approaches such as [39, 16, 46, 31, 17] require measurements of quantities that may be difficult to obtain. For example, utilization metrics may not be available to the user when there is no complete information about the underlying hardware stack, for instance in a virtualized system running on a Platform-as-a-Service environment. By contrast, measuring only queue-length samples has been regarded as more advantageous [61, 27], since this information can often be obtained from application logs or by means of operating system calls.
6 Conclusion
We presented a novel methodology for learning queuing network (QN) models of software systems. The main novelty lies in encoding the QN as an explainable recurrent neural network whose inputs and weights are associated with standard QN inputs and parameters. We reported promising results on synthetic examples and on a real case study, where the maximum discrepancy between the dynamics predicted by the learned models and those computed through the ground truth is less than % when the system is evaluated under unseen configurations not included in the training set. We plan to extend our technique to capture more complex models and systems, such as mixed multi-class and layered QNs, and to explore other learning methodologies such as neural ODEs  and residual networks . Moreover, to improve the accuracy of the learned models and to reduce the simulation time, we plan to investigate active learning techniques that enable an informed sampling of the initial conditions .
Acknowledgements: This work has been partially supported by the PRIN project “SEDUCE” no. 2017TWRCNB.
Appendix A Appendix
Proof of Theorem 1.
We construct and as follows:
We prove that, for each we have and . Then (a) follows by substitution.
We now consider the case .
We now consider the case . We remark that, in this case, if .
The point (b) is true by definition of . Statement (c) can be shown as follows. When :
where the last statement follows because , . When :
Statement (d) can be shown observing that , (since ) and . Statement (e) can be shown observing that , and . ∎
- Abadi, M., Agarwal, A., Barham, P., et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
- Aleti, A., Buhnova, B., Grunske, L., Koziolek, A., and Meedeniya, I. Software architecture optimization methods: A systematic literature review. IEEE Trans. Software Eng. 39, 5 (2013), 658–683.
- Arcelli, D., Cortellessa, V., Filieri, A., and Leva, A. Control theory for model-based performance-driven software adaptation. In QoSA (2015), pp. 11–20.
- Ascher, U. M., and Petzold, L. R. Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations. SIAM, 1998.
- Awad, M., and Menasce, D. A. Deriving parameters for open and closed QN models of operational systems through black box optimization. In ICPE (2017).
- Balsamo, S., Di Marco, A., Inverardi, P., and Simeoni, M. Model-based performance prediction in software development: A survey. IEEE Trans. Software Eng. 30, 5 (2004), 295–310.
- Balsamo, S., and Marzolla, M. Performance evaluation of UML software architectures with multiclass queueing network models. In WOSP (2005).
- Bolch, G., Greiner, S., de Meer, H., and Trivedi, K. Queueing networks and Markov chains: modeling and performance evaluation with computer science applications. Wiley, 2005.
- Bortolussi, L., Hillston, J., Latella, D., and Massink, M. Continuous approximation of collective system behaviour: A tutorial. Performance Evaluation 70, 5 (2013), 317–349.
- Brosig, F., Huber, N., and Kounev, S. Automated extraction of architecture-level performance models of distributed component-based systems. In ASE (2011).
- Brosig, F., Kounev, S., and Krogmann, K. Automated extraction of Palladio component models from running enterprise Java applications. In VALUETOOLS (2009).
- Chen, B., Liu, Y., and Le, W. Generating performance distributions via probabilistic symbolic execution. In ICSE (2016).
- Chen, T. Q., Rubanova, Y., Bettencourt, J., and Duvenaud, D. K. Neural ordinary differential equations. In Advances in neural information processing systems (2018), pp. 6571–6583.
- Chollet, F., et al. Keras. https://keras.io, 2015.
- Cortellessa, V., Marco, A. D., and Inverardi, P. Model-Based Software Performance Analysis. Springer, 2011.
- Cremonesi, P., Dhyani, K., and Sansottera, A. Service time estimation with a refinement enhanced hybrid clustering algorithm. In International Conference on Analytical and Stochastic Modeling Techniques and Applications (2010), Springer, pp. 291–305.
- Cremonesi, P., and Sansottera, A. Indirect estimation of service demands in the presence of structural changes. Performance Evaluation 73 (2014), 18–40.
- Di Marco, A., and Inverardi, P. Compositional generation of software architecture performance QN models. In WICSA 2004 (June 2004), pp. 37–46.
- Franks, G., Al-Omari, T., Woodside, M., Das, O., and Derisavi, S. Enhanced modeling and solution of layered queueing networks. IEEE Trans. Software Eng. 35, 2 (2009), 148–161.
- Garcia, J., Krka, I., Mattmann, C., and Medvidovic, N. Obtaining ground-truth software architectures. In ICSE (2013), pp. 901–910.
- Geldenhuys, J., Dwyer, M. B., and Visser, W. Probabilistic symbolic execution. In ISSTA (2012), pp. 166–176.
- Gillespie, D. T. Stochastic simulation of chemical kinetics. Annual Review of Physical Chemistry 58, 1 (2007), 35–55.
- Graham, S. L., Kessler, P. B., and Mckusick, M. K. Gprof: A call graph execution profiler. In Proceedings of the 1982 SIGPLAN Symposium on Compiler Construction (SIGPLAN’82) (1982), pp. 120–126.
- Guo, J., Czarnecki, K., Apel, S., Siegmund, N., and Wasowski, A. Variability-aware performance prediction: A statistical learning approach. In ASE (2013).
- Hillston, J. A Compositional Approach to Performance Modelling. Cambridge University Press, 1996.
- Hrischuk, C., Rolia, J., and Woodside, C. M. Automatic generation of a software performance model using an object-oriented prototype. In MASCOTS (1995).
- Incerto, E., Napolitano, A., and Tribastone, M. Moving horizon estimation of service demands in queuing networks. In MASCOTS (2018).
- Incerto, E., Tribastone, M., and Trubiani, C. Symbolic performance adaptation. In Proceedings of the 11th International Symposium on Software Engineering for Adaptive and Self-managing Systems (SEAMS) (2016).
- Incerto, E., Tribastone, M., and Trubiani, C. Software performance self-adaptation through efficient model predictive control. In ASE (2017).
- Jamshidi, P., Velez, M., Kästner, C., and Siegmund, N. Learning to sample: exploiting similarities across environments to learn performance models for configurable systems. In ESEC/FSE (2018).
- Kalbasi, A., Krishnamurthy, D., Rolia, J., and Richter, M. Mode: Mix driven on-line resource demand estimation. In Proceedings of the 7th International Conference on Network and Services Management (2011), International Federation for Information Processing, pp. 1–9.
- Kaltenecker, C., Grebhahn, A., Siegmund, N., Guo, J., and Apel, S. Distance-based sampling of software configuration spaces. In Proceedings of the 41st International Conference on Software Engineering (2019), IEEE Press, pp. 1084–1094.
- Kingma, D. P., and Ba, J. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings (2015), Y. Bengio and Y. LeCun, Eds.
- Kowal, M., Schaefer, I., and Tribastone, M. Family-based performance analysis of variant-rich software systems. In Fundamental Approaches to Software Engineering (FASE) (2014), pp. 94–108.
- Kowal, M., Tschaikowski, M., Tribastone, M., and Schaefer, I. Scaling size and parameter spaces in variability-aware software performance models. In ASE (2015), pp. 407–417.
- Koziolek, H. Performance evaluation of component-based software systems: A survey. Performance Evaluation 67, 8 (2010), 634–658.
- Kurtz, T. G. Solutions of ordinary differential equations as limits of pure jump Markov processes. In J. Appl. Prob. (1970), vol. 7, pp. 49–58.
- Litoiu, M. Panel: Ai and performance. In International Conference on Performance Engineering (ICPE) (2019).
- Liu, Z., Wynter, L., Xia, C. H., and Zhang, F. Parameter inference of queueing models for IT systems using end-to-end measurements. Performance Evaluation 63, 1 (2006), 36–60.
- Menasce, D. A. Computing missing service demand parameters for performance models. In Int. CMG Conference (2008), pp. 241–248.
- Mitchell, T. M. Machine learning. McGraw Hill series in computer science. McGraw-Hill, 1997.
- Object Management Group. UML Profile for Modeling and Analysis of Real-Time and Embedded Systems (MARTE). Beta 1. OMG, 2007. OMG document number ptc/07-08-04.
- Pacifici, G., Segmuller, W., Spreitzer, M., and Tantawi, A. CPU demand for web serving: Measurement analysis and dynamic estimation. Performance Evaluation 65, 6-7 (2008), 531–553.
- Pearlmutter, B. A. Learning state space trajectories in recurrent neural networks. Neural Computation 1, 2 (1989), 263–269.
- Samek, W., Wiegand, T., and Müller, K. Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. CoRR abs/1708.08296 (2017).
- Sharma, A. B., Bhagwan, R., Choudhury, M., Golubchik, L., Govindan, R., and Voelker, G. M. Automatic request categorization in internet services. ACM SIGMETRICS Performance Evaluation Review 36, 2 (2008), 16–25.
- Siegmund, N., Grebhahn, A., Apel, S., and Kästner, C. Performance-influence models for highly configurable systems. In ESEC/FSE (2015).
- Siegmund, N., Kolesnikov, S. S., Kästner, C., Apel, S., Batory, D., Rosenmüller, M., and Saake, G. Predicting performance via automated feature-interaction detection. In ICSE (2012), pp. 167–177.
- Spinner, S., Casale, G., Brosig, F., and Kounev, S. Evaluating approaches to resource demand estimation. Performance Evaluation 92 (2015), 51–71.
- Stewart, W. J. Performance Modelling and Markov Chains. In SFM (2007), pp. 1–33.
- Sutton, C., and Jordan, M. I. Bayesian inference for queueing networks and modeling of Internet services. The Annals of Applied Statistics (2011), 254–282.
- Tarvo, A., and Reiss, S. P. Automated analysis of multithreaded programs for performance modeling. In ASE (2014).
- Tribastone, M. Relating layered queueing networks and process algebra models. In WOSP/SIPEW (2010).
- Tribastone, M. A fluid model for layered queueing networks. IEEE Transactions on Software Engineering 39, 6 (2013), 744–756.
- Tribastone, M., Ding, J., Gilmore, S., and Hillston, J. Fluid rewards for a stochastic process algebra. IEEE Trans. Software Eng. 38 (2012), 861–874.
- Tribastone, M., Gilmore, S., and Hillston, J. Scalable differential analysis of process algebra models. IEEE Transactions on Software Engineering 38, 1 (2012), 205–219.
- Tribastone, M., Mayer, P., and Wirsing, M. Performance prediction of service-oriented systems with layered queueing networks. In Leveraging Applications of Formal Methods, Verification, and Validation (2010), T. Margaria and B. Steffen, Eds., vol. 6416 of Lecture Notes in Computer Science, Springer, pp. 51–65.
- Valov, P., Petkovich, J., Guo, J., Fischmeister, S., and Czarnecki, K. Transferring performance prediction models across different hardware platforms. In ICPE (2017).
- Wang, W., and Casale, G. Bayesian service demand estimation using Gibbs sampling. In MASCOTS (2013).
- Wang, W., Casale, G., Kattepur, A., and Nambiar, M. Maximum likelihood estimation of closed queueing network demands from queue length data. In ICPE (2016).
- Woodside, M., Franks, G., and Petriu, D. C. The future of software performance engineering. In Proceedings of the Future of Software Engineering (FOSE) (2007), pp. 171–187.
- Zagoruyko, S., and Komodakis, N. Wide residual networks. arXiv preprint arXiv:1605.07146 (2016).
- Zaparanuks, D., and Hauswirth, M. Algorithmic profiling. In PLDI (2012), pp. 67–76.