Big Data Analytics for QoS Prediction Through Probabilistic Model Checking
Abstract
As competitiveness increases, the ability to guarantee the QoS of delivered services is key to business success. It is thus of paramount importance to continuously monitor the workflow providing a service and to timely recognize breaches in the agreed QoS level. The ideal condition would be the possibility to anticipate, and thus predict, a breach and to operate to avoid it, or at least to mitigate its effects. In this paper we propose a model-checking-based approach to predict the QoS of a formally described process. Continuous model checking is enabled by the use of a parameterized model of the monitored system, where the actual values of the parameters are continuously evaluated and updated by means of big data tools. The paper also describes a prototype implementation of the approach and shows its usage in a case study.
Keywords: Big Data Analytics, QoS Prediction, Model Checking, SLA compliance monitoring
I Introduction
The service-oriented computing paradigm has been changing the way software-based services are created and developed. This paradigm is the foundation of Utility Computing, in which both hardware resources and software functionalities are made available according to the as-a-Service (aaS) model [1].
This model allows developing new services by integrating and reusing existing ones, e.g. third-party or legacy systems.
The result of such an integration is services being provided by extremely complex workflows [2, 34]. Multiple parties are thus accountable for the successful delivery of the service. The terms of service promised to the end user are described by means of an agreement normally named Service Level Agreement (SLA) [3, 4]. The terms of service regulating the relationship among parties collaborating to deliver the final service are named Operational Level Agreement (OLA). Both SLAs and OLAs ultimately describe a QoS level to be matched while providing the service.
Independently of the origin of the QoS terms (SLA or OLA), the ability to continuously monitor the workflow providing the service and to timely recognize breaches in the agreed QoS levels is of paramount importance [5, 30]. The ideal condition would be the possibility to anticipate, and thus predict, a breach and to operate to avoid it, or at least to mitigate its effects.
In this paper we propose a new QoS prediction approach which combines runtime monitoring with a model-based analysis method, namely probabilistic model checking.
In our approach we use probabilistic model checking to analyze a probabilistic model of the monitored workflow. To limit the state explosion problem, characteristic of model-checking analysis, we assume that the analysed system is described as a parametric model. At every moment of the analysis the actual values of the parameters are retrieved, via runtime monitoring of the real system, and used to instantiate the parametric model. The instantiated model is processed by the model checker, which estimates the probability that in the near future the system will reach a status corresponding to an SLA violation.
Since the amount of data retrieved during system monitoring can grow very quickly [6], we use big data analytics solutions to guarantee the real-time evaluation of the model parameters.
To validate the proposed approach we have developed a prototype of the QoS prediction framework and demonstrated its usage on a case study in the field of Smart Grids [31, 32].
The paper is organized as follows. Section II describes related work on QoS prediction. In Section III we present the overall architecture enabling our QoS prediction approach. Section IV illustrates the case study to which we applied the proposed methodology, while Section IV-A describes the prototype developed to validate our approach. Section V closes the paper with conclusions and future work.
II Related Work
QoS prediction is surveyed in [22, 23, 29, 25]. A prediction performance model is treated in [22], where the authors exploit the Markovian Arrival Process (MAP) and a MAP/MAP/1 queuing model as a means to predict the performance of servers deployed in a Cloud infrastructure. Although in our Smart Grid case study we use an M/M/1 queuing model, our QoS prediction methodology does not rely on a specific model, which could therefore be adapted as needed. In [23] a prediction-based resource measurement is proposed which uses Neural Networks and Linear Regression as techniques to forecast future resource demands. A regression model is also used in [29] to produce numerical estimations of Service Level Objectives (SLOs) so as to predict SLA violations at runtime. Similarly, in [25] the PREvent framework is presented, a system which uses a regression classifier to predict violations, but no details are given about the performance of the method. In [18, 26, 19] the QoS requirements are controlled by solving a QoS optimization problem at runtime. In particular, in [18] a linear programming optimization problem is adopted to define a runtime adaptation methodology for meeting QoS requirements of service-oriented systems, whereas a multi-objective optimization problem is proposed in [26, 19] to develop QoS-adaptive service-based systems that guarantee predefined QoS attributes.
A collaborative method is proposed in [27] in which the performance of cloud components is predicted based on usage experiences. Although this method could be appropriate for QoS indicators from the user perspective, it is impractical in the general case where QoS indicators are business-oriented.
QoS prediction by means of model checking is proposed in [24, 28]. Gallotti et al. in [24] propose an approach named ATOP (from Activity diagrams TO Prism models) which, from an abstract description of service compositions (an activity diagram), derives a probabilistic model to feed the PRISM tool for the evaluation phase. However, unlike our solution, this methodology is conceived for evaluating systems at design time. Similar to our work, in [28] the authors propose a two-phase method involving monitoring and prediction, with the aim of monitoring at runtime the reliability of compositional Web services which exhibit random behaviour. Although this method also takes advantage of the probabilistic model checking technique, it focuses mainly on reliability by providing a DTMC-based Markovian model. In contrast, we propose a general CTMC probabilistic model for performance indicators in which both states and transitions are parameterised, resulting in a model adaptable at runtime.
III Architectural Overview
This section describes the approach behind our solution for QoS prediction. A high-level architecture overview is represented in fig. 1.
Given a system/process to be monitored for QoS compliance with a set of SLAs and OLAs, we assume that a formalized model of the system is made available. Such a model is based on a state-transition description which is able to capture the evolution of Key Performance Indicators (KPIs) over time. Moreover, states and transitions must be expressed as parameters. The KPIs can be inferred from the SLAs and OLAs defining the expected QoS. They can thus be used to identify the conditions of violation of the expected QoS. Such conditions are represented by some final states in the state-transition model.
At runtime, the reference system is continuously monitored and the collected data are used to evaluate the actual values of the model parameters. Once the model has been populated with the estimated parameter values, it is processed by the model checking software. In our prototype we have used PRISM [8], an open-source probabilistic model checker which supports the analysis and checking of a wide number of model types. The model checker explores the states that can be reached from the current state in a fixed number of transitions (depending on the desired prediction time lapse). If one of the states representing a violation is likely to be reached with a probability higher than a fixed threshold (violation alarm threshold), then a QoS breach is predicted.
It is worth noting that the usage of a parametric model, which is continuously updated, together with the fixed time lapse used for the prediction, allows limiting the well-known state explosion problem due to the exhaustive state exploration operated by model checkers. A further optimization could be operated by pruning those branches including states reachable with a probability lower than the violation alarm threshold.
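As a rough illustration of this bounded, pruned exploration, the following sketch (all names and interfaces are hypothetical; the actual prototype delegates this work to PRISM) estimates the probability of reaching a violation state within a fixed number of transitions, discarding branches whose probability mass has become negligible with respect to the violation alarm threshold:

```python
def predict_violation(start, succ, is_violation, steps, alarm_threshold):
    """Estimate the probability of reaching a violation state within
    `steps` transitions. `succ(s)` returns (next_state, probability)
    pairs; branches whose accumulated mass is negligible compared with
    the alarm threshold are pruned as an approximation."""
    frontier = {start: 1.0}          # state -> probability mass reaching it
    violation_prob = 0.0
    for _ in range(steps):
        nxt = {}
        for state, p in frontier.items():
            for state2, p_trans in succ(state):
                q = p * p_trans
                if q < alarm_threshold * 1e-3:   # prune negligible branches
                    continue
                if is_violation(state2):
                    violation_prob += q          # mass absorbed by violations
                else:
                    nxt[state2] = nxt.get(state2, 0.0) + q
        frontier = nxt
    return violation_prob
```

The pruning cutoff is a design choice: discarding mass well below the alarm threshold cannot, by itself, flip the alarm decision, which is exactly the optimization suggested above.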
As for the parameter evaluation, continuous monitoring of a complex system may require the real-time analysis of huge amounts of data. Such a requirement can be matched by using advanced Big Data techniques and tools. In particular, in our prototype we used a Complex Event Processor (CEP) to infer the parameter values from the collected data. In a more advanced prototype the Big Data layer could also be used to support the model-checking process.
To guarantee that the automatic procedure is both efficient and consistent, two conditions must hold:

the size of the state space of the model has to be such that the model-checking analysis can be performed in a time compatible with the updating time of the QoS data of the modelled system;

the evaluation of the model parameters should always allow the representation of the critical QoS states to be monitored.
The first condition is key for obtaining a near real-time QoS prediction system. Indeed, it requires balancing the size of the QoS model at runtime by taking into account both the real-time constraint imposed by the monitored service and the time spent on model checking. A preliminary analysis during the model definition has to be conducted in order to ensure that this condition holds even when the model is fully expanded (i.e. no pruning of its state space is considered). The second condition allows verifying that, if narrowed, the model still includes the states of the real system related to critical QoS values (e.g. warning and/or violation states).
Consequently, our methodology considers the following steps:

1. Specification of the parameterised QoS stochastic model and of the QoS constraints to monitor;

2. Real-time data analysis and parameter synthesis;

3. Generation of the internal state-transition model representation;

4. Execution of probabilistic model checking to quantify the likelihood of future QoS states;

5. QoS verification.
In the first step we define a stochastic model which is suited to the kind of properties we are interested in monitoring. In this paper we show a case study, from the Smart Grid domain, modelled by means of a CTMC. Steps 2-5 are involved in an endless loop which makes our approach adaptive. In particular, the second step analyses the data received from the CEP so as to determine the parameters of the model, and computes the current KPI values. The third step generates the finite state-transition representation of the system model on which model checking is performed in the fourth step. Finally, the fifth step deals with verifying the QoS on the basis of the current KPIs and/or the quantification of future QoS states.
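One iteration of the endless loop over steps 2-5 can be sketched as follows; all interfaces here (the synthesiser, the model factory, and the checker) are illustrative stand-ins for the CEP, the model generator, and the probabilistic model checker:

```python
def prediction_cycle(synthesise, instantiate, model_check, alarm_threshold):
    """One iteration of steps 2-5 (hypothetical interfaces).

    Returns the estimated violation probability and whether an alarm
    must be raised; the caller repeats this on every monitoring period."""
    params = synthesise()                # step 2: parameters from CEP data
    model = instantiate(params)          # step 3: concrete model instance
    p_violation = model_check(model)     # step 4: probabilistic checking
    return p_violation, p_violation > alarm_threshold   # step 5: verify QoS
```

In the prototype, `model_check` would wrap an invocation of PRISM on the instantiated model, while `synthesise` would query the CEP output described in Section III.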
III-A QoS Properties Specification
In a previous work we introduced the concept of Quality Constraint (QC) [10] as a means to express constraints on KPIs. A QC is defined as a boolean condition on a single KPI. The language we used to specify QCs is an interval-based version of Linear-time Temporal Logic (LTL). In particular, in [10] we introduced two temporal operators, along and within, with the following semantics:

along P: P is true in any time instant belonging to the given interval;

within P: there exists at least one time instant in the given interval in which P is true.
Thus, along and within are, respectively, the restrictions of the LTL "globally" (G) and "eventually" (F) operators to a given interval. A QC without a temporal operator is interpreted as an expression to be verified along the entire lifetime of the monitored system, hence resulting useful for specifying safety properties.
It is worth noting that in the context of runtime monitoring we check properties against execution traces of the system, i.e. ordered sequences of past (up to now) states. Although in this way we are able to recognise a violation as soon as it happens, we have no means to evaluate whether, and when, it will happen in the future.
In this model-based approach we tackle this issue by defining Predictive Indicators (PIs) upon the monitored KPIs. A PI is a numerical indicator which statistically quantifies the probability for a KPI to be in a certain state (i.e. a range of values) at a predetermined time instant in the future. Taking advantage of probabilistic model checking, we define such PIs as probabilistic temporal formulae (in the logic suitable for the underlying model) which can be evaluated over all possible evolutions considered in the service model. Furthermore, being numerical indicators, PIs can be monitored by specifying QCs. To this purpose we have extended our QC language with the eval() operator, which accepts a temporal logic formula to be evaluated by means of a model checker tool. As a predictive quality indicator, eval() can be monitored by specifying a Quality Constraint, as we will see in the Smart Grid case study.
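A minimal sketch of how the along and within operators can be evaluated over a finite execution trace; the trace representation as (timestamp, value) pairs is an assumption for illustration, not the actual QC engine of [10]:

```python
def along(trace, t0, t1, pred):
    """'along' operator: pred holds at every sampled instant in [t0, t1].
    `trace` is a list of (timestamp, value) pairs (illustrative format)."""
    window = [v for t, v in trace if t0 <= t <= t1]
    return all(pred(v) for v in window)

def within(trace, t0, t1, pred):
    """'within' operator: pred holds at some sampled instant in [t0, t1]."""
    window = [v for t, v in trace if t0 <= t <= t1]
    return any(pred(v) for v in window)
```

These mirror the restriction of G and F to an interval: along fails on the first sample violating the predicate, within succeeds on the first sample satisfying it.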
III-B Performance Model
In this work we focus our attention on KPIs which refer to quantifiable service performance, e.g. resource utilization, number of requests served, etc. To better fit our case study, we select an M/M/1 queuing model to represent these types of indicators. The intuition is that such indicators represent resources whose usage requests arrive according to a Poisson process, whereas the resource service times follow an exponential distribution.
Let us consider a KPI as a variable whose values range in a set partitioned into three subsets with the following meaning:
Admissible Values: the set of values the KPI takes when the system is in a state which fulfils all the QCs defined on the KPI;

Critical Values: the set of limit/target values at which the system still meets the required quality but beyond which this is no longer true;

Inadmissible Values: the set of values the KPI takes when the system is in a state which does not fulfil at least one QC defined on the KPI.
We assume that the whole value set is totally ordered and that its subsets are pairwise disjoint.
Fig. 2 illustrates our general queueing model. We consider a queue as a discrete representation of the KPI value set. In particular, the set is partitioned into a sequence of disjoint intervals, each entirely contained in one of the three subsets; this preserves the semantic distinction among admissible, critical, and inadmissible values.
Thus, consider the total time elapsed from the beginning of KPI monitoring and, within it, the time window over which the KPI variations are taken into account. Given two sequential values of the KPI of interest, belonging to two (possibly different) intervals of the partition, we adapt the queueing model by interpreting:

the queue length as representing the interval in which the current KPI value lies;

the increment (resp. decrement) rate as the ratio of the sum of all increments (resp. decrements) observed up to the current time instant over the time window considered for the rate updating.
Therefore, the queue length increases by one step with the increment rate and decreases by one step with the decrement rate. An M/M/1 queue model can be described by a CTMC. In this way, by using CSL as the language to formally specify properties, we employ the probabilistic model checking technique to conduct a quantitative analysis on the KPIs by means of their queue representation.
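The quantity a time-bounded CSL reachability formula asks for on such a chain can be sketched via uniformisation on a truncated birth-death CTMC; this is an illustrative re-implementation under simplifying assumptions (finite queue, start from the empty queue), not the PRISM engine:

```python
import math

def reach_probability(lam, mu, n, violation, t):
    """Probability of reaching a state in `violation` within time t on a
    birth-death CTMC (queue truncated at length n, increment rate `lam`,
    decrement rate `mu`), with violation states made absorbing."""
    size = n + 1
    # generator matrix Q; absorbing violation states get no outgoing rates
    Q = [[0.0] * size for _ in range(size)]
    for i in range(size):
        if i in violation:
            continue
        if i < n:
            Q[i][i + 1] += lam
        if i > 0:
            Q[i][i - 1] += mu
        Q[i][i] = -sum(Q[i][j] for j in range(size) if j != i)
    q = max(-Q[i][i] for i in range(size)) or 1.0   # uniformisation rate
    P = [[Q[i][j] / q + (1.0 if i == j else 0.0) for j in range(size)]
         for i in range(size)]
    pi = [0.0] * size
    pi[0] = 1.0                       # start from the empty queue
    result = [0.0] * size
    poisson, total, k = math.exp(-q * t), 0.0, 0
    while total < 1 - 1e-9 and k < 10000:
        for j in range(size):
            result[j] += poisson * pi[j]      # accumulate Poisson-weighted pi
        total += poisson
        pi = [sum(pi[i] * P[i][j] for i in range(size)) for j in range(size)]
        k += 1
        poisson *= q * t / k
    return sum(result[j] for j in violation)
```

Making the violation states absorbing turns the transient distribution at time t into the probability of having reached a violation by t, which is the semantics of time-bounded reachability.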
IV The Smart Grid Case Study
The proposed QoS Prediction approach has been validated with respect to a Smart Grid (SG) case study.
A SG is the integration of an IT infrastructure into a traditional power grid in order to continuously exchange and process information to better control the production, consumption and distribution of electricity. For this purpose Smart Meter (SM) devices are used to measure the variations of electric parameters (e.g. voltage, power, etc.) and send such data to a computational environment which, in turn, analyses and monitors them in a real-time fashion.
In this case study, our tool performs the remote monitoring on behalf of an Energy Distributor (ED) which purchases electric power from Energy Producers (EPs) and retails it to Energy Consumers (ECs). The primary goal of the ED is to balance the purchased electric power with respect to the variations of power demand.
The SG Model.
For the sake of simplicity we have built a basic model which represents an ED, an EP (or the aggregated values of many EPs) and an EC (or the aggregated values of many ECs) as a three-queue system networked as in fig. 3.
Each queue is a discrete representation of the real-valued KPI to be modelled. The PRISM-based model we define implements the three queues with queue lengths and transition rates as parameters. Furthermore, the admissible, critical and inadmissible value sets are arranged according to the minimum and maximum thresholds of the admissible and critical value sets.
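The arrangement above amounts to mapping each real-valued KPI reading onto a queue position, i.e. onto the index of the interval containing it. A minimal sketch, with an illustrative boundary list standing in for the thresholds:

```python
import bisect

def queue_index(value, boundaries):
    """Map a real-valued KPI reading onto its queue position: the index
    of the interval delimited by the ascending `boundaries` (the boundary
    values here are illustrative stand-ins for the model thresholds)."""
    return bisect.bisect_right(boundaries, value)
```

The same boundaries can then be used to tag each index as admissible, critical, or inadmissible.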
Parameters Updating.
In the queuing model, the queue length and the transition rates are updated as follows.
Two thresholds are set on the queue edges so that if the current state goes below the first or above the second, the queue length is doubled or halved, respectively.
As for the updating of the transition rates, an Exponentially Weighted Moving Average (EWMA) is applied to the first differences of the time series under analysis. Thus, let x_1, x_2, ... be the time series; we compute the transition rate r_t as

r_t = α (x_t - x_{t-1}) + (1 - α) r_{t-1}    (1)

in which the initial value r_0 is fixed a priori. Equation (1) is used for both the increment rate and the decrement rate.
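A minimal sketch of this EWMA update on first differences, keeping separate increment and decrement rates; the smoothing factor and the zero initialisation are illustrative choices, not values from the paper:

```python
def ewma_rates(series, alpha=0.2):
    """Compute increment and decrement rates by applying an EWMA to the
    positive and negative first differences of `series` (illustrative:
    alpha and the zero initialisation are assumptions)."""
    inc = dec = 0.0
    for prev, cur in zip(series, series[1:]):
        diff = cur - prev
        inc = alpha * max(diff, 0.0) + (1 - alpha) * inc   # eq. (1), increments
        dec = alpha * max(-diff, 0.0) + (1 - alpha) * dec  # eq. (1), decrements
    return inc, dec
```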
QoS Data Extraction.
To tackle the Big Data issue, our architecture takes advantage of a Complex Event Processing (CEP) engine, which could run on any data-intensive distributed framework (e.g. Hadoop). Such a combination allows extracting, processing and delivering (complex) data in real time, empowering the QoS monitoring and prediction phases.
Following our case study, we show an example of a complex event used to derive the Balance Indicator (BI) from the basic SmartMeterMeasureEvent originating from the Smart Meters of EPs and ECs:
insert into BalanceIndicatorEvent
select (EP.measure - EC.measure) as index
from EP.SmartMeterMeasureEvent as EP,
     EC.SmartMeterMeasureEvent as EC

select "range_i" from BalanceIndicatorEvent
where index >= I_min and index <= I_max
The first query computes the BI and creates the BalanceIndicatorEvent event. The second query is a template used by our QoS Monitoring tool to generate the actual queries, which are used to classify the range to which the index belongs. Based on these data, the QoS Analyser component computes the transition rates from one range to another to feed the PRISM model.
On the other side, a temporal-based query is used for real-time anomaly detection:
select measure, "CriticalValueMsg" from EP1.SmartMeterEvent.win:time(15 min) where measure < BASE_PROD
In this case we take advantage of the temporal capabilities of the CEP language. The select delivers a CriticalValueMsg message when a specific energy producer (EP1 in the example) has gone into underproduction. The message is delivered to the QoS Monitoring component which, in turn, performs the associated action, e.g. notifying the ED.
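The effect of such a temporal query can be mimicked in plain Python with a sliding time window; the event format, the window size and the BASE_PROD value are all illustrative assumptions:

```python
from collections import deque

def window_alert(events, window_s=900, base_prod=100.0):
    """Sketch of the temporal CEP query: emit a 'CriticalValueMsg' whenever
    a measurement below base_prod is present in the sliding time window.
    `events` is an iterable of (timestamp_seconds, measure) pairs."""
    window = deque()           # (timestamp, measure) pairs inside the window
    alerts = []
    for ts, measure in events:
        window.append((ts, measure))
        while window and window[0][0] < ts - window_s:
            window.popleft()   # evict samples older than the window
        if any(m < base_prod for _, m in window):
            alerts.append((ts, "CriticalValueMsg"))
    return alerts
```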
Quality Constraints to be Monitored.
Briefly, we report only two types of QCs. The first is a safety property (neither the within nor the along operator is specified) which assesses whether the predicted violation probability in the next 15 minutes is more than 10%:

eval(P=? [ F<=15m violation ]) <= 0.1    (2)
The second QC guarantees notification if the probability of incurring a violation state in the next 30 minutes is greater than 0.05 twice in a row (considering a measurement event every 15 minutes):

within [t, t+15m] ( eval(P=? [ F<=30m violation ]) <= 0.05 )    (3)
IV-A Validation
Queue length | #States | #Trans | BM time (s) | MC time (s) | Tot. time (s)
20  | 13280   | 63476   | 0.15  | 0.38   | 0.53
40  | 95360   | 466156  | 0.81  | 8.37   | 9.18
60  | 310240  | 1528036 | 7.75  | 52.51  | 60.27
80  | 721920  | 3569116 | 20.53 | 197.11 | 217.64
100 | 1394400 | 6909396 | 42.80 | 492.28 | 535.08
In our scenario we assume a balance range of 800 megawatts (MW), and we first evaluate how much time the QoS prediction phase takes with respect to different model sizes (Table I). The table also reports the size of the model in terms of the number of states and transitions. As expected when using a model-checking technique, the time grows exponentially with the model size. However, as the last row shows, even in the case of millions of states and transitions, which corresponds to a fine-grained discretisation, the total time is less than 9 minutes, hence still comparable with the updating rate usually considered for SGs.
We have selected a queue length of 40, i.e. a unit increment/decrement of the queue corresponds to 20 MW of balance variation, and set the minimum and maximum admissible and critical thresholds accordingly. Our tests are based on property (3), evaluated by simulating three different scenarios:
 Case A:

EPs inject into the grid as much energy as ECs need (balanced case).
 Case B:

ECs request less energy than EPs produce (overproduction).
 Case C:

The energy consumption request rises to twice the production rate (imbalanced condition).
Fig. 4 plots the violation probability estimated for these scenarios. For scenario A the violation probability varies in a symmetrical fashion around the balance point (i.e. queue length 20). Scenario B exhibits a higher probability in all the overproduction states (i.e. queue lengths less than the balance point), and a lower one for a large number of states representing power grid overload (i.e. queue lengths greater than the balance point). This characteristic is emphasised in the third simulation, which represents the imbalanced (in this case, overloaded) scenario. In addition, we notice how in such anomalous conditions all the minimum values of the violation probability are higher than in the other two scenarios.
V Conclusions and Future Work
To support Big Data analysis of QoS information, in this paper we have proposed a QoS prediction framework which takes advantage of the qualitative and quantitative analysis performed by a probabilistic model-checking technique. Our approach uses a parametric QoS model and performs a probabilistic model-checking analysis in order to evaluate QoS-related predictive indicators (PIs). In this way, pre-alert QoS states can be notified in advance, giving greater control to the Service Provider to avoid, or at least manage, possible breaches of the Service Level Agreements (SLAs) contracted with Service Consumers. We have realized and presented a validating prototype, built on top of the PRISM model checker, as well as experiments on a Smart Grid case study, which show the effectiveness of our methodology and how, by tuning the model parameters, the time required to model check is less than the time needed to receive updated QoS information from the monitored service. In the near future we plan to extend the experimental campaign validating our approach and to extend the usage of this framework to monitor security [33] and other non-functional aspects beyond the provided QoS.
Acknowledgment
This work has been partially supported by the TENACE PRIN Project (n. 20103P34XC) funded by the Italian Ministry of Education, University and Research, and partially by the Embedded Systems in Critical Domains project. This work has also received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement no. 313034 (SAWSOC Project).
References
 [1] Tsai, W. T.: Service-oriented system engineering: a new paradigm. In Service-Oriented System Engineering, 2005. SOSE 2005. IEEE International Workshop (pp. 3-6). IEEE. (2005)
 [2] Coppolino, L.; Romano, L.; Mazzocca, N.; Salvi, S.: "Web Services workflow reliability estimation through reliability patterns," Security and Privacy in Communications Networks and the Workshops, 2007. SecureComm 2007. Third International Conference on, pp. 107-115, 17-21 Sept. 2007
 [3] Ferdinando Campanile, Luigi Coppolino, Salvatore Giordano, Luigi Romano: A business process monitor for a mobile phone recharging system. Journal of Systems Architecture - Embedded Systems Design 54(9): 843-848 (2008)
 [4] Luigi Coppolino, Danilo De Mari, Luigi Romano, Valerio Vianello: SLA compliance monitoring through semantic processing. GRID 2010: 252-258
 [5] Giuseppe Cicotti, Luigi Coppolino, Rosario Cristaldi, Salvatore D'Antonio, and Luigi Romano. 2011. QoS monitoring in a cloud services environment: the SRT-15 approach. In Proceedings of the 2011 international conference on Parallel Processing (Euro-Par'11). Springer-Verlag, Berlin, Heidelberg, 15-24.
 [6] Zheng, Z., Zhu, J., Lyu, M.R.: Service-generated Big Data and Big Data-as-a-Service: An Overview. In Proc. 1st IEEE International Congress on Big Data, Santa Clara Marriott, CA, USA, June 27-July 2, 2013
 [7] Sagiroglu, S., and Sinanc, D.: Big data: A review. In Collaboration Technologies and Systems (CTS), 2013 International Conference on (pp. 42-47). IEEE. (2013)
 [8] Kwiatkowska, M., Norman, G., and Parker, D.: PRISM 4.0: verification of probabilistic real-time systems. In Proceedings of the 23rd international conference on Computer aided verification (CAV'11), Ganesh Gopalakrishnan and Shaz Qadeer (Eds.). Springer-Verlag, Berlin, Heidelberg, 585-591 (2011)
 [9] Kwiatkowska, M., Norman, G., and Parker, D.: Stochastic model checking. In Formal methods for performance evaluation (pp. 220-270). Springer Berlin Heidelberg. (2007)
 [10] Cicotti, G., D'Antonio, S., Cristaldi, R., and Sergio, A.: How to Monitor QoS in Cloud Infrastructures: The QoSMONaaS Approach. In Intelligent Distributed Computing VI (pp. 253-262). Springer Berlin Heidelberg. (2013)
 [11] Katoen, J. P., Khattri, M., and Zapreev, I. S.: A Markov reward model checker. In Quantitative Evaluation of Systems, 2005. Second International Conference on the (pp. 243-244). IEEE. (2005)
 [12] Behrmann, G., David, A., Larsen, K. G., Hakansson, J., Petterson, P., Yi, W., and Hendriks, M.: UPPAAL 4.0. In Quantitative Evaluation of Systems, 2006. QEST 2006. Third International Conference on (pp. 125-126). IEEE. (2006)
 [13] Sesic, A., Dautovic, S., Malbasa, V.: Dynamic Power Management of a System With a Two-Priority Request Queue Using Probabilistic Model Checking. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 27, 403-407 (2008).
 [14] Rabl, T., Gómez-Villamor, S., Sadoghi, M., Muntés-Mulero, V., Jacobsen, H. A., and Mankovskii, S.: Solving big data challenges for enterprise application performance management. Proceedings of the VLDB Endowment, 5(12), 1724-1735 (2012).
 [15] Gay, P., Pla, A., López, B., Meléndez, J., and Meunier, R.: Service workflow monitoring through complex event processing. In Emerging Technologies and Factory Automation (ETFA), 2010 IEEE Conference on (pp. 1-4). IEEE (2010).
 [16] Cuzzocrea, A., Song, I. Y., and Davis, K. C.: Analytics over large-scale multidimensional data: the big data revolution!. In Proceedings of the ACM 14th international workshop on Data Warehousing and OLAP (pp. 101-104). ACM (2011).
 [17] Titirisca, A.: ETL as a Necessity for Business Architectures. Database Systems Journal BOARD, 3.
 [18] Cardellini, V., Casalicchio, E., Grassi, V., Lo Presti, F., and Mirandola, R.: QoS-driven runtime adaptation of service oriented architectures. In Proceedings of the 7th joint meeting of the European software engineering conference and the ACM SIGSOFT symposium on The foundations of software engineering (pp. 131-140). ACM (2009).
 [19] Yau, S. S., Ye, N., Sarjoughian, H., and Huang, D. (2008, October). Developing service-based software systems with QoS monitoring and adaptation. In Future Trends of Distributed Computing Systems, 2008. FTDCS'08. 12th IEEE International Workshop on (pp. 74-80). IEEE.
 [20] Gao, H., Miao, H., Zeng, H.: Predictive Web Service Monitoring using Probabilistic Model Checking. Applied Mathematics & Information Sciences. 7, 139-148 (2013).
 [21] Leitner, P., Wetzstein, B., Rosenberg, F., Michlmayr, A., Dustdar, S., Leymann, F.: Runtime Prediction of Service Level Agreement Violations for Composite Services. 176-186 (2010).
 [22] Pacheco-Sanchez, S., Casale, G., Scotney, B., McClean, S., Parr, G., Dawson, S.: Markovian Workload Characterization for QoS Prediction in the Cloud. 2011 IEEE 4th International Conference on Cloud Computing. 147-154 (2011).
 [23] Islam, S., Keung, J., Lee, K., Liu, A.: Empirical prediction models for adaptive resource provisioning in the cloud. Future Generation Computer Systems. 28, 155-162 (2012).
 [24] Gallotti, S., Ghezzi, C., Mirandola, R., Tamburrelli, G.: Quality Prediction of Service Compositions through Probabilistic Model Checking. 119-134 (2008).
 [25] Leitner, P., Michlmayr, A., Rosenberg, F., Dustdar, S.: Monitoring, Prediction and Prevention of SLA Violations in Composite Services. 2010 IEEE International Conference on Web Services. 369-376 (2010).
 [26] Chen, T., Bahsoon, R., Theodoropoulos, G.: Dynamic QoS Optimization Architecture for Cloud-based DDDAS. Procedia Computer Science. 18, 1881-1890 (2013).
 [27] Zhang, Y., Zheng, Z., Lyu, M.R.: Real-Time Performance Prediction for Cloud Components. 2012 IEEE 15th International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing Workshops. 106-111 (2012).
 [28] Gao, H., Miao, H., Zeng, H.: Predictive Web Service Monitoring using Probabilistic Model Checking. Applied Mathematics & Information Sciences. 7, 139-148 (2013).
 [29] Leitner, P., Wetzstein, B., Rosenberg, F., Michlmayr, A., Dustdar, S., Leymann, F.: Runtime Prediction of Service Level Agreement Violations for Composite Services. 176-186 (2010).
 [30] Luigi Coppolino, Salvatore D'Antonio, Luigi Romano, Fotis Aisopos, Konstantinos Tserpes: Effective QoS Monitoring in Large Scale Social Networks. In Proceedings of the 7th International Symposium on Intelligent Distributed Computing (IDC 2013), pp. 249-259, Springer International Publishing, 2013. Prague, Czech Republic
 [31] Luigi Coppolino, Salvatore D'Antonio, Luigi Romano: Exposing vulnerabilities in electric power grids: An experimental approach. International Journal of Critical Infrastructure Protection, Available online 29 January 2014, ISSN 1874-5482, http://dx.doi.org/10.1016/j.ijcip.2014.01.003.
 [32] Luigi Coppolino, Salvatore D'Antonio, Ivano Alessandro Elia, and Luigi Romano. 2011. Security analysis of smart grid data collection technologies. In Proceedings of the 30th international conference on Computer safety, reliability, and security (SAFECOMP'11). Springer-Verlag, Berlin, Heidelberg, 143-156.
 [33] Luigi Coppolino, Salvatore D'Antonio, Valerio Formicola, and Luigi Romano. 2011. Integration of a system for critical infrastructure protection with the OSSIM SIEM platform: a dam case study. In Proceedings of the 30th international conference on Computer safety, reliability, and security (SAFECOMP'11), Francesco Flammini, Sandro Bologna, and Valeria Vittorini (Eds.). Springer-Verlag, Berlin, Heidelberg, 199-212.
 [34] Ceccarelli, A.; Vieira, M.; Bondavalli, A.: "A Testing Service for Lifelong Validation of Dynamic SOA," High-Assurance Systems Engineering (HASE), 2011 IEEE 13th International Symposium on, pp. 1-8, 10-12 Nov. 2011