Towards Statistical Prioritization for Software Product Lines Testing
Software Product Lines (SPL) are inherently difficult to test due to the combinatorial explosion of the number of products to consider. To reduce the number of products to test, sampling techniques such as combinatorial interaction testing have been proposed. They usually start from a feature model and apply a coverage criterion (e.g. pairwise feature interaction or dissimilarity) to generate tractable, fault-finding lists of configurations to be tested. Prioritization can also be used to sort/generate such lists, optimizing coverage criteria or weights assigned to features. However, current sampling/prioritization techniques barely take product behavior into account. We explore how ideas of statistical testing, based on a usage model (a Markov chain), can be used to extract configurations of interest according to the likelihood of their executions. These executions are gathered in featured transition systems, a compact representation of SPL behavior. We discuss possible scenarios and give a prioritization procedure illustrated on an example.
Software Product Line (SPL) engineering is based on the idea that products of the same family may be built by systematically reusing assets, some of them being common to all members while others are only shared by a subset of the family. Such variability is commonly captured by the notion of feature, i.e., a unit of difference between products. A product of the SPL is a valid combination of features. Individual features can be specified using languages such as UML, while their inter-relationships are organized in a Feature Diagram (FD). An FD thus (abstractly) describes all valid combinations of features (called configurations of the FD), that is, all the products of the SPL.
In this paper, we are interested in SPL testing. As opposed to classical testing approaches, where the testing process only considers one software product, SPL testing is concerned with how to minimize the test effort related to a given SPL (i.e., all its valid products). The size of this set is roughly equal to 2^n, where n represents the number of features of the SPL. This number may vary from about 10 features (2^10, roughly a thousand possible products) in small SPLs to thousands of features (more than 2^1000 possible products) in complex systems such as the Linux kernel. Automated Model-Based Testing and shared execution, where tests can be reused amongst products, are candidates to reduce such effort.
Still, testing all products of the SPL is only possible for small SPLs, given the exponential growth of their number with the number of features. Hence, one of the main questions arising in such a situation is: how to extract and prioritize relevant products? Existing approaches sample products by using a coverage criterion on the FD (such as all valid 2-tuples of features: pairwise [4, 5]) and rank products with respect to coverage satisfaction (e.g. the number of tuples covered). An alternative is to label each feature with a weight and prioritize configurations accordingly [6, 7]. These methods help testers scope relevant products to test more finely and flexibly than a coverage criterion alone. Yet, these approaches only sample products based on the FD, which does not account for product behavior: they are just configurations of the FD. Furthermore, assigning meaningful weights to thousands of features can be tricky if no other information is available.
Statistical testing proposes to generate test cases based on a usage model represented by a Discrete Time Markov Chain (DTMC). This usage model represents the usage scenarios of the software as well as their probability. This allows one to determine the relative importance of execution scenarios (with respect to the others). This paper explores the possibility of using statistical testing to sample and prioritize products of an SPL. The basic idea is to focus on the "most probable" (respectively "least probable") products, i.e. products that are able to execute highly probable (respectively improbable) traces of the DTMC. Since black-box usage scenarios may not relate directly to features, we propose to use a compact representation of SPL behaviour, Featured Transition Systems (FTSs), to determine which traces are legal with respect to the SPL and to associate the related products. In fact, we construct another FTS, which maps to the selected traces. This FTS represents the behavior of the set of products of interest and is amenable to various testing and verification techniques.
In this section, we present the foundations underlying our approach: SPL modeling and statistical testing.
II-A SPL Modelling
A key concern in SPL modeling is how to represent variability. To this end, SPL engineers usually reason in terms of features. Pohl et al. define features as end-user visible characteristics of a system. Relations and constraints between features are usually represented in a Feature Diagram (FD). For example, Fig. 1(a) presents the FD of a soda vending machine. A product derived from this diagram corresponds to a set of selected features; one such product corresponds to a machine that sells only tea and accepts only dollars. FDs have been equipped with formal semantics, automated analyses and tools for more than 20 years. A common semantics associated with an FD is the set of all the valid products it allows.
II-A1 Behavioural Modelling
Different formalisms may be used to model the behavior of a system. To allow an explicit mapping from features to SPL behavior, Featured Transition Systems (FTSs) were proposed. FTSs are Transition Systems (TSs) where each transition is labelled with a feature expression (i.e., a boolean expression over the features of the SPL), specifying, for a given FD, in which products the transition may be fired. It is thus possible to determine the products that are the cause of a violation or a failed test. Formally, an FTS is a tuple where:
is a classical TS with a set of states, a set of actions, a set of transitions and an initial state;
is an FD;
is a total function labelling each transition with a feature expression (a boolean expression with features' names as variables), indicating which valid products may fire the transition: only products that satisfy this feature expression may fire it. For instance, in Fig. 1(b), a feature expression indicates that only products that do not have a given feature may fire the corresponding transitions.
The semantics of a given FTS is a function that associates each valid product with its set of finite and infinite executions, i.e. all the possible paths in the graph, starting from the initial state, that are available for this specific product. According to this definition, an FTS is actually a behavioral model of a whole SPL. Fig. 1(b) presents the FTS modeling a vending machine SPL. For instance, one transition is labelled with a feature expression meaning that only the products that do have the corresponding feature are able to execute it. This definition differs from the one presented in , where only infinite paths are considered; in a testing context, one may also be interested in finite paths.
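To make the projection of an FTS onto a product concrete, the following sketch encodes feature expressions as predicates over a set of features and keeps only the transitions a product satisfies. The encoding, state names and features (Pay, Free) are illustrative assumptions of ours, not the paper's vending-machine model.

```python
# Sketch (assumed encoding): an FTS as a list of transitions
# (src, action, dst, feature_expr), where feature_expr is a
# predicate over a product (a set of selected features).

class FTS:
    def __init__(self, initial_state):
        self.initial = initial_state
        self.transitions = []  # (src, action, dst, feature_expr)

    def add(self, src, action, dst, feature_expr):
        self.transitions.append((src, action, dst, feature_expr))

    def project(self, product):
        """Projection onto one product: keep only the transitions whose
        feature expression the product satisfies, yielding a plain TS."""
        return [(s, a, d) for (s, a, d, fx) in self.transitions
                if fx(product)]

# Toy vending machine fragment (hypothetical features Pay/Free).
fts = FTS("s0")
fts.add("s0", "pay", "s1", lambda p: "Pay" in p)
fts.add("s0", "free", "s1", lambda p: "Free" in p)
fts.add("s1", "serve", "s0", lambda p: True)

pay_machine = {"Pay"}
ts = fts.project(pay_machine)  # TS of the pay-only machine
```

The projection of a product is thus an ordinary TS, on which single-system testing and verification techniques apply directly.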
II-B Statistical Testing
Whittaker and Thomason introduce the notion of usage model in  and define it as a TS where transitions are associated with probabilities: a probability on a transition represents the chance that, when the system is in the transition's source state, this transition is fired. Formally, a usage model is represented by a DTMC, which is a tuple where:
is a TS;
is the probability matrix, which gives, for any two states, the probability for the system to go from the first state to the second;
is the vector containing, for each state, the probability that the system starts in it, with the constraint that its entries sum to 1;
the total of the probabilities of the transitions leaving a state must be equal to 1.
Actions are just labels used to annotate transitions without changing the semantics of the DTMC. In our case, they are used to relate traces of the DTMC with executions of the FTS.
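A usage model as defined above can be sketched as follows; the data layout (a map from states to lists of (action, probability, target) triples) and all state/action names are our own illustration, not the paper's formalization.

```python
# Sketch (assumed encoding) of a usage model: a DTMC whose transitions
# carry action labels, with the constraint that the probabilities of
# the transitions leaving a state sum to 1.

class UsageModel:
    def __init__(self, initial_state):
        self.initial = initial_state
        self.trans = {}  # src -> [(action, probability, dst), ...]

    def add_transition(self, src, action, prob, dst):
        self.trans.setdefault(src, []).append((action, prob, dst))

    def is_stochastic(self, tol=1e-9):
        # Check the DTMC constraint: outgoing probabilities sum to 1.
        return all(
            abs(sum(p for _, p, _ in out) - 1.0) <= tol
            for out in self.trans.values()
        )

# Toy two-state usage model (hypothetical actions).
m = UsageModel("idle")
m.add_transition("idle", "pay", 0.7, "served")
m.add_transition("idle", "free", 0.3, "served")
m.add_transition("served", "take", 1.0, "idle")
```

The action labels carry no probabilistic meaning; they are only what lets traces of the DTMC be matched against executions of the FTS.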
In our approach, we consider three models: an FD to represent the features and their constraints (Fig. 1(a)), an FTS to represent the behaviour of the SPL (Fig. 1(b)) and a usage model represented by a DTMC (Fig. 1(c)), with the following constraints:
The feature diagram of the FTS is the FD;
The states, actions and transitions of the DTMC are included in the states, actions and transitions (respectively) of the FTS;
The initial state of the FTS has a probability of 1 to be the starting state of the DTMC.
We deliberately chose not to integrate the DTMC with the FTS in a single model. This separation of concerns is motivated as follows:
We may want to integrate the approach with existing software which does not take variability into account, such as MaTeLo, an MBT tool which uses a DTMC as input model (see http://all4tec.net/index.php/en/model-based-testing/20-markov-test-logic-matelo);
The DTMC can be obtained from users trying the software under test, extracted from logs, or derived from running code. These extraction methods are agnostic of the features of the system they are applied to;
Since the DTMC is built from existing software executions, it may be incomplete (as in Fig. 1(c)). Some products (or subsets of their behaviors) may simply not be exercised in the usage model, resulting in missing transitions in the DTMC. Keeping the FTS and usage models separate is helpful to identify and correct such issues.
The fact that a usage model is created from partial (i.e., finite) observations of the products, without consideration of their features, allows paths in the DTMC that are inconsistent for the SPL. For example, in the usage model of Fig. 1(c), one can follow a path that mixes "pay machine" and "free machine" behaviours. Since the DTMC is never used alone, such situations are easy to spot using the FTS.
There are now two possible testing scenarios: product based test derivation (top-down) and family based test prioritization (bottom-up). The classification product/family based comes from .
III-A Product Based Test Derivation
Product based test derivation is straightforward: one selects a product (by selecting features in the FD), projects it onto the FTS, giving a TS with only the transitions of that product, and prunes the DTMC so that it only contains transitions of this TS. The probabilities of the removed transitions need to be redistributed over their siblings (since the outgoing probabilities of each state must still sum to 1). Finally, we generate test cases using classical statistical testing algorithms on the DTMC [8, 13]. A similar testing process is proposed by Samih and Baudry. Product selection is made on an orthogonal variability model (OVM), and the mapping between the OVM and the DTMC (implemented using MaTeLo) is provided via explicit traceability links to functional requirements. This process thus requires performing the selection of products of interest on the variability model and does not exploit the probabilities and traces of the DTMC during such selection. Additionally, they assume that tests for all products of the SPL are modeled in the DTMC. This assumption may be too strong in certain cases and delay actual testing, since designing the complete DTMC for a large SPL may take time. We thus explore a scenario where the DTMC drives product selection and prioritization.
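The pruning step described above can be sketched as follows: transitions the selected product cannot fire are dropped, and the remaining outgoing probabilities of each state are renormalised so they sum to 1 again. The proportional redistribution and the data layout are our own assumptions for illustration.

```python
# Sketch of DTMC pruning after product projection (assumed encoding:
# dtmc maps each state to a list of (action, prob, dst) triples;
# `allowed` is the set of (src, action, dst) kept by the projection).

def prune_dtmc(dtmc, allowed):
    pruned = {}
    for src, out in dtmc.items():
        kept = [(a, p, d) for (a, p, d) in out if (src, a, d) in allowed]
        total = sum(p for _, p, _ in kept)
        if total > 0:
            # Redistribute the removed mass proportionally over siblings,
            # so that each state's outgoing probabilities sum to 1 again.
            pruned[src] = [(a, p / total, d) for (a, p, d) in kept]
    return pruned

# Toy example: the selected product cannot fire the "free" transition.
dtmc = {"s0": [("pay", 0.7, "s1"), ("free", 0.3, "s1")],
        "s1": [("serve", 1.0, "s0")]}
allowed = {("s0", "pay", "s1"), ("s1", "serve", "s0")}
pruned = prune_dtmc(dtmc, allowed)
```

Proportional redistribution is only one possible policy; any scheme that restores the row-sum constraint would do.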
III-B Family Based Test Prioritization
Contrary to product based test derivation, our approach (Fig. 2) only assumes partial coverage of the SPL by the usage model. For instance, the DTMC represented in Fig. 1(c) does not cover the serving-soda behavior because no user/tester exercised it. The key idea is to generate sequences of actions (i.e., finite traces) from the DTMC according to their probability to happen (step 1). For example, one may be interested in analyzing behaviors including serving teas for free, which correspond to a highly probable sequence of actions. On the contrary, one may be interested in low-probability behaviors because they can indicate poorly tested or irrelevant products.
The generated sequences are filtered using the FTS in order to keep only sequences that may be executed by at least one product of the SPL (step 2). The result is an FTS', i.e. the FTS pruned according to the extracted sequences. Each valid sequence of actions is combined with the FTS' to generate the set of products that may effectively execute this sequence. The probability of the sequence to be executed allows us to prioritize the products exercising the behavior described in the FTS' (step 3).
III-B1 Trace Selection in the DTMC
The first step is to extract sequences of actions (i.e., finite traces) from the DTMC according to parameters provided by the tester. We define a finite trace as a finite path (i.e., a finite sequence of labels) in a TS. This may differ from the standard notion of trace, but in our context infinite traces are not really useful. Formally, a finite trace corresponds to a tuple of labels such that there exists a path in the TS whose successive transitions carry these labels.
To perform trace selection in a DTMC, we use a classical Depth First Search (DFS) algorithm parametrized with a maximum length for finite traces and an interval specifying the minimal and maximal values for the probabilities of the selected traces. We initially consider only finite traces starting from and ending in the initial state (assimilated to an accepting state) without passing through it in between. Finite sequences starting from and ending in the initial state correspond to coherent execution scenarios in the DTMC. With respect to partial finite traces (i.e., finite traces not ending in the initial state), our trace definition involves a smaller state space to explore in the DTMC: since probabilities only decrease along a path, the exploration of a part of the graph may be stopped, without further checks, as soon as the probability of the partial trace falls below the minimal bound. The DFS algorithm is hence easier to implement and may scale better to large DTMCs.
Practically, this algorithm builds a tree where each node represents a state together with the probability to reach it, and the branches are the labels of the transitions taken from the state associated with the node. The root node corresponds to the initial state and has a probability of 1. Since we are only interested in finite traces ending in the initial state, the exploration of a branch of the tree is stopped when the depth exceeds the maximal path length. This parameter is provided to the algorithm by the test engineer and is only used to avoid infinite loops during the exploration of the DTMC. Its value depends on the size of the DTMC and should be higher than the maximal "loop free" path in the DTMC in order to get coherent finite traces.
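The bounded DFS can be sketched as below: it enumerates finite traces that start and end in the initial state, pruning a branch as soon as its cumulative probability drops below the lower bound or its length exceeds the maximum. The DTMC encoding and parameter names (max_len, p_min, p_max) are our own illustration.

```python
# Sketch of the parametrized DFS trace selection (assumed encoding:
# dtmc maps each state to a list of (action, prob, dst) triples).

def select_traces(dtmc, initial, max_len, p_min, p_max):
    results = []

    def dfs(state, trace, prob):
        # Prune: along a path the probability can only decrease,
        # so no extension of this branch can re-enter [p_min, p_max].
        if prob < p_min or len(trace) > max_len:
            return
        # Stop at the first return to the initial state (no passing
        # through it in between): a coherent execution scenario.
        if trace and state == initial:
            if prob <= p_max:
                results.append((tuple(trace), prob))
            return
        for action, p, dst in dtmc.get(state, []):
            dfs(dst, trace + [action], prob * p)

    dfs(initial, [], 1.0)
    return results

# Toy DTMC: the "free" scenario (prob 0.3) falls below p_min = 0.5.
dtmc = {"s0": [("pay", 0.7, "s1"), ("free", 0.3, "s1")],
        "s1": [("serve", 1.0, "s0")]}
traces = select_traces(dtmc, "s0", max_len=4, p_min=0.5, p_max=1.0)
```

The early pruning on p_min is what keeps the explored state space smaller than a full enumeration of bounded paths.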
For instance, the execution of the algorithm on the soda vending machine example of Fig. 1 gives 5 finite traces:
During the execution of the algorithm, one trace has been rejected since its probability is not within the specified interval.
The downside is that the algorithm may enumerate all the paths in the DTMC, depending on the parameter values. This can be problematic, and we plan in future work to use symbolic execution techniques inspired by work in the probabilistic model checking area, especially automata-based representations, in order to avoid a complete state space exploration.
III-B2 Trace Filtering using the FTS and Building the FTS'
The finite traces generated from the DTMC may contain illegal sequences of actions (i.e., sequences of actions which cannot be performed by any valid product of the SPL). The set of generated finite traces has to be filtered using the FTS such that the following property holds: a finite trace generated from the usage model represents a valid behaviour for the product line modelled by the FTS if there exists a valid product whose projection of the FTS admits the trace among its possible traces and their prefixes. The idea here is to use the FTS to detect invalid finite traces by running them on it.
Practically, we build a second FTS, called FTS', which represents only the behavior of the SPL appearing in the finite traces generated from the DTMC. Fig. 3 presents the algorithm used to build an FTS' from a set of finite traces (filtered during the algorithm) and an FTS. The initial state of FTS' corresponds to the initial state of the FTS (line 1), and the FD of FTS' is the same as the FTS's (line 1). If a given trace is accepted by the FTS (line 3), then the states, actions and transitions visited in the FTS when executing the trace are added to FTS' (lines 4 to 6). The acceptance function on line 3 returns true if there exists at least one valid product that can execute the sequence of actions of the trace. On line 7, a function is used to enrich the labelling function of FTS' with the feature expressions of the transitions visited when executing the trace on the FTS: it returns a new labelling function that maps each visited transition to its feature expression in the FTS and leaves the other transitions of FTS' unchanged.
In our example, the set of finite traces generated in step 1 contains two illegal traces. These two traces, mixing free and non-free vending machines, cannot be executed on the FTS and are rejected in step 2. The generated FTS' is presented in Fig. 4.
III-B3 Product Prioritization
At the end of step 2 in Fig. 2, we have an FTS' and a set of finite traces in this FTS'. This set of finite traces (coming from the DTMC) covers all the valid behaviors of the FTS'. It is thus possible to order them according to their probability to happen. This probability corresponds to the product of the individual probabilities of the transitions fired when executing the finite trace in the DTMC. A valid finite trace corresponding to a path in the DTMC (and in the FTS') thus has a probability (calculated as in step 1) to be executed, which we may record during trace selection.
The set of products capable of executing a trace may be calculated from the FTS' (and its associated FD). It corresponds to all the products (i.e., sets of features) of the FD that satisfy all the feature expressions associated with the transitions of the trace. From a practical point of view, this set corresponds to the products satisfying the conjunction of the feature expressions on the path and the FD. As the FD may be transformed into a boolean formula where features become variables, this conjunction can be solved using a SAT solver.
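As a small-scale stand-in for the SAT-based computation, the sketch below brute-forces all feature combinations and keeps those satisfying both the FD and every feature expression along the trace. The FD and the features (Machine, Pay, Free, Tea) are illustrative predicates of ours, not the paper's diagram.

```python
from itertools import combinations

# Brute-force sketch of the product computation (a SAT solver would be
# used in practice; this enumeration only works for small feature sets).

def products_of(features, fd, feature_exprs):
    result = []
    for r in range(len(features) + 1):
        for combo in combinations(sorted(features), r):
            product = set(combo)
            # Keep products valid w.r.t. the FD that satisfy every
            # feature expression collected along the trace's path.
            if fd(product) and all(fx(product) for fx in feature_exprs):
                result.append(product)
    return result

features = {"Machine", "Pay", "Free", "Tea"}
# Toy FD: Machine is mandatory; Pay and Free are exclusive alternatives.
fd = lambda p: "Machine" in p and (("Pay" in p) != ("Free" in p))
# Feature expressions collected along a hypothetical "free tea" trace.
exprs = [lambda p: "Free" in p, lambda p: "Tea" in p]
matching = products_of(features, fd, exprs)
```

With a real SAT solver, the same query is the conjunction of the FD's boolean encoding with the path's feature expressions; enumeration of its models yields the product set.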
At this step, each valid finite trace is associated with the set of products that can actually execute it, with a given probability. Product prioritization may be done by classifying the finite traces according to their probability to be executed, giving behaviorally equivalent classes of products for each finite trace. For instance, for one trace generated for our example, the products have to satisfy two feature expressions. This gives us a set of 8 products (amongst 32 possible).
All of them execute this trace with the same probability, which corresponds to the least probable behaviour of the soda vending machine.
III-B4 Model Checking and Test Case Generation
The FTS' represents the set of valid products capable of executing the valid finite traces generated from the DTMC. It thus represents a subset of the behavior of the SPL that has to be assessed in priority according to the provided bounds. The FTS' may be verified using existing algorithms and/or used to generate test cases.
IV Related Work
To the best of our knowledge, there is no approach prioritizing behaviors statistically for testing SPLs in a family-based manner. The most related proposal (outlined in Section III) has been devised by Samih and Baudry. It is a product-based approach and therefore requires selecting one or more products to test at the beginning of the method. It also requires the DTMC to cover all products of the SPL, which is not our assumption here.
There have been SPL test efforts to sample products for testing, such as t-wise approaches (e.g. [4, 18, 5]). More recently, sampling was combined with prioritization thanks to the addition of weights on feature models and the definition of multiple objectives [7, 6]. However, these approaches do not consider SPL behavior in their analyses.
To consider behavior in an abstract way, a full-fledged MBT approach is required. Although behavioural MBT is well established for single-system testing, a survey shows insufficient support for SPL-based MBT. However, there have been efforts to combine sampling techniques with modeling ones. These approaches are also product-based, meaning that they may miss opportunities for test reuse amongst sampled products. We believe that, benefiting from the recent advances in behavioral modeling provided by the model checking community [23, 24, 25, 26, 27, 28, 29, 15], sound MBT approaches for SPLs can be derived and interesting family-based scenarios combining verification and testing can be devised.
Our goal is to apply ideas stemming from statistical testing and adapt them to an SPL context. For example, combining structural criteria with statistical testing has been discussed in [30, 31]. We do not make any assumption on the way the DTMC is obtained: via an operational profile or by analyzing the source code or the specification. However, a uniform distribution of probabilities over the DTMC would probably be less interesting. As noted by Whittaker, in such a case only the structure of traces would be considered, and basing their selection on their probabilities would just be a means to limit their number in a mainly random-testing approach. In such cases, structural test generation has to be employed.
In this paper, we combine concepts stemming from statistical testing with SPL sampling to extract products of interest according to the probability of their execution traces, gathered in a discrete-time Markov chain representing their usages. As opposed to product-based sampling approaches, we select a subset of the full SPL behavior given as Featured Transition Systems (FTSs). This allows us to construct a new FTS representing only the executions of relevant products. Such a pruned FTS can be analyzed all at once, to allow test reuse amongst products and/or to scale model-checking techniques for testing and verification activities. Future work will naturally proceed to the full implementation of the approach presented here and its validation on concrete systems. This raises a number of challenges, including the inference of usage models using various techniques, such as the analysis of system logs or symbolic execution of the software product line, as well as the design of efficient algorithms for trace extraction and FTS pruning. We would also like to consider partial traces (traces that do not need to end in the initial state). Although making prioritization less scalable, they may prove useful when the discrepancies between the behavioral and usage models are too important (partial executions can cope with such situations easily) or to focus on specific feature interactions.
-  K. C. Kang, S. G. Cohen, J. A. Hess, W. E. Novak, and A. Spencer Peterson, “Feature-Oriented domain analysis (FODA) feasibility study,” Software Engineering Institute, Carnegie Mellon University, Tech. Rep., 1990.
-  M. Utting and B. Legeard, Practical model-based testing: a tools approach. Morgan Kaufmann, 2007.
-  C. H. P. Kim, S. Khurshid, and D. S. Batory, “Shared execution for efficiently testing product lines,” in ISSRE ’12, 2012, pp. 221–230.
-  G. Perrouin, S. Oster, S. Sen, J. Klein, B. Baudry, and Y. L. Traon, “Pairwise testing for software product lines: comparison of two approaches,” Software Quality Journal, vol. 20, no. 3-4, pp. 605–643, 2012.
-  M. Cohen, M. Dwyer, and J. Shi, “Interaction testing of highly-configurable systems in the presence of constraints,” in ISSTA 07, 2007, pp. 129–139.
-  C. Henard, M. Papadakis, G. Perrouin, J. Klein, and Y. Le Traon, “Multi-objective test generation for software product lines,” in SPLC ’13 (to appear), 2013.
-  M. F. Johansen, Ø. Haugen, F. Fleurey, A. G. Eldegard, and T. Syversen, “Generating better partial covering arrays by modeling weights on sub-product lines,” in MoDELS ’12, 2012, pp. 269–284.
-  J. A. Whittaker and M. G. Thomason, “A markov chain model for statistical software testing,” IEEE Transactions on Software Engineering, vol. 20, no. 10, pp. 812–824, 1994.
-  A. Classen, M. Cordy, P.-Y. Schobbens, P. Heymans, A. Legay, and J.-F. Raskin, “Featured Transition Systems : Foundations for Verifying Variability-Intensive Systems and their Application to LTL Model Checking,” Software Engineering, IEEE Transactions on, vol. PP, no. 99, pp. 1–22, 2013.
-  K. Pohl, G. Böckle, and F. Van Der Linden, Software product line engineering: foundations, principles, and techniques. Springer-Verlag New York Inc, 2005.
-  P.-Y. Schobbens, P. Heymans, J.-C. Trigaux, and Y. Bontemps, “Generic semantics of feature diagrams,” Computer Networks, vol. 51, no. 2, pp. 456–479, 2007.
-  A. von Rhein, S. Apel, C. Kästner, T. Thüm, and I. Schaefer, “The PLA model: on the combination of product-line analyses,” in VaMoS ’13. Pisa, Italy: ACM Press, 2013, p. 1.
-  A. Feliachi and H. Le Guen, “Generating transition probabilities for automatic model-based test generation,” in ICST ’10. IEEE, 2010, pp. 99–102.
-  H. Samih and B. Baudry, "Relating variability modelling and model-based testing for software product lines testing," in ICTSS '12 Doctoral Symposium, 2012.
-  C. Baier and J.-P. Katoen, Principles of model checking. MIT Press, 2008.
-  K. Czarnecki and A. Wasowski, “Feature Diagrams and Logics: There and Back Again,” in SPLC 2007. IEEE, Sep. 2007, pp. 23–34.
-  X. Devroey, M. Cordy, G. Perrouin, E.-Y. Kang, P.-Y. Schobbens, P. Heymans, A. Legay, and B. Baudry, “A Vision for Behavioural Model-Driven Validation of Software Product Lines,” in ISoLA 2012, Part I, ser. LNCS 7609, Margaria T., Steffen B., and Merten M., Eds. Crete: Springer-Verlag, 2012, pp. 208–222.
-  M. B. Cohen, M. B. Dwyer, and J. Shi, “Coverage and adequacy in software product line testing,” in ROSATEA ’06, 2006, pp. 53–63.
-  J. Tretmans, “Model based testing with labelled transition systems,” in Formal methods and testing, R. M. Hierons, J. P. Bowen, and M. Harman, Eds. Berlin, Heidelberg: Springer-Verlag, 2008, pp. 1–38.
-  S. Oster, A. Wöbbeke, G. Engels, and A. Schürr, “Model-based software product lines testing survey,” in Model-Based Testing for Embedded Systems, ser. Computational Analysis, Synthesis, and Design of Dynamic Systems, J. Zander, I. Schieferdecker, and P. J. Mosterman, Eds. CRC Press, September 2011, pp. 339–382.
-  M. Lochau, S. Oster, U. Goltz, and A. Schürr, “Model-based pairwise testing for feature interaction coverage in software product line engineering,” Software Quality Journal, vol. 20, no. 3-4, pp. 567–604, 2012.
-  A. von Rhein, S. Apel, C. Kästner, T. Thüm, and I. Schaefer, “The pla model: on the combination of product-line analyses,” in VaMoS ’13, 2013, p. 14.
-  P. Asirelli, M. H. ter Beek, A. Fantechi, S. Gnesi, and F. Mazzanti, “Design and validation of variability in product lines,” in PLEASE ’11. New York, NY, USA: ACM, 2011, pp. 25–30.
-  P. Asirelli, M. H. ter Beek, S. Gnesi, and A. Fantechi, “Formal description of variability in product families,” in SPLC ’11. Washington, DC, USA: IEEE Computer Society, 2011, pp. 130–139.
-  A. Classen, P. Heymans, P. Schobbens, and A. Legay, “Symbolic model checking of software product lines,” in ICSE ’11, 2011.
-  A. Classen, P. Heymans, P. Schobbens, A. Legay, and J. Raskin, “Model checking lots of systems: efficient verification of temporal properties in software product lines,” in ICSE ’10. New York, NY, USA: ACM, 2010, pp. 335–344.
-  D. Fischbein, S. Uchitel, and V. Braberman, “A foundation for behavioural conformance in software product line architectures,” in ROSATEA ’06. New York, NY, USA: ACM, 2006, pp. 39–48.
-  A. Gruler, M. Leucker, and K. Scheidemann, “Modeling and model checking software product lines,” in Formal Methods for Open Object-Based Distributed Systems, G. Barthe and F. S. Boer, Eds., vol. 5051. Berlin, Heidelberg: Springer-Verlag, 2008, pp. 113–131.
-  K. Lauenroth, K. Pohl, and S. Toehning, “Model checking of domain artifacts in product line engineering,” in ASE ’09. Washington, DC, USA: IEEE Computer Society, 2009, pp. 269–280.
-  S.-D. Gouraud, A. Denise, M.-C. Gaudel, and B. Marre, “A new way of automating statistical testing methods,” in ASE ’01. Washington, DC, USA: IEEE Computer Society, 2001, pp. 5–.
-  P. Thévenod-Fosse and H. Waeselynck, “An investigation of statistical software testing,” Softw. Test., Verif. Reliab., vol. 1, no. 2, pp. 5–25, 1991.
-  J. D. Musa, G. Fuoco, N. Irving, D. Kropfl, and B. Juhlin, “The operational profile,” NATO ASI SERIES F COMPUTER AND SYSTEMS SCIENCES, vol. 154, pp. 333–344, 1996.