Stochastic Constraint Optimization using Propagation on Ordered Binary Decision Diagrams
Abstract
A number of problems in relational Artificial Intelligence can be viewed as Stochastic Constraint Optimization Problems (SCOPs). These are constraint optimization problems that involve objectives or constraints with a stochastic component. Building on the recently proposed language SCProbLog for modeling SCOPs, we propose a new method for solving these problems. Earlier methods used Probabilistic Logic Programming (PLP) techniques to create Ordered Binary Decision Diagrams (OBDDs), which were decomposed into smaller constraints in order to exploit existing constraint programming (CP) solvers. We argue that this approach has the drawback that a decomposed representation of an OBDD does not guarantee domain consistency during search, and hence limits the efficiency of the solver. For the specific case of monotonic distributions, we suggest an alternative method for using CP in SCOPs, based on the development of a new propagator; we show that this propagator is linear in the size of the OBDD, and has the potential to be more efficient than the decomposition method, as it maintains domain consistency.
1 Introduction
Making decisions under uncertainty is an important problem in business, governance and science. Examples are found in the fields of planning and scheduling, but also occur naturally in fields like data science and bioinformatics. Many of these problems are relational in nature.
Consider for example a viral marketing problem [\citeauthoryearKempe, Kleinberg, and Tardos2003]. We are given a social network of people (vertices) that have stochastic relationships (edges). We want to rely on word-of-mouth advertisement to turn acquaintances of people who buy our product into new product-buyers. How can we minimize the number of people we need to target directly in a marketing campaign, while a minimum number of people is expected to buy the product?
For another example we are given a network of stochastic protein-gene interactions, with a list of (protein, gene) pairs that are of interest to a biologist [\citeauthoryearOurfali et al.2007]. We wish to reduce the network to the part that is relevant for modeling the interactions in that list. This is known as a theory compression problem [\citeauthoryearDe Raedt et al.2008]. How can we maximize the sum of interaction probabilities for the interesting pairs, while restricting the number of edges included in the extracted network?
These two problems have common features. First, both problems combine probabilistic networks and decision problems: we either decide who to target in our marketing campaign, or which interactions to select from a proteingene interaction network. Second, they both involve an objective: minimizing the number of people targeted for marketing and maximizing a sum of probabilities, respectively. Third, they both have to respect a constraint: either reaching a target with respect to the expected number of productbuyers or limiting the number of edges we select for our biologist.
The motivation for our ongoing work is that there is a need for generic tools that can be used to model and solve such problems. In our vision, these tools should combine the state-of-the-art of probabilistic programming (PP) with constraint programming (CP). Probabilistic programming here provides mechanisms for calculating probabilities of paths in probabilistic networks. For making decisions, constraint programming provides well-established technology.
Note that the stochastic constraint in the viral marketing setting is a hard constraint on a sum of probabilities: we impose a bound on the expected number of people buying the product. This is a different setting than the soft constraints that can be expressed using maximum a posteriori (MAP) inference or maximum probability estimation (MPE).
Problems that involve these kinds of hard constraints on probabilities are the focus of the field of stochastic constraint programming (SCP) [\citeauthoryearWalsh2002], which combines probabilistic inference and constraint programming to solve Stochastic Constraint Optimization Problems (SCOPs). SCP is closely related to chance constraint programming [\citeauthoryearCharnes and Cooper1959] and probabilistic constraint programming [\citeauthoryearTarim et al.2009]. However, these tools do not provide a modeling language suitable for solving relational problems in a generic manner, and do not link to the probabilistic programming literature.
Recently we proposed a new modeling language and tool chain that addresses the problem of modeling and solving relational SCOPs. This language, Stochastic Constraint Probabilistic Prolog (SCProbLog) [\citeauthoryearLatour et al.2017], is based on (Decision-Theoretic) ProbLog [\citeauthoryearDe Raedt, Kimmig, and Toivonen2007, \citeauthoryearVan den Broeck et al.2010], and is therefore particularly suited for modeling probabilistic paths. It extends ProbLog with syntax for specifying SCOPs that are formulated on probabilistic networks, and with a tool chain for solving them. Building on ProbLog, SCProbLog translates a probabilistic logic program into Boolean formulas, converts those formulas into Ordered Binary Decision Diagrams (OBDDs) for tractable weighted model counting (WMC), converts these OBDDs into Arithmetic Circuits (ACs), and decomposes these into Mixed Integer Programs (MIPs), which in turn serve as input for an off-the-shelf MIP or CP solver that solves the SCOP.
The main contribution of this paper is a modification of the last step in this pipeline. While in earlier work constraint optimization solvers were used as black boxes on decomposed OBDDs, in this work we propose to open the black box. We will demonstrate that the propagation used in constraint satisfaction solvers is not optimal for the constraints resulting from decomposition. Specifically, we will show that constraint propagation is not domain consistent: a search algorithm will branch over variables unnecessarily. To address this flaw, we first introduce a naïve propagation algorithm over OBDDs that is domain consistent, and whose worst-case complexity is $O(nm)$, where $m$ is the size of the OBDD and $n$ is the number of decision variables. Note that propagation is executed at every node of the search tree; any reduction of this complexity affects each node of the search tree. We will then show how to calculate partial derivatives over the OBDDs [\citeauthoryearDarwiche2003], and use these derivatives to reduce the complexity of domain consistent propagation to $O(m)$. Here we build on earlier results for linear derivative computation on computational graphs [\citeauthoryearIri1984, \citeauthoryearRote1990] and computation graphs for the deterministic Decomposable Negation Normal Form (d-DNNF) [\citeauthoryearDarwiche2001]. This is a more efficient approach to the calculation of derivatives than the one proposed in [\citeauthoryearGutmann et al.2008]. Furthermore, we will argue that our approach enables the creation of incremental constraint propagation algorithms, allowing propagation that is more efficient than $O(m)$ in practice. Our method assumes the stochastic constraint to have a particular monotonic property, which we discuss in more detail in section 2.
In this paper, we first give a description of how typical SCOPs can be modeled using SCProbLog, followed by a discussion on how they can be solved. In section 4 we provide a short introduction to some key concepts of CP, which we use in section 5 to introduce a proposal for an OBDDbased stochastic constraint propagator for CP systems. We conclude this work with an outlook on future research.
2 Modeling SCOPs with SCProbLog
The goal of SCProbLog [\citeauthoryearLatour et al.2017] is to provide a generic system for modeling and solving SCOPs. In this section we give an example SCOP and explain how it can be modeled using SCProbLog. Before we address that, let us first define the kinds of SCOP that we consider in this work.
Problem Definition
We consider problems that are defined on two types of variables: decision variables and mutually independent stochastic variables (denoted in this work as $d$ and $t$ variables, respectively). The problems involve a (stochastic) objective function and a set of (stochastic) constraints, all of which can be expressed in terms of these variables. We consider an optimization criterion or constraint to be stochastic if its definition involves stochastic variables. The aim is to find an assignment $\sigma$ to the decision variables (also referred to as a strategy) such that the constraints are respected and the objective is optimized.
In this work we restrict our focus to variables that can take Boolean values. We can assign a value of true or false to decision variables, while the values of stochastic variables are determined by chance, mutually independently, as characterized by their associated probabilities.
We consider a selection of constraints and objective functions. In particular, we consider constraints that represent a bound on expected utilities and objective functions that maximize or minimize an expected utility, e.g.:
$\sum_i p_i \cdot r_i \geq \theta$ \quad (stochastic constraint) \quad (1)

$\text{maximize } \sum_i p_i \cdot r_i$ \quad (stochastic optimization criterion)
where $p_i$ either represents the value of a decision variable $d_i$, or a conditional probability $P(\phi_i \mid \sigma)$. Here $\phi_i$ represents an event, and the conditional probability represents the probability of that event happening (i.e. $\phi_i$ evaluating to true), given a strategy $\sigma$. With each term we associate a reward $r_i$, such that the expressions in equation 1 represent expected utilities. For simplicity we will assume $r_i = 1$ in this work, but note that generalizing our approach to $r_i \in \mathbb{R}$ is trivial. Finally, $\theta$ is a threshold for the constraint.
Intuitively, in the optimization criterion of the viral marketing problem, $P(\phi_i \mid \sigma)$ represents the probability of the event that person $i$ buys a product, given a marketing strategy. The marketing strategy is represented by an assignment $\sigma$ to the decision variables.
In this work we impose an additional monotonicity condition on each probability $P(\phi \mid \sigma)$: we require that for any strategy $\sigma$, switching the value of any decision variable from false to true yields a probability that is not smaller: $P(\phi \mid \sigma) \leq P(\phi \mid \sigma')$, if $\sigma'$ differs from $\sigma$ by one variable that is true in $\sigma'$ but false in $\sigma$. This condition is met in all the example problems mentioned earlier.
In this work we will consider solving stochastic constraints rather than stochastic optimization criteria. However, it is easy to use our results in optimization as well: we can solve a problem involving the optimization criterion in equation 1 by repeatedly solving a constraint satisfaction problem involving the constraint in equation 1, increasing $\theta$ each time we have found a solution, until we find a $\theta$ for which there exists no solution.
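The loop just described can be sketched as follows; `solve` is a hypothetical stand-in for any SCOP satisfaction solver, and the fixed threshold step is an illustrative choice:

```python
def optimize_by_satisfaction(solve, theta_step=0.05):
    """Maximize an expected utility by repeatedly solving the
    satisfaction problem with an increasing threshold theta.

    `solve(theta)` is a stand-in for a SCOP satisfaction solver: it
    returns a strategy whose score meets the threshold, or None if
    no such strategy exists.
    """
    best, theta = None, 0.0
    while True:
        solution = solve(theta)
        if solution is None:
            return best          # the previously found solution is optimal
        best = solution
        theta += theta_step      # tighten the constraint and retry

# Toy stand-in for illustration: the best achievable score is 0.6.
def toy_solve(theta):
    return "sigma*" if theta <= 0.6 else None
```

With the toy solver, the loop keeps raising the threshold until satisfaction fails, and returns the last satisfying strategy.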
An Example SCOP
Consider the network in figure 1, and suppose that information can flow through each edge with a certain probability. We can formulate a theory compression problem as described in section 1 on this network. Suppose we want to maximize the sum of probabilities that information can flow from $a$ to $c$ and from $a$ to $d$, but we want to limit the number of edges in the network, such that there are no more than 2 (cardinality constraint). We can model this as follows:
- with each edge $(x, y)$ in the network we associate a stochastic variable $t_{xy}$ and a decision variable $d_{xy}$;
- with each stochastic variable $t_{xy}$ we associate a probability $p_{xy}$;
- the events considered are $\phi_1$ and $\phi_2$, which represent flow of information from $a$ to $c$ and from $a$ to $d$;
- our objective is to find a $\sigma$ that maximizes $P(\phi_1 \mid \sigma) + P(\phi_2 \mid \sigma)$;
- our constraint is $\sum_{xy} d_{xy} \leq 2$.
Subsequently, we need to define the probability of events $\phi_1$ and $\phi_2$, given a strategy. Here, we use a WMC approach. We use a logical formula to represent when an event is true, given an assignment to the decision variables and a sample for the stochastic variables, e.g. for $\phi_1$:

$\phi_1 = (t_{ac} \wedge d_{ac}) \vee (t_{ad} \wedge d_{ad} \wedge t_{cd} \wedge d_{cd}) \vee (t_{ab} \wedge d_{ab} \wedge t_{bd} \wedge d_{bd} \wedge t_{cd} \wedge d_{cd})$ \quad (2)

Here, if $t_{xy}$ and $d_{xy}$ are true, then information can travel through edge $(x, y)$. The logical formula represents all the ways in which information can travel from $a$ to $c$.
The probability $P(\phi_1 \mid \sigma)$ is then defined as the sum of the probabilities of all the (logical) models of this formula. Given strategy $\sigma$, the probability of a model is the product of $p_{xy}$ for every stochastic variable that is true in it and $1 - p_{xy}$ for every one that is false; in principle, we sum the probabilities of all such models to obtain $P(\phi_1 \mid \sigma)$. Note that equation 2 indeed has a monotonic property: the more decision variables are true, the higher the probability of the event is.
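To make this concrete, a brute-force WMC sketch in Python; the edge probabilities follow the example program, while the formula `phi` is our reading of the paths from $a$ to $c$ in the example network, so treat it as an illustrative assumption:

```python
from itertools import product

# Edge probabilities of the stochastic variables (from the example program).
probs = {"ab": 0.7, "ad": 0.8, "bd": 0.5, "ac": 0.4, "cd": 0.1}

def phi(t, d):
    """Event phi_1: information can flow from a to c, either directly,
    via d, or via b and d (our reading of the example network)."""
    e = {k: t[k] and d[k] for k in t}     # edge present and selected
    return (e["ac"]
            or (e["ad"] and e["cd"])
            or (e["ab"] and e["bd"] and e["cd"]))

def probability(d):
    """P(phi_1 | sigma): sum the weights of all models (brute-force WMC)."""
    names, total = list(probs), 0.0
    for values in product([False, True], repeat=len(names)):
        t = dict(zip(names, values))
        if phi(t, d):                     # t is a model of the formula
            weight = 1.0
            for name, value in t.items():
                weight *= probs[name] if value else 1 - probs[name]
            total += weight
    return total
```

With all decision variables true this sums to about 0.45; dropping the direct edge lowers the probability, illustrating the monotonicity in the decision variables.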
To program such formulas in a generic manner, as well as to define constraints and optimization criteria, we proposed SCProbLog [\citeauthoryearLatour et al.2017], which is also based on weighted model counting. The following SCProbLog program would model the problem described above:
% Deterministic facts
1.  node(a). node(b). node(c). node(d).
% Probabilistic facts
2.  0.7::t(a,b). 0.8::t(a,d). 0.5::t(b,d).
3.  0.4::t(a,c). 0.1::t(c,d).
% Decision variables
4.  ?::d(a,b). ?::d(a,d). ?::d(b,d).
5.  ?::d(a,c). ?::d(c,d).
% Relations
6.  e(X,Y) :- t(X,Y), d(X,Y). e(Y,X) :- t(X,Y), d(X,Y).
7.  path(X,Y) :- e(X,Y).
8.  path(X,Y) :- X \= Y, e(X,Z), path(Z,Y).
% Constraints and optimization criteria
9.  { d(X,Y) => 1 : node(X), node(Y). } 2.
10. #maximize { path(a,c) => 1. path(a,d) => 1. }.

Here, we define the nodes in the network on line 1. Lines 2 and 3 associate the correct probability with each edge; these are the stochastic variables. We define the decision variables in lines 4 and 5. Edges are made undirected in line 6, and we give the definition of a path in lines 7 and 8. In line 9 we define the constraint: we assign a utility of 1 to each decision variable that is true, and bound the sum by 2. We also specify that we only allow decision variables that reflect edges between nodes that are actually present in the network. Finally, line 10 represents the optimization criterion: we assign a utility of 1 to there being a path from $a$ to $c$ and to there being a path from $a$ to $d$. The utilities are summed, weighted by the actual probability of there being such paths. The logical formulas for path(a,c) and path(a,d) are constructed from the program by ProbLog.
An interesting feature of SCProbLog is that any program that does not contain negation or negative weights represents a monotonic utility function. We restrict our attention in this work to such functions.
In the next section we briefly discuss how to compute the probabilities of such formulas efficiently and how to solve the SCOP of which they are a part.
3 Solving SCOPs using CP
We assume that the reader is familiar with ProbLog (https://dtai.cs.kuleuven.be/problog); readers lacking that familiarity are referred to the literature, e.g. [\citeauthoryearDe Raedt, Kimmig, and Toivonen2007, \citeauthoryearFierens et al.2015]. We start this section with a short recap of why ProbLog uses knowledge compilation to obtain OBDDs; subsequently, we discuss how OBDDs can be used to naïvely solve the associated SCOP. Then we discuss the earlier proposed tool chain for solving SCOPs [\citeauthoryearLatour et al.2017] and reflect on it.
From ProbLog to OBDD
Consider equation 2, and observe that computing $P(\phi_1 \mid \sigma)$ is complicated: the different paths need to be enumerated, but may also overlap. Therefore, computing this probability involves a disjoint-sum problem; in the general case WMC is #P-complete [\citeauthoryearRoth1996].
In ProbLog the tractability of this task is addressed by compiling the formulas during a preprocessing phase into a Sentential Decision Diagram (SDD) [\citeauthoryearDarwiche2011] or OBDD that allows for tractable WMC. The advantage of this method is that, once this diagram is compiled, computing $P(\phi \mid \sigma)$ has a complexity that is linear in the size of the diagram, thus reducing the complexity of the WMC task (at the cost of having to preprocess the formula). This work focuses on stochastic constraints that can be expressed by OBDDs. We assume familiarity with OBDDs, for we will only discuss a few of their characteristics here. For a more extensive overview, see for example [\citeauthoryearBenAri2012].
To see how we can compute $P(\phi \mid \sigma)$ using an OBDD, consider figure 2. It shows an OBDD that represents the probability of equation 2 evaluating to true. The weights on the outgoing arcs of nodes that represent stochastic variables (those labeled with $t$) correspond to the probability that that variable is true (for the solid, or hi, arcs) or false (dashed, or lo, arcs). A strategy is represented in the OBDD by adding weights of 0 and 1 to the outgoing arcs of the nodes corresponding to decision variables (those labeled with $d$). For example: if we choose to set a decision variable to false, we put a weight of 0 on the outgoing hi arcs of the nodes labeled with that variable and a weight of 1 on their outgoing lo arcs.
Given a strategy and arcs labeled accordingly, the OBDD can straightforwardly be mapped to an Arithmetic Circuit (AC). We can compute $P(\phi \mid \sigma)$ as follows. In a bottom-up traversal, each OBDD node $n$ takes the value

$\mathit{val}(n) = w_{hi}(n) \cdot \mathit{val}(hi(n)) + w_{lo}(n) \cdot \mathit{val}(lo(n))$ \quad (3)

where $hi(n)$ ($lo(n)$) is the hi (lo) child of $n$, i.e. the child connected through the solid (dashed) outgoing arc of $n$, and $w_{hi}(n)$ ($w_{lo}(n)$) is the weight on that arc; $\mathit{val} = 0$ for the negative leaf and $\mathit{val} = 1$ for the positive leaf. Observe that $P(\phi \mid \sigma) = \mathit{val}(\mathit{root})$.
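A minimal sketch of this bottom-up evaluation; the `(var, lo, hi)` node encoding and the small OBDD below, for $(t_1 \wedge d_1) \vee (t_2 \wedge d_2)$, are our own illustrative choices, not ProbLog's internal representation:

```python
def evaluate(obdd, root, weight):
    """Evaluate equation 3 bottom-up. Each internal node is a
    (var, lo, hi) tuple; the leaves are the Python booleans False
    (value 0) and True (value 1). `weight(var)` returns the
    (w_lo, w_hi) arc weights for that variable."""
    memo = {False: 0.0, True: 1.0}
    def val(n):
        if n not in memo:
            var, lo, hi = obdd[n]
            w_lo, w_hi = weight(var)
            memo[n] = w_hi * val(hi) + w_lo * val(lo)
        return memo[n]
    return val(root)

# OBDD for (t1 and d1) or (t2 and d2), variable order d1 < t1 < d2 < t2.
obdd = {
    "n1": ("d1", "n3", "n2"),
    "n2": ("t1", "n3", True),
    "n3": ("d2", False, "n4"),
    "n4": ("t2", False, True),
}
probs = {"t1": 0.7, "t2": 0.5}
strategy = {"d1": True, "d2": True}

def weight(var):
    if var in probs:                      # stochastic node
        return 1 - probs[var], probs[var]
    return (0.0, 1.0) if strategy[var] else (1.0, 0.0)
```

With both decision variables true this yields $1 - (1 - 0.7)(1 - 0.5) = 0.85$, as expected for the disjunction of two independent conjuncts.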
The complexity of evaluating $P(\phi \mid \sigma)$ is thus linear in the size of the OBDD, but the number of strategies is $2^n$, with $n$ the number of decision variables. The naïve way of solving a SCOP is to enumerate all possible strategies, use the OBDD to evaluate the objective function and/or constraints for each strategy, evaluate possible other constraints, and store the best feasible strategy found so far. Since the number of strategies is exponential in the number of decision variables, this naïve method does not scale well.
Solving SCOPs with the SCProbLog tool chain
Since SCOPs are constraint optimization problems, one obvious approach to improving on the naïve method is to leverage the state-of-the-art CP solvers that are available. The tool chain described in [\citeauthoryearLatour et al.2017] takes the OBDD generated by ProbLog and, instead of assigning weights to the outgoing arcs of the nodes in the OBDD that represent decision variables, converts the OBDD into an AC in which those weights are present as Boolean decision variables.
A constraint is imposed on the value of the AC, and the AC is then decomposed into a Mixed Integer Program (MIP): a set of smaller constraints is constructed that represents the value at each node of the OBDD according to equation 3. See figure 3 for an example of what such a MIP may look like.
As mentioned in section 1, this method has a disadvantage: during the search process, the solver cannot guarantee domain consistency on the MIP representing the constraint. We propose an alternative to this decomposition method in section 5, but first make some basic CP concepts, including domain consistency, more concrete.
4 Introduction to Constraint Programming
Constraint programming is an area that studies the development of modeling languages and solvers for constraint satisfaction and optimization problems. Two processes form the basis of Constraint Programming solvers: search and propagation. We briefly discuss these concepts, for they are critical to understanding our contributions in this work. For a more comprehensive overview of CP, we refer the reader to the literature, e.g. Principles of Constraint Programming [\citeauthoryearApt2003]. Then we continue with a discussion of the relation between these principles and the circuit decomposition method [\citeauthoryearLatour et al.2017].
Search and Propagation
The search process is some structured method for exploring the search space of the problem. In our SCOP setting, the search space consists of all possible assignments to the (binary) decision variables, from which we need to find one that satisfies the constraints and optimizes the objective function.
The details of the search process are outside the scope of this work, but for search over binary variables the process is roughly as follows. Initially, all variables are considered to be free or unassigned; they have a domain of $\{\mathit{false}, \mathit{true}\}$. Then repeatedly a free decision variable is selected and fixed to a value (either true or false). After each such assignment, propagation is used to determine whether other variables can be fixed. Propagation is the process of updating the domains of the other free variables, making them reflect the consequences of the assignments made to decision variables (the fixed variables) so far. If propagation yields a contradiction, the search backtracks over the last variable assignment; otherwise, if a free variable remains, its value is fixed and the search process continues.
The constraints of the problem guide the propagation. For example: the problem may contain a cardinality constraint that puts an upper bound of $k$ on the number of variables that can be set to true. Suppose that during search, a variable is selected and fixed to true, becoming the $k$-th decision variable set to true. Now we know that the value true should be removed from the domain of each remaining free variable. This reduces the search space by making domains smaller.
During propagation two things can happen (possibly simultaneously). The first is that the domain of a free variable becomes empty. This means there is no solution given the current partial assignment to the decision variables, so we must backtrack to explore a different part of the search space. Alternatively, the domain size of a free variable is reduced to 1, leaving only one possible value for that variable (given the current partial assignment). Such a variable can then be fixed and removed from the set of free variables, reducing the search space by reducing the number of free variables.
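As a sketch, a propagator for the at-most-$k$ cardinality constraint described above, with domains represented as Python sets (an illustrative encoding):

```python
def at_most_k_propagate(domains, k):
    """Propagate an at-most-k-true cardinality constraint over Boolean
    variables. `domains` maps each variable to its current domain.
    Returns None on a contradiction, otherwise the updated domains."""
    n_true = sum(1 for dom in domains.values() if dom == {True})
    if n_true > k:
        return None                      # contradiction: backtrack
    if n_true == k:                      # budget used up:
        for var, dom in domains.items():
            if dom == {False, True}:
                domains[var] = {False}   # remove true from free variables
    return domains
```

When the $k$-th variable is fixed to true, every remaining free variable collapses to a singleton domain and can itself be fixed, exactly the second outcome described above.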
There are myriad optimizations for both search and propagation, but these are outside the scope of this work. Observe that both the nature of the search and of the propagation depend on the type of variables and on the nature of the constraint. In this work we focus on developing a propagator that enforces domain consistency on OBDDs.
Domain Consistency
An important notion in propagation is that of domain consistency. We define it as follows:
Definition 1.
Let $C$ be a constraint over Boolean variables $d_1, \ldots, d_n$. Furthermore, let $\alpha$ be a partial assignment to the variables $d_1, \ldots, d_n$. Then a propagator for constraint $C$ is domain consistent if for any $\alpha$ this propagator calculates a new partial assignment $\alpha'$ satisfying these conditions: (1) $\alpha'$ extends $\alpha$; (2) for all variables $d_i$ not assigned by $\alpha'$, both the partial assignments $\alpha' \cup \{d_i \mapsto \mathit{true}\}$ and $\alpha' \cup \{d_i \mapsto \mathit{false}\}$ can be extended to a complete assignment that satisfies the constraint $C$.
In other words, after domain consistent propagation for a constraint, all values have been removed from all variable domains that cannot be part of a solution for that constraint.
We illustrate this notion with an example. A standard practice in CP is to call the propagator before the search starts, in order to make the initial domains consistent with the constraint, and, ideally, detect the variables that are forced to a specific value by the constraint.
Consider the OBDD in figure 3 and the associated constraint $P(\phi \mid \sigma) \geq \theta$. Observe that the four possible strategies yield conditional probabilities that are monotonic in the decision variables.
From these probabilities we conclude that only those strategies in which one particular decision variable is true can possibly satisfy the constraint. A propagator that ensures domain consistency will detect this before the start of the search and fix that variable to 1.
The circuit decomposition method translates this constraint on the OBDD into a CP model that is also given in figure 3. Suppose a propagator is called on this decomposed model before the search starts. This propagator may start by trying to infer the minimum value one node variable needs to take if another takes its maximum possible value. To do this, the propagator assumes for a moment that that maximum holds. Unfortunately, the bound it infers this way lies within the domain the node variable already has, so nothing new is learned, and we cannot remove 0 from the domain of any decision variable. Repeating this procedure for the remaining node variables, each time assuming another variable takes its maximum value, does not yield conclusive evidence to deduce that the decision variable must be fixed to 1, either.
This shortcoming of the circuit decomposition method causes a lack of efficiency, since the search space is not reduced as much as possible. In the next section we introduce a propagator for OBDDs that does ensure domain consistency.
5 Approach
We intend to improve upon the existing circuit decomposition approach for solving SCOPs, by allowing an OBDDbased constraint to be added directly to a CP solver, rather than decomposed into a multitude of (linear) constraints. In order to achieve this, we need to introduce a propagator for OBDDs. As discussed in section 4, this propagator should guarantee domain consistency in the OBDD.
In this section we will first introduce a naïve approach for such a domain consistent propagator. Subsequently, we will show how to obtain a better worstcase complexity by using the idea of derivatives.
Naïve Propagator
As discussed earlier, we can calculate the quality of any strategy with an algorithm that traverses the OBDD bottomup, using equation 3.
For the creation of a domain consistent propagator, our first important observation is that our scoring function is monotonic; hence, the largest possible score is obtained by assigning the value true to all free decision variables.
The idea behind domain consistent propagation is to repeat the following process for each free decision variable $d$:
- fix variable $d$ to the value false;
- fix all other free variables to the value true;
- calculate the score for the resulting assignment;
- if the score is lower than the desired threshold $\theta$, remove the value false from the domain of variable $d$.
By construction, this process is domain consistent.
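A sketch of this naïve propagator, assuming a simple `(var, lo, hi)` tuple encoding for OBDD nodes with the Python booleans as leaves (an illustrative choice):

```python
def naive_propagate(obdd, root, probs, fixed, free, theta):
    """Domain-consistent propagation for a monotonic constraint
    P(phi | sigma) >= theta: for each free decision variable, score
    the assignment that sets it to false and all other free ones to
    true; if the score drops below theta, the variable is forced."""
    def score(assignment):
        memo = {False: 0.0, True: 1.0}       # leaf values
        def val(n):
            if n not in memo:
                var, lo, hi = obdd[n]
                if var in probs:             # stochastic node
                    w_lo, w_hi = 1 - probs[var], probs[var]
                else:                        # decision node
                    w_lo, w_hi = ((0.0, 1.0) if assignment[var]
                                  else (1.0, 0.0))
                memo[n] = w_hi * val(hi) + w_lo * val(lo)
            return memo[n]
        return val(root)

    forced = set()
    for d in free:
        trial = dict(fixed)
        for other in free:
            trial[other] = other != d        # d -> false, rest -> true
        if score(trial) < theta:
            forced.add(d)                    # false removed from d's domain
    return forced
```

Each candidate variable triggers one full bottom-up evaluation of the OBDD, which is exactly the $O(nm)$ behaviour discussed next.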
Let $n$ be the number of free decision variables, and let $m$ be the size of the OBDD. Then the complexity of the algorithm above is $O(nm)$: for every free variable we perform a bottom-up traversal of the OBDD. Given that propagation is the most computationally intensive part of search algorithms under our constraint, it is important to obtain better performance. We will improve this complexity to $O(m)$, using an approach similar to that for d-DNNFs [\citeauthoryearDarwiche2001].
Overview of our Propagator
The key idea behind our improved propagator is that we calculate a derivative

$\delta_i = \frac{\partial f}{\partial d_i}(\sigma^{+})$ \quad (4)

for every free decision variable $d_i$. Here, $\sigma^{+}$ represents a full assignment to all decision variables in which every free variable is assumed to have the value true. Function $f$ represents the function defined by equation 3 on the root of the OBDD. Hence, $f(\sigma^{+})$ represents the best score currently possible, in which all free variables have been given the value true; since $f$ is linear in each decision variable, $f(\sigma^{+}) - \delta_i$ is the score of the assignment in which the value for variable $d_i$ has been switched to false.
We use the derivative to remove the value false from the domains of variables that do not meet this requirement:

$f(\sigma^{+}) - \delta_i \geq \theta$ \quad (5)
Clearly, the main question becomes how to calculate $\delta_i$ for all free variables efficiently. Here, we will build on ideas introduced by Darwiche in 2003 [\citeauthoryearDarwiche2003] to build an $O(m)$ algorithm. This algorithm adapts the ideas of Darwiche to our specific context; we will argue that this enables us to perform propagation for monotonic constraints in an incremental manner, effectively making the complexity lower than $O(m)$ in practice.
Calculating the Derivative
We first need to define the concept of path weight:
Definition 2.
Let $n$ be a node labeled with variable $x_i$ in an OBDD with variable order $x_1, \ldots, x_k$. We define the path weight of $n$:

$\mathit{pw}(n) = \sum_{\pi \in \Pi(n)} \prod_{a \in \pi} w(a)$ \quad (6)

where $\pi$ is a path from the root of the OBDD to $n$, and $\Pi(n)$ is the set of all such paths that are valid. A path is valid if it does not include
- the hi arc from a node labeled with a decision variable that is false, and
- the lo arc from a node labeled with a decision variable that is true or free.
In other words: we take paths that reflect the current partial assignment, and take the hi arc from free decision nodes.
In our definition of $\mathit{pw}(n)$, we use a weight of 1 for the outgoing arcs of decision nodes that can be part of a valid path.
For the outgoing arcs of stochastic nodes labeled with a stochastic variable that has weight $p$ as defined in the ProbLog program, we use:

$w(a) = p$ if $a$ is a hi arc, $\quad w(a) = 1 - p$ if $a$ is a lo arc. \quad (7)
Note that the path weight of a node labeled with $x_i$ is expressed in terms of variables that precede $x_i$ in the variable order only.
An example: if in figure 2 we fix to true the decision variables above some node, then the path weight of that node is the sum, over all valid paths from the root to the node, of the products of the stochastic arc weights along those paths.
Our algorithm is based on the observation that derivatives can be calculated using the following equation:
Theorem 1.
The derivative of the OBDD polynomial with respect to a decision variable $d_i$ can be calculated as follows:

$\delta_i = \sum_{n \in N_i} \mathit{pw}(n) \cdot \big( \mathit{val}(hi(n)) - \mathit{val}(lo(n)) \big)$ \quad (8)

where $N_i$ represents all nodes in the OBDD labeled with variable $d_i$.
Proof.
Let $f$ be the polynomial associated with an OBDD with variable order $x_1, \ldots, x_k$. Let $n$ be a node labeled with $d_i$, and recall that the hi and lo arcs of such a node carry the weights $d_i$ and $1 - d_i$, respectively. Observe that for any decision variable $d_i$ we can write $f$ as

$f = \sum_{n \in N_i} \mathit{pw}(n) \cdot \big( d_i \cdot \mathit{val}(hi(n)) + (1 - d_i) \cdot \mathit{val}(lo(n)) \big) + c$ \quad (9)

where $\mathit{val}(hi(n))$ and $\mathit{val}(lo(n))$ are the values of the hi and lo child of $n$, respectively, following equation 3, and the constant $c$ collects the contributions of valid paths that do not pass through any node labeled with $d_i$. Recall that $d_i$ does not occur in the expression for the path weight of $n$. Note also that $\mathit{val}(hi(n))$ and $\mathit{val}(lo(n))$ are expressed in variables that come after $d_i$ in the variable order only. The derivative of this formula with respect to $d_i$ corresponds to the claim in the theorem. ∎
We use the observation above to create an algorithm for calculating all derivatives in two stages:
- a top-down pass over the complete OBDD for calculating all path weights;
- a bottom-up pass for calculating the $\mathit{val}$ values for all nodes in the complete OBDD, calculating the derivatives for each variable in the process.
The pseudo codes for these passes are given in algorithms 2 and 1, respectively.
Once these passes are completed, we can compute the derivatives for all decision variables that are still free, and evaluate equation 5 for each of those to see if we can remove false from their domain, enforcing domain consistency. The pseudo code for this is provided in algorithm 3 for clarity, but can be integrated with algorithm 2. Clearly, the overall calculation finishes in $O(m)$ time.
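The two passes and the final check can be sketched as follows, assuming a `(var, lo, hi)` tuple encoding with the Python booleans as leaves and node ids listed in topological order, root first (illustrative choices; the final loop scans all nodes per variable for brevity, whereas grouping nodes by label first gives the $O(m)$ bound):

```python
def derivative_propagate(nodes, obdd, probs, fixed, free, theta):
    """O(m)-style domain-consistent propagation via derivatives.

    `nodes` lists internal node ids in topological order, root first;
    `obdd[n] = (var, lo, hi)`. All free decision variables are taken
    to be true (the assignment sigma+)."""
    def arc_weights(var):
        if var in probs:                       # stochastic node
            return 1 - probs[var], probs[var]
        value = fixed.get(var, True)           # free -> true
        return (0.0, 1.0) if value else (1.0, 0.0)

    # Top-down pass: path weights of equation 6.
    pw = {n: 0.0 for n in nodes}
    pw[nodes[0]] = 1.0
    for n in nodes:
        var, lo, hi = obdd[n]
        w_lo, w_hi = arc_weights(var)
        for child, w in ((lo, w_lo), (hi, w_hi)):
            if child in pw:                    # skip the leaves
                pw[child] += w * pw[n]

    # Bottom-up pass: node values of equation 3.
    val = {False: 0.0, True: 1.0}
    for n in reversed(nodes):
        var, lo, hi = obdd[n]
        w_lo, w_hi = arc_weights(var)
        val[n] = w_hi * val[hi] + w_lo * val[lo]
    best = val[nodes[0]]                       # f(sigma+)

    # Derivatives (equation 8) and the check of equation 5.
    forced = set()
    for d in free:
        delta = sum(pw[n] * (val[obdd[n][2]] - val[obdd[n][1]])
                    for n in nodes if obdd[n][0] == d)
        if best - delta < theta:
            forced.add(d)                      # false removed from domain
    return forced
```

Both passes touch every node once, so the cost of propagating all free variables is bounded by the size of the diagram rather than growing with the number of variables.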
Traversing Part of the OBDD
For efficient propagation, it is desirable that the complexity of the algorithm above can be reduced further; we should avoid traversing unnecessary parts of the OBDD as much as possible. Building on the ideas presented earlier, some observations allow for more efficient propagation in practice.
As we observed before, the expression for the path weight of an OBDD node labeled with variable $x_i$ (equation 6) only contains variables that precede $x_i$ in the order. We thus conclude that fixing a decision variable can only affect the path weights of nodes below the nodes labeled with that variable.
Moreover: because we take the hi arc both from decision nodes that are free and from those that are true, path weights below free decision nodes are not changed at all when we fix a decision node to true.
Therefore: whenever we fix a decision variable, our propagator only needs to call algorithm 1 if we fix it to false, and even then it only has to traverse the part of the diagram that is below the nodes labeled with that decision variable.
A similar argument holds for the values of the OBDD nodes. Since they are computed in a bottomup traversal of the OBDD, fixing a variable can only affect the values of the nodes labeled with that variable themselves, and those above them in the diagram. Again: only fixing a variable to false actually requires the propagator to update values at all.
We can further narrow down the parts of the diagram that need to be considered. Consider the decision variable that occurs closest to the root of the OBDD. We do not need to maintain the values for any of the nodes in the OBDD above it, as we will never need to calculate the derivative for any variable in this part of the diagram. Similarly, consider the variable closest to the leaves; we do not need to maintain path weights for its descendants either. It can be shown that by only maintaining the part of the OBDD between two borders (the active part of the OBDD), one can calculate the derivatives exactly, as well as calculate the true value of the optimization criterion without propagating towards the root.
6 Conclusion and Outlook
Many problems in AI can be seen as SCOPs. In this work we proposed a new method for solving SCOPs that are modeled using PLP techniques, specifically SCProbLog [\citeauthoryearDe Raedt, Kimmig, and Toivonen2007, \citeauthoryearLatour et al.2017]. In SCProbLog, we can convert the SCOP’s stochastic constraints into constraints on OBDDs. This work was motivated by the observation that an earlier approach was not built on domain-consistent propagation. We sketched a propagator for such OBDD constraints that does enforce domain consistency in linear time.
We limited our attention to a representation of monotonic distributions in OBDDs. The advantage of this representation is that we can clearly identify the parts of the diagram above and below a decision variable; we argued that this can be used both to limit the active part of the diagram and to limit which type of calculation is performed on which part of the diagram.
Several details were omitted from this paper. We did not include extensive details regarding the maintenance of active parts of OBDDs or the incremental calculation of optimization criteria. Furthermore, we did not include an extension to constraints on sums of probabilities, or to other forms of constraints.
Concrete next steps are the implementation of our approach, its evaluation on data, a comparison of different approaches for maintaining active parts of OBDDs, and its extension to other types of diagrams.
Acknowledgements.
This research was supported by the Netherlands Organisation for Scientific Research (NWO). Behrouz Babaki is supported by a postdoctoral scholarship from IVADO.
References
 [\citeauthoryearApt2003] Apt, K. 2003. Principles of Constraint Programming. Cambridge: Cambridge University Press.
 [\citeauthoryearBen-Ari2012] Ben-Ari, M. 2012. Mathematical Logic for Computer Science. Springer Publishing Company, Incorporated, 3rd edition.
 [\citeauthoryearCharnes and Cooper1959] Charnes, A., and Cooper, W. W. 1959. Chance-Constrained Programming. Management Science 6(1):73–79.
 [\citeauthoryearDarwiche2001] Darwiche, A. 2001. On the Tractable Counting of Theory Models and its Application to Belief Revision and Truth Maintenance. Journal of Applied Non-Classical Logics 11(1–2):11–34.
 [\citeauthoryearDarwiche2003] Darwiche, A. 2003. A differential approach to inference in Bayesian networks. Journal of the ACM 50(3):280–305.
 [\citeauthoryearDarwiche2011] Darwiche, A. 2011. SDD: A new canonical representation of propositional knowledge bases. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, volume 2 of IJCAI’11, 819–826.
 [\citeauthoryearDe Raedt et al.2008] De Raedt, L.; Kersting, K.; Kimmig, A.; Revoredo, K.; and Toivonen, H. 2008. Compressing probabilistic Prolog programs. Machine Learning 70(2–3):151–168.
 [\citeauthoryearDe Raedt, Kimmig, and Toivonen2007] De Raedt, L.; Kimmig, A.; and Toivonen, H. 2007. ProbLog: A Probabilistic Prolog and Its Application in Link Discovery. In Proceedings of the Twentieth International Joint Conference on Artificial Intelligence, volume 7 of IJCAI’07, 2462–2467.
 [\citeauthoryearFierens et al.2015] Fierens, D.; Van den Broeck, G.; Renkens, J.; Shterionov, D.; Gutmann, B.; Thon, I.; Janssens, G.; and De Raedt, L. 2015. Inference and learning in probabilistic logic programs using weighted Boolean formulas. Theory and Practice of Logic Programming 15(03):358–401.
 [\citeauthoryearGutmann et al.2008] Gutmann, B.; Kimmig, A.; Kersting, K.; and De Raedt, L. 2008. Parameter Learning in Probabilistic Databases: A Least Squares Approach. Springer, Berlin, Heidelberg. 473–488.
 [\citeauthoryearIri1984] Iri, M. 1984. Simultaneous computation of functions, partial derivatives and estimates of rounding errors: Complexity and practicality. Japan Journal of Applied Mathematics 1(2):223–252.
 [\citeauthoryearKempe, Kleinberg, and Tardos2003] Kempe, D.; Kleinberg, J.; and Tardos, É. 2003. Maximizing the Spread of Influence Through a Social Network. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’03, 137–146. New York, NY, USA: ACM.
 [\citeauthoryearLatour et al.2017] Latour, A. L. D.; Babaki, B.; Dries, A.; Kimmig, A.; Van den Broeck, G.; and Nijssen, S. 2017. Combining Stochastic Constraint Optimization and Probabilistic Programming. In Beck, J. C., ed., Principles and Practice of Constraint Programming: 23rd International Conference, CP 2017, Melbourne, VIC, Australia, August 28 – September 1, 2017, Proceedings. Cham: Springer International Publishing. 495–511.
 [\citeauthoryearOurfali et al.2007] Ourfali, O.; Shlomi, T.; Ideker, T.; Ruppin, E.; and Sharan, R. 2007. SPINE: a framework for signaling-regulatory pathway inference from cause-effect experiments. Bioinformatics 23(13):i359–i366.
 [\citeauthoryearRote1990] Rote, G. 1990. Path Problems in Graphs. Springer, Vienna. 155–189.
 [\citeauthoryearRoth1996] Roth, D. 1996. On the Hardness of Approximate Reasoning. Artificial Intelligence 82(1–2):273–302.
 [\citeauthoryearTarim et al.2009] Tarim, S. A.; Hnich, B.; Prestwich, S.; and Rossi, R. 2009. Finding reliable solutions: event-driven probabilistic constraint programming. Annals of Operations Research 171(1):77–99.
 [\citeauthoryearVan den Broeck et al.2010] Van den Broeck, G.; Thon, I.; Van Otterlo, M.; and De Raedt, L. 2010. DTProbLog: A decision-theoretic probabilistic Prolog. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, 1217–1222. AAAI Press.
 [\citeauthoryearWalsh2002] Walsh, T. 2002. Stochastic constraint programming. In ECAI, volume 2, 111–115.