Adaptive MCMC-Based Inference in Probabilistic Logic Programs


Arun Nampally, C. R. Ramakrishnan
Department of Computer Science, Stony Brook University, Stony Brook, NY 11794
{anampally,cram}@cs.stonybrook.edu
Abstract

Probabilistic Logic Programming (PLP) languages enable programmers to specify systems that combine logical models with statistical knowledge. The inference problem, to determine the probability of query answers in PLP, is intractable in general, thereby motivating the need for approximate techniques. In this paper, we present a technique for approximate inference of conditional probabilities for PLP queries. It is an Adaptive Markov Chain Monte Carlo (MCMC) technique, where the distribution from which samples are drawn is modified as the Markov Chain is explored. In particular, the distribution is progressively modified to increase the likelihood that a generated sample is consistent with evidence. In our context, each sample is uniquely characterized by the outcomes of a set of random variables. Inspired by reinforcement learning, our technique propagates rewards to random variable/outcome pairs used in a sample based on whether the sample was consistent or not. The cumulative rewards of each outcome are used to derive a new “adapted distribution” for each random variable. For a sequence of samples, the distributions are progressively adapted after each sample. For a query with “Markovian evaluation structure”, we show that the adapted distribution of samples converges to the query’s conditional probability distribution. For Markovian queries, we present a modified adaptation process that can be used in adaptive MCMC as well as adaptive independent sampling. We empirically evaluate the effectiveness of the adaptive sampling methods for queries with and without Markovian evaluation structure.

1 Introduction

Probabilistic Logic Programming (PLP) covers a class of Statistical Relational Learning frameworks Getoor and Taskar (2007) aimed at combining logical and statistical reasoning. Examples of languages and systems combining logical and statistical inference include ICL Poole (1997), SLP Muggleton et al. (1996), PRISM Sato and Kameya (1997), LPAD Vennekens and Verbaeten (2003) and ProbLog De Raedt et al. (2007). In addition to standard statistical models, these languages allow reasoning over many models where logical and statistical knowledge is intricately combined, and cannot be expressed as standard statistical models.

An example problem with such a model is reachability over finite probabilistic graphs, i.e., graphs in which the presence or absence of edges is determined by a set of independent probabilistic processes. Fig. 1(a) shows an example of a probabilistic graph, where labels on the edges denote the probability with which that edge is present. For instance, edge (a,b) is present with probability 0.9, while edge (b,e) is present with probability 0.01. The logical relationship between reachability of one node from another (e.g., of e from a) and the underlying edges in the graph cannot be expressed concisely in standard probabilistic frameworks. A PRISM program encoding this problem is shown in Fig. 1(b). (This problem has a simpler encoding in ProbLog, and can be encoded easily in LPAD as well; we use PRISM since it simplifies the description of the techniques.)

As illustrated in Fig. 1(b), PRISM adds probabilistic facts of the form msw(s, i, v), where s is a term representing a random process called a switch, i an instance of the switch, and v its outcome. (The instance number is omitted if only a single instance is used in the program.) The range of a switch is specified by “values” declarations, and its distribution is specified by “set_sw” declarations. A possible world associates an outcome with each switch instance, and can be seen as a set of msw facts external to the program. In each possible world, the PRISM program, together with the msw facts defining the world, is a non-probabilistic program; the distribution over the possible worlds induces a distribution over the models of the PRISM program. Such a declarative distribution semantics, originally defined for ICL and PRISM, has been defined for other PLP languages such as LPAD and ProbLog as well.

[Figure 1(a): a probabilistic graph over nodes a, b, c, d, e; the edge labels give the probability that each edge is present: (a,b) 0.9, (a,c) 0.2, (b,d) 0.8, (b,e) 0.01, (c,d) 0.7, (c,e) 0.1.]

% Possible edges
poss_edge(a,b).
poss_edge(a,c).
poss_edge(b,d).
poss_edge(b,e).
poss_edge(c,d).
poss_edge(c,e).

% Random processes
values(r(_,_), [t,f]).
% Distributions
:- set_sw(r(a,b), [0.9,0.1]).
:- set_sw(r(a,c), [0.2,0.8]).
:- set_sw(r(b,d), [0.8,0.2]).
:- set_sw(r(b,e), [0.01,0.99]).
:- set_sw(r(c,d), [0.7,0.3]).
:- set_sw(r(c,e), [0.1,0.9]).

% Edge:
edge(X,Y) :-
    poss_edge(X,Y),
    msw(r(X,Y),t).
% Reachability on acyclic graphs
reach(X,Y) :- edge(X,Y).
reach(X,Y) :-
    edge(X,Z),
    reach(Z,Y).
Figure 1: Example: (a) Probabilistic Graph; (b) Reachability over probabilistic graphs in PRISM

Motivation.

While PLPs can concisely express such problems, typical implementations of PLP systems have limitations. For instance, PRISM’s standard inference technique is based on enumerating explanations for answers, treating the set of explanations as pairwise mutually exclusive; in fact, due to this limitation the probability of reach(a,e) in the above example cannot be computed in PRISM. ProbLog, and subsequently, PITA Riguzzi and Swift (2011), removed these restrictions; however, exact inference in these systems does not scale beyond graphs with a few hundred vertices.

Of the several powerful sampling-based techniques developed for statistical reasoning, Markov Chain Monte Carlo (MCMC) techniques are especially suited for inference in PLPs, as shown by Cussens (2000) and Moldovan et al. (2013).

The Problem.

PLP queries for evaluating conditional probabilities are called conditional queries and are denoted P(q | e), where q and e are ground atomic goals, called query and evidence, respectively. A conditional query denotes a suitably normalized distribution of q over all possible worlds where e holds. Existing PLP systems either provide efficient techniques that apply to a restricted class of q and e (e.g., hindsight in PRISM) or do not treat evidence specially, leading to poor performance, especially when the likelihood of evidence is low. For instance, consider the problem of determining the probability that a node such as d is reachable from a, given that e is reachable from a, over the probabilistic graph in Fig. 1(a). Techniques such as the one proposed by Moldovan et al. (2013) will generate a world and reject it if evidence does not hold in the world. Since the probability that e is reachable from a is low (about 0.03 in this graph), a large percentage of generated worlds will be inconsistent with the evidence, and hence unusable for computing the conditional probability.

The problem of efficiently estimating the conditional probability, even when the likelihood of evidence is low, has remained unaddressed in the context of PLP. We explore this problem in this paper, by developing an Adaptive Markov Chain Monte Carlo (AMCMC) technique. Following adaptive MCMC techniques in statistical reasoning, we progressively modify the distribution from which samples are derived; we modify the distribution so as to favor those samples that are consistent with evidence. The adaptive sampler reduces the number of generated samples needed to estimate the conditional probability to a given precision.

Approach Overview.

Our technical development starts with an MCMC technique where each state of the Markov chain is an assignment of values to a set of switch instances. An assignment at a state corresponds to a set of possible worlds such that the truth values of evidence and query are identical in all the worlds in the set. Transitions are proposed on this chain by resampling one or more switch instances in the state and extending the resulting assignment to another state. A Metropolis-Hastings sampler Hastings (1970) is used to accept or reject this proposal, yielding the next state in the chain.

To this basic MCMC technique, we introduce adaptation as follows. For each switch instance/outcome pair, we maintain a Q-value, which is the likelihood that an evaluation of the evidence goal using that switch instance/outcome will succeed. Q-values are computed by maintaining the sequence of switch instances and outcomes used to evaluate the evidence goal, and propagating rewards through this sequence depending on the success or failure of the evaluation. The adapted distribution of a switch instance is proportional to the original distribution weighted by the (normalized) Q-values of each outcome.

Although motivated by problems where the likelihood of evidence is low, the technique we describe is more generally applicable, even to unconditional queries.

Summary of Contributions.

  1. We define an MCMC procedure where states of the Markov chain are sets of possible worlds. This procedure is largely independent of LP evaluation itself, and hence can be used for approximate inference in probabilistic logic programs extended with tabling, constraint handling, or other features (Section 3).

  2. We define an adaptation procedure to modify the distribution from which samples are drawn. The aim of the adaptation is to increase the likelihood that a sample will be consistent with given evidence (if any). We show that the adaptation satisfies the “diminishing adaptation” condition and hence can be used effectively to adapt an MCMC procedure (Section 4).

  3. For a class of queries satisfying a “Markovian evaluation structure”, the adapted distribution of a random variable coincides with its marginal. For such queries, we obtain an alternative adaptation procedure that can be used to obtain an adaptive independent sampler (Section 4).

We describe the results of our preliminary experiments to evaluate the MCMC procedure as well as the adaptation procedure in Section 5.

The rest of the paper begins with a brief overview of MCMC in Section 2. A more detailed description of related work and concluding remarks appear in Section 6.

2 Preliminaries: Markov Chain Monte Carlo Techniques

A sequence of random variables $X_0, X_1, X_2, \ldots$ taking on values $x_0, x_1, x_2, \ldots$ is called a Markov chain if $P(X_{n+1} = x_{n+1} \mid X_n = x_n, \ldots, X_0 = x_0) = P(X_{n+1} = x_{n+1} \mid X_n = x_n)$ Andrieu et al. (2003). The values of the random variables are chosen from a fixed set called the state space of the Markov chain. When the state space is finite, the one-step transition probabilities between the various states are generally given as a matrix known as the transition kernel.

Given a distribution on the values of $X_0$, the distribution on the values of any $X_n$ can be computed by multiplying the transition kernel $n$ times with the initial distribution. For certain Markov chains, irrespective of the initial distribution on $X_0$, the distribution on the values of $X_n$ converges, as $n$ increases, to a limiting distribution or stationary distribution. More formally, a stationary distribution $\pi$ with respect to a Markov chain with transition kernel $T$ satisfies the condition $\pi T = \pi$. Irreducible and aperiodic Markov chains have a unique stationary distribution Andrieu et al. (2003). In practice, a distribution $\pi$ is verified to be a stationary distribution if for any two states $x$ and $y$ the following detailed balance condition holds: $\pi(x)\,T(x,y) = \pi(y)\,T(y,x)$.
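As a quick check, summing the detailed balance condition over all states $x$ recovers the stationarity equation:

$$\sum_x \pi(x)\,T(x,y) \;=\; \sum_x \pi(y)\,T(y,x) \;=\; \pi(y)\sum_x T(y,x) \;=\; \pi(y), \qquad \text{i.e., } \pi T = \pi.$$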

Given a hard-to-sample target distribution, MCMC techniques solve the problem by constructing a Markov chain whose stationary distribution is the target distribution and drawing samples from it. Metropolis-Hastings (MH) is a popular MCMC-based sampling technique. Given a target distribution $\pi$ and an irreducible, aperiodic Markov chain with transition kernel $Q$, the MH sampler proposes a transition from state $x$ to state $y$ according to $Q(x,y)$, but then accepts or rejects this proposal according to the acceptance probability $\alpha(x,y) = \min\left(1, \frac{\pi(y)\,Q(y,x)}{\pi(x)\,Q(x,y)}\right)$ Hastings (1970).
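As a concrete illustration (not part of the inference machinery developed later), the following Python sketch runs an MH chain over a small, hypothetical three-state space; the target weights, proposal matrix, and function names are made up for the example.

import random

def mh_chain(target, proposal, x0, steps):
    """Generic Metropolis-Hastings sampler over a small finite state space.

    target:   dict mapping state -> unnormalized target weight pi(x)
    proposal: dict mapping state -> dict {y: Q(x, y)} of proposal probabilities
    """
    x = x0
    chain = [x]
    for _ in range(steps):
        # Propose y ~ Q(x, .)
        ys, ps = zip(*proposal[x].items())
        y = random.choices(ys, weights=ps)[0]
        # Acceptance probability alpha = min(1, pi(y) Q(y,x) / (pi(x) Q(x,y)))
        alpha = min(1.0, (target[y] * proposal[y].get(x, 0.0)) /
                         (target[x] * proposal[x][y]))
        if random.random() < alpha:
            x = y
        chain.append(x)
    return chain

# Hypothetical 3-state target with unnormalized weights 1, 2 and 7,
# explored with a uniform proposal over the other two states.
target = {0: 1.0, 1: 2.0, 2: 7.0}
proposal = {s: {t: 0.5 for t in target if t != s} for s in target}
chain = mh_chain(target, proposal, x0=0, steps=20000)
print({s: round(chain.count(s) / len(chain), 2) for s in target})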

Adaptive MCMC

Given a target distribution from which samples need to be generated, MCMC algorithms such as MH construct a Markov chain whose transition kernel satisfies the detailed balance condition with respect to the target distribution. However, there are several transition kernels which satisfy this requirement. The optimal choice is not always clear. In such situations adaptive MCMC algorithms tune the transition kernel as the chain proceeds Roberts and Rosenthal (2007).

The total variation distance between two distributions $\mu$ and $\nu$ over a state space $\mathcal{X}$ is defined as $\|\mu - \nu\| = \max_{A \subseteq \mathcal{X}} |\mu(A) - \nu(A)|$ Levin et al. (2009). A Markov chain with transition kernel $T$ is said to be ergodic with respect to a target distribution $\pi$ if $\lim_{n \to \infty} \|T^n(x,\cdot) - \pi(\cdot)\| = 0$ for all states $x$ Roberts and Rosenthal (2007). In other words, the rows of the transition matrix converge to the target distribution as the matrix is repeatedly multiplied with itself. It also means that, irrespective of which state we start in, the long-term probability of being in a state converges to the probability of that state under the target distribution. In general, adaptation of a transition kernel does not preserve ergodicity of the Markov chain with respect to the target distribution. However, ergodicity has been shown to be preserved under certain conditions, namely performing adaptation at regeneration times Gilks et al. (1998); Brockwell and Kadane (2005) and diminishing adaptation Roberts and Rosenthal (2007, 2009). We use the latter condition for preserving ergodicity in our adaptation scheme.

Ergodicity conditions.

Given a family of transition kernels $\{T_\gamma\}_{\gamma \in \mathcal{Y}}$, each having the target distribution $\pi$ as its stationary distribution, adaptive MCMC algorithms choose a transition kernel $T_{\Gamma_n}$ at each time step $n$. The update rule for $\Gamma_n$ is specified by the adaptive algorithm. Ergodicity is then preserved if all transition kernels have simultaneous uniform ergodicity, namely, for every $\epsilon > 0$ there is an $N$ such that $\|T_\gamma^N(x,\cdot) - \pi(\cdot)\| \le \epsilon$ for all states $x$ and all $\gamma \in \mathcal{Y}$, and the following diminishing adaptation condition is satisfied: $\lim_{n \to \infty} \sup_x \|T_{\Gamma_{n+1}}(x,\cdot) - T_{\Gamma_n}(x,\cdot)\| = 0$ in probability Roberts and Rosenthal (2007).

3 MCMC for Probabilistic Logic Programs

The ability to treat a PRISM program as non-probabilistic in each world also helps us in designing sample-based query evaluation. Given a PRISM program and a ground goal $G$, we lazily construct a set of worlds by sampling, such that $G$ succeeds or fails in all worlds in the set. The set of worlds is represented by an assignment, described below.

3.1 Sample-Based Query Evaluation

Assignments.

We use a structure called an assignment to keep track of known outcomes of switch instances when sampling from a PRISM program. An assignment is a partial function $\sigma$ such that $\sigma(s,i)$ is the value of instance $i$ of switch $s$. Note that $\sigma$ represents a set of worlds, namely the worlds that agree with $\sigma$ at every switch instance at which $\sigma$ is defined.

Let $\sigma$ be an assignment. Given a switch instance $(s,i)$ and a value $v$, we will also use the assignment that is identical to $\sigma$ at every point except at $(s,i)$, where its value is $v$. We define a partial order over assignments: $\sigma$ precedes $\sigma'$ if $\sigma'(s,i) = \sigma(s,i)$ whenever $\sigma(s,i)$ is defined; we also say that $\sigma'$ extends $\sigma$ in this case. We say that two assignments $\sigma$ and $\sigma'$ are mutually exclusive if there is some switch instance $(s,i)$ such that both $\sigma$ and $\sigma'$ are defined at $(s,i)$, but $\sigma(s,i) \neq \sigma'(s,i)$. Two assignments are compatible if they are not mutually exclusive.

Given a switch $s$, we can draw a value at random from the domain of $s$ using the probability distribution defined for $s$. For looking up an outcome in an assignment, or extending an assignment with a freshly drawn outcome, we define a function pick_value: pick_value($\sigma$, $s$, $i$) returns $\sigma(s,i)$ if $\sigma$ is defined at $(s,i)$; otherwise it draws a value $v$ at random from the distribution of $s$, extends $\sigma$ so that $\sigma(s,i) = v$, and returns $v$.

Note that pick_value is non-decreasing in the sense that the assignment after a call extends the assignment before it. Alternatively, we can view pick_value as defining a distribution over outcomes: it generates $\sigma(s,i)$ with probability 1 if $\sigma$ is defined at $(s,i)$, and it generates each value $v$ in the domain of $s$ with the probability that the distribution of $s$ gives to $v$ otherwise.
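As a rough illustration of assignments and pick_value, the following Python sketch mimics the behavior described above (the Prolog realization used in our implementation appears in Fig. 2); the switch names and distributions are hypothetical.

import random

class Assignment:
    """Partial map from (switch, instance) pairs to outcomes."""
    def __init__(self, entries=None):
        self.entries = dict(entries or {})

    def pick_value(self, switch, instance, distributions):
        """Return the recorded outcome of (switch, instance) if defined;
        otherwise sample one from the switch's distribution and record it."""
        key = (switch, instance)
        if key not in self.entries:
            outcomes, probs = distributions[switch]
            self.entries[key] = random.choices(outcomes, weights=probs)[0]
        return self.entries[key]

    def extends(self, other):
        """True if this assignment agrees with `other` wherever `other` is defined."""
        return all(self.entries.get(k) == v for k, v in other.entries.items())

# Hypothetical switch distributions mirroring Fig. 1: r(a,b) is 't' with prob. 0.9, etc.
dists = {('r', 'a', 'b'): (['t', 'f'], [0.9, 0.1]),
         ('r', 'a', 'c'): (['t', 'f'], [0.2, 0.8])}

sigma = Assignment()
v1 = sigma.pick_value(('r', 'a', 'b'), 1, dists)   # freshly sampled and recorded
v2 = sigma.pick_value(('r', 'a', 'b'), 1, dists)   # looked up: v2 == v1
assert v1 == v2 and sigma.extends(Assignment())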

Sampling Evaluators.

We describe the MCMC algorithm parameterized with respect to a probabilistic query evaluation procedure called a Sampling Evaluator. This permits us to describe a generic MCMC algorithm that can be instantiated to extended probabilistic logic programming systems, including tabled and/or constraint probabilistic LPs.

A sampling evaluator, given an assignment $\sigma$ and a ground goal $G$, (probabilistically) generates an answer to $G$ (success or failure), denoted $res$, an assignment $\sigma'$, and a sequence $\rho$ of switch/instance/outcome triples, such that the following conditions hold:

SE1.

Consider the sequence of assignments $\sigma_0, \sigma_1, \ldots, \sigma_n$ such that $\sigma_0$ is the empty assignment, and $\sigma_k$ extends $\sigma_{k-1}$ with the $k$-th triple of $\rho$, for $1 \le k \le n$. Then $\sigma' = \sigma_n$. Moreover, $\sigma'$ is compatible with the given assignment $\sigma$.

SE2.

If $res$ is success (similarly, failure), then $G$ is true (resp. false) in all worlds represented by $\sigma'$.

Moreover, consider the set of all assignments $\sigma'$ that can be generated by the sampling evaluator for $G$, over all of its probabilistic choices. Then,

SE3.

If $w$ is a world such that $G$ is true (similarly, false) in $w$, then this set contains an assignment $\sigma'$ such that $w$ is among the worlds represented by $\sigma'$ and the corresponding answer is success (resp. failure).

SE4.

Any two distinct assignments in this set are mutually exclusive.

Properties SE2 and SE3 correspond to soundness and completeness, respectively. Property SE4 ensures that a sampling evaluator can be used to refine states in an MCMC algorithm. Not all evaluators in a probabilistic logic programming system may satisfy SE4. For instance, an evaluator based on constructing explanations for a goal (e.g., as in ProbLog) will violate SE4, since two distinct explanations may not be mutually exclusive. Moldovan et al. (2013) overcome this problem by using the Karp-Luby algorithm, resampling switch instances to eliminate overlaps in states. In contrast, we show below that we can use Prolog-style evaluation, performed until the first derivation is found (if one exists), to construct a sampling evaluator, i.e., one satisfying the above requirements including SE4.

A sampling evaluator for non-tabled PRISM.

Figure 2 shows the sampling evaluator for non-tabled PRISM programs, constructed by extending the well-known Prolog meta-interpreter. The assignment and the switch/instance/outcome sequence are maintained in the dynamic database. Observe that the evaluation follows that of Prolog as long as the selected literal is not an msw. When the selected literal is an msw, we get the value of the corresponding switch instance using pick_value, and record this selection. When evaluation produces the empty clause, we know that the goal has a derivation in all worlds consistent with the computed assignment. When every proof finitely fails, we know that the goal has no derivation in any world that is consistent with the computed assignment. Thus the evaluator in Fig. 2 has the properties described for a sampling evaluator. To see how the procedure satisfies condition SE4, consider two executions of the procedure that generate two assignments. Note that the procedure is deterministic and takes the same sequence of steps, except when an msw is encountered. If the two executions picked different outcomes at such a point, the resulting assignments are mutually exclusive. Otherwise, we can show by induction on the sizes of the two assignments that they are identical. Hence any two generated assignments are either identical or mutually exclusive.

% Given assignment is in sigma/3;
% computed assignment is in sigma_prime/3.
% rho/3 is the sequence of switch/instance/outcomes.
% sigma_prime/3 and rho/3 are initially empty.
:- dynamic sigma/3, sigma_prime/3, rho/3.

% Sampling evaluator for a ground goal G:
sample_eval(G) :- eval(G), !.

eval(true) :- !.
eval((G1,G2)) :- !, eval(G1), eval(G2).
eval((G1;G2)) :- !, (eval(G1) ; eval(G2)).
eval(msw(S,I,V)) :- !, pick_value(S,I,V).
eval(G) :- clause(G, B), eval(B).

% Pick value from sigma,
% extending it via sampling if necessary.
pick_value(S,I,V) :-
    (  sigma_prime(S,I,U)         % if already defined
    -> true
    ;  (  sigma(S,I,U)            % if in current assignment
       -> assert(sigma_prime(S,I,U))
       ;  genrandom(S,U),         % genrandom generates a random value U
                                  % according to the distribution of S
          assert(sigma_prime(S,I,U))
       )
    ),
    assertz(rho(S,I,U)),          % update sequence
    V = U.                        % ensure sigma_prime and rho are
                                  % updated regardless of given V

Figure 2: Sampling evaluator derived from Prolog meta-interpreter

3.2 MCMC-Based Inference of Conditional Probabilities

Initial Sample.

To perform inference using MCMC, we need (1) a way to randomly generate an initial state, and (2) a way to generate the successor state of a given state. The Markov Chain we construct has assignments as states. When evaluating probabilities of unconditional queries, we can generate an assignment corresponding to the initial state by invoking a sampling evaluator with an empty assignment. For conditional queries, we construct a Markov chain whose initial state as well as other states considered in a run are all consistent with evidence, as follows.

A randomly constructed explanation for the evidence is used to generate the initial state. We do this via a Prolog-style backtracking search for a derivation of the evidence, and collect all the switches and valuations used in that derivation into an initial assignment. To ensure that the initial assignment is randomly selected, we randomize the order in which clauses and switch values are selected during the backtracking search. This procedure supplies the initial state in the MCMC-based algorithm for inferring conditional probabilities shown in Fig. 3.

[Figure 3 listing, garbled in extraction: the procedure takes the program, the query, the evidence, and the number of steps to simulate. It generates the initial state from a random explanation of the evidence; in each iteration it resamples the current assignment, runs the sampling evaluator on the evidence goal, accepts or rejects the proposed state using the Metropolis-Hastings acceptance probability, and evaluates the query in the current state, counting its successes. It returns the fraction of iterations in which the query succeeded as the estimate of the conditional probability.]

Figure 3: MCMC Algorithm for Inferring Conditional Probabilities

Transitions.

Consider a state in the Markov chain corresponding to assignment $\sigma$. We can generate a successor state by (1) generating an alternative assignment in which the outcomes of some switch instances of $\sigma$ are resampled or forgotten, and (2) invoking SamplingEvaluator with this assignment to evaluate the evidence and obtain the next state $\sigma'$. The switch instances to be resampled can be selected in several ways. We use one of the two following schemes:

1. Single Switch: We select a single switch instance $(s,i)$ at which $\sigma$ is defined, uniformly at random, and generate the alternative assignment by drawing a fresh outcome for it, effectively forgetting $\sigma(s,i)$.

2. Multi-Switch: This resampling mode is parameterized by a probability $p$. We generate the alternative assignment from $\sigma$ by forgetting, with probability $p$, the outcome of each switch instance at which $\sigma$ is defined.

In Fig. 3, the resampling procedure is referred to as Resample.
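The following Python sketch illustrates the two resampling modes on a dictionary representation of assignments; the helper and parameter names are hypothetical, and in this sketch both modes simply forget bindings and let the sampling evaluator re-extend the reduced assignment, which is a simplification of the description above.

import random

def resample(entries, mode="single", p=0.3):
    """Forget some bindings of an assignment (a dict mapping
    (switch, instance) -> outcome); the sampling evaluator is then re-run on
    the reduced assignment to complete the proposal.

    mode="single": forget exactly one binding, chosen uniformly at random.
    mode="multi":  forget each binding independently with probability p.
    """
    keys = list(entries)
    if mode == "single":
        forgotten = {random.choice(keys)}
    else:
        forgotten = {k for k in keys if random.random() < p}
    return {k: v for k, v in entries.items() if k not in forgotten}

# Example: propose from a two-binding assignment.
sigma = {('r', 'a', 'b'): 't', ('r', 'a', 'c'): 'f'}
print(resample(sigma, mode="single"))
print(resample(sigma, mode="multi", p=0.5))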

Metropolis-Hastings.

The target distribution from which we want to draw samples is the distribution of states conditioned on the evidence, which is proportional to the unconditioned probability of a state when we consider only states where the evidence holds. The proposal distribution is the stationary distribution of a Markov chain constructed by choosing an initial state and making transitions as described above. In order to draw samples from the target distribution, we construct an MH sampler as follows. If the proposed state is inconsistent with the evidence, it is rejected deterministically. If it is consistent, it is accepted or rejected based on the acceptance probability.

For the single-switch resampling strategy, the acceptance probability for a move from state $\sigma$ to state $\sigma'$ works out to $\min(1, |\sigma|/|\sigma'|)$, where $|\sigma|$ denotes the number of switch instances at which $\sigma$ is defined. The acceptance probability for the multi-switch resampling strategy is derived analogously. The derivations of these probabilities are shown in the appendix.

4 Adaptive MCMC for Probabilistic Logic Programs

The rate at which samples are rejected deterministically based on the evidence (due to the evidence-consistency check in Fig. 3) is called the rejection rate. In this section, we present a technique to progressively adapt the proposal distribution based on the samples that have been generated so far. The basic idea behind the adaptation scheme is that samples drawn in the past give information about whether or not particular outcomes of switch instances lead to consistent samples.

The adaptation algorithm we present here is inspired by Q-learning, a reinforcement learning technique Sutton and Barto (1998). For each distinct switch/instance/outcome triple used by the sampling evaluator, we maintain a real number in $[0,1]$, called its Q-value. Intuitively, the Q-value of instance $i$ of switch $s$ for outcome $v$ represents the probability of generating a consistent sample when the sampling evaluator chooses $v$ as the outcome at the point where the value of $(s,i)$ is picked using the pick_value function.

[Figure 4 listing, garbled in extraction: Adapt takes the switch/instance/outcome sequence $\rho$ and a reward, and walks backwards through $\rho$, updating the global Q-values, counts, and total Q-values so that each triple's Q-value remains the average of all the rewards it has received.]

Figure 4: Adaptation of Q-values

Initially, all Q-values are set to a common value, representing the belief that all outcomes are equally likely to yield consistent samples. At each iteration of MCMC, adaptation is done after the evidence is evaluated, by passing rewards to each switch/instance/outcome triple in the sequence $\rho$ returned by the sampling evaluator. We begin this processing with a reward of 0 if the evidence evaluation failed, denoting an inconsistent sample, and 1 otherwise. We work backwards through the sequence, so that the last switch/instance/outcome triple is given the reward of 0/1, which it then modifies and passes to the switch/instance/outcome triple preceding it in $\rho$. The Q-value of each switch/instance/outcome triple is computed as the average of all the rewards received by it. The algorithm for maintaining Q-values is given in Fig. 4.
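The following Python sketch illustrates the running-average bookkeeping; since the exact way a reward is modified as it is passed backwards through the sequence is not detailed here, the sketch simply propagates the consistency reward unchanged to every triple, which is an assumption.

from collections import defaultdict

q_total = defaultdict(float)   # sum of rewards per (switch, instance, outcome)
q_count = defaultdict(int)     # number of rewards per (switch, instance, outcome)

def adapt(rho, consistent):
    """Record one evidence evaluation: rho is the sequence of
    (switch, instance, outcome) triples used, and `consistent` says whether
    the sampled state was consistent with the evidence."""
    reward = 1.0 if consistent else 0.0
    # Walk the sequence backwards; in this sketch the reward is passed back
    # unchanged (an assumption -- the Adapt procedure of Fig. 4 modifies it).
    for triple in reversed(rho):
        q_total[triple] += reward
        q_count[triple] += 1

def q_value(triple):
    """Average of all rewards received so far; a common initial value before
    any reward has been seen (assumption: 1.0, leaving the prior unchanged)."""
    return q_total[triple] / q_count[triple] if q_count[triple] else 1.0

adapt([(('r', 'a', 'b'), 1, 't'), (('r', 'b', 'e'), 1, 'f')], consistent=False)
print(q_value((('r', 'a', 'b'), 1, 't')))   # 0.0 after one inconsistent sample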

The MCMC algorithm in Fig. 3 is modified for adaptive sampling as follows. First, the function Adapt is invoked after the evidence goal has been evaluated in each iteration. Second, the function used in the sampling evaluator to draw values for a switch instance draws them from the normalized product of the original distribution and the Q-values of the outcomes of that instance. Finally, the acceptance probability computation is modified to take the adapted distributions into account. Consider computing the acceptance probability for a transition from state $\sigma$ to state $\sigma'$. We can partition the assignment $\sigma$ into three non-overlapping parts: $\sigma_1$ for those switch instances defined by $\sigma$ but not by $\sigma'$; $\sigma_2$ for those defined by both $\sigma$ and $\sigma'$ but assigned different values; and $\sigma_3$ for those defined by both and assigned the same values. We can similarly partition $\sigma'$ into $\sigma'_1$, $\sigma'_2$ and $\sigma'_3$.

For the single-switch resampling strategy, the acceptance probability is given by

$$\alpha(\sigma,\sigma') = \min\!\left(1,\; \frac{|\sigma|\; P(\sigma'_1)\,P(\sigma'_2)\;\tilde{P}(\sigma_1)\,\tilde{P}(\sigma_2)}{|\sigma'|\; P(\sigma_1)\,P(\sigma_2)\;\tilde{P}(\sigma'_1)\,\tilde{P}(\sigma'_2)}\right),$$

where $P$ denotes the original probability of a (sub-)assignment and $\tilde{P}$ its adapted probability. The acceptance probability for the multi-switch strategy is obtained along the same lines.

The derivations of these probabilities are shown in the appendix.
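As an illustration of the second modification above (drawing from the normalized product of the original distribution and the Q-values), the following sketch continues the hypothetical Python helpers introduced earlier; the fallback to the original distribution when all Q-values are zero is an assumption of the sketch, not something prescribed by the algorithm.

import random

def adapted_sample(switch, instance, distributions):
    """Draw an outcome for (switch, instance) from the normalized product of
    the switch's original distribution and the Q-values of its outcomes."""
    outcomes, priors = distributions[switch]
    weights = [p * q_value((switch, instance, o)) for o, p in zip(outcomes, priors)]
    total = sum(weights)
    if total == 0.0:               # all Q-values zero: fall back to the prior (assumption)
        weights, total = list(priors), sum(priors)
    return random.choices(outcomes, weights=[w / total for w in weights])[0]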

Theorem 1.

The adaptive MCMC algorithm preserves ergodicity with respect to the target distribution.

Proof.

In order to prove the theorem, we need to establish the two conditions of ergodicity mentioned in Section 2. We first prove the diminishing adaptation condition. Consider any switch/instance/outcome triple that receives a new reward $r$ after having received $n$ rewards, and let $Q_n$ denote its current Q-value. Since the Q-value is the average of all rewards received, the difference between successive Q-values is

$$|Q_{n+1} - Q_n| = \frac{|r - Q_n|}{n+1}.$$

We know that $0 \le r, Q_n \le 1$. Hence, as $n$ increases, the difference converges to 0, and so does the difference between successive transition kernels.

Next, we use Corollary 3 and Lemma 1 of Roberts and Rosenthal (2007) to prove simultaneous uniform ergodicity. Following that paper, fix a small $\epsilon > 0$ and define $\mathcal{Y}$ to be the set of parameter vectors, formed by the Q-values of all switch/instance/outcome triples, in which every component lies in $[\epsilon, 1]$. Under the product topology, $\mathcal{Y}$ is compact. Moreover, with suitable distance metrics on the state space and on $\mathcal{Y}$, the transition kernels vary continuously with the parameter vector: whenever two parameter vectors $\gamma$ and $\gamma'$ are sufficiently close, $\|T_\gamma(x,\cdot) - T_{\gamma'}(x,\cdot)\|$ is small for all states $x$. Therefore, by Corollary 3 of Roberts and Rosenthal (2007), our adaptive MCMC algorithm preserves ergodicity with respect to the target distribution.

Beyond MCMC.

It should be noted that the adapted distribution of a switch instance need not coincide with its marginal distribution under the conditional distribution given the evidence. This precludes the use of the adaptation for other sampling strategies such as independent sampling. This is not a problem for MCMC, since the adapted distribution is only used as the proposal. However, for a class of program/query pairs whose sampling evaluations are “Markovian”, each switch instance’s adapted distribution converges to its marginal distribution. For such programs, a modified adaptation can be used for independent sampling as well.

Consider the class of programs and queries for which the sequence of switch/instance/outcome triples used in an evaluation is such that the probability of the evaluation being consistent with the evidence, given any triple in the sequence, is independent of the triples that precede it. Such program/query pairs are said to have a Markovian Evaluation Structure. For these, instead of defining the Q-value of a triple to be the average of the rewards it receives, we redefine it to be the last reward it received. It can be shown that the rewards received by any switch instance will be monotonically decreasing during the execution of the algorithm. This allows us to perform independent sampling as well as MCMC for such programs and queries.
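Under this assumption, the only change to the bookkeeping sketched earlier is that a triple's Q-value is overwritten by its most recent reward rather than averaged; as before, the sketch propagates the consistency reward unchanged through the sequence, which is an assumption.

q_last = {}   # most recent reward per (switch, instance, outcome)

def adapt_markovian(rho, consistent):
    """Last-reward adaptation for queries with Markovian evaluation structure."""
    reward = 1.0 if consistent else 0.0
    for triple in reversed(rho):
        q_last[triple] = reward          # overwrite instead of averaging

def q_value_markovian(triple):
    return q_last.get(triple, 1.0)       # same initial value as before (assumption)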

5 Experimental Results

The MCMC algorithm was implemented in the XSB logic programming system Swift et al. (2012). The sampling evaluator (Fig. 2) and the main control loop (Fig. 3) were implemented in Prolog. Lower level primitives managing the maintenance of assignments, resampling, computation of acceptance/rejection were implemented in C and invoked from Prolog. We evaluated the performance of this implementation on four synthetic examples: BN, Hamming, Grammar, and Reach. The experiments were run on a machine with 2.4GHz Opteron 280 processor and 4G RAM.

Bn.

This example consists of Bayesian networks whose Boolean-valued variables are arranged in the form of a grid, with each node having its left and top neighbors (if any) as parents. The evidence sets the outcomes of a subset of the variables, and we query the outcome of one of the remaining variables. Fig. 5 shows the conditional probability estimated by our algorithm, and the time taken, both plotted as functions of sample size. Observe from the figure that the time overhead for performing adaptation is small. This example clearly illustrates the benefit of adaptation.

Figure 5: Bayesian network arranged as a grid. (a) Conditional probability computed and (b) running times, as functions of sample size.

Hamming.

The Hamming code example is a PRISM program that generates a set of (4,3) Hamming codes. The evidence is a set of bits in the code with fixed values, and the query is the value of a non-evidence bit Moldovan et al. (2013). The data bits in the code were independent random variables, while the parity bits were computed from the data bits’ values. The answers computed by adaptive and non-adaptive samplers are given in Fig 6(a). The time taken by both the samplers is shown in Fig 6(b). In this example, the convergence of the adaptive MCMC is only a little better than that of the non-adaptive algorithm.

Figure 6: Hamming code, 112 bits. (a) Conditional probability computed and (b) running times, as functions of sample size.

Grammar.

This example checks a property of strings over open and closed parentheses. For any randomly generated string (not necessarily balanced), we define the “maximum nesting level” as the largest number of unmatched open parentheses in a left-to-right scan of the string. Given that a randomly generated string has balanced parentheses, this example evaluates the conditional probability that the string has a maximum nesting level of 3 or more. For the experiment, we fixed the length of the strings to 200. The answers computed by the adaptive and non-adaptive samplers are shown in Fig. 7(a) and the times taken are shown in Fig. 7(b). We used multi-switch resampling since the entire state space is not accessible via single-switch resampling. Observe that adaptive sampling converges faster, although by a small margin; but adaptive sampling is almost twice as slow per iteration.

Figure 7: Nesting level in strings with balanced parentheses. (a) Conditional probability computed and (b) running times, as functions of sample size.

Reach.

The final set of examples are reachability queries over probabilistic acyclic graphs of the form shown in Fig. 1. For the graph shown in the Introduction, both the non-adaptive and the adaptive samplers reject only a small fraction of the proposed samples while computing the conditional query described there; similar rejection rates were observed for larger randomly generated graphs as well. Since the rejection rate of the non-adaptive sampler is already low, there is no significant difference between the convergence of the adaptive and non-adaptive samplers.

6 Discussion

Sampling-based approximate inference algorithms were proposed by Cussens (2000) for stochastic logic programs (SLPs) Muggleton et al. (1996). The algorithm defines an MCMC kernel on the derivations in an SLP. The technique used to propose the next state (i.e., derivation) involves backtracking one step at a time, stopping with a fixed probability given as a parameter. Once the backtracking stops, an alternate branch is sampled and resolution is continued to give the next state. Our single-switch resampling technique differs in that a single msw atom is chosen (uniformly at random) from the state and resampled. At a more fundamental level, our sampling technique is largely independent of the query evaluation process itself.

An MCMC technique has been proposed for ProbLog by Moldovan et al. (2013). It samples from explanations and makes use of a special algorithm to make the samples mutually exclusive. This incurs memory overhead (keeping track of the possible worlds used for each explanation) as well as time overhead (to look back in a chain for previous uses of the same explanation). In contrast, we use Prolog-style evaluation to ensure that the samples are pairwise mutually exclusive.

Adaptive sequential rejection sampling, proposed by Mansinghka et al. (2009), is an algorithm that explicitly adapts its proposal for generating samples for high-dimensional graphical models. This algorithm requires the availability of a suitable factorization of the distribution with logarithmically few dependencies: exact samples over a small set of variables are extended to exact samples over a larger set of variables. The adaptation scheme described there requires complete knowledge of the factors in the distribution. Since PRISM programs intricately combine logical and statistical knowledge, such an explicit factorization may not even be available in our case. Consequently, our work does not rely on explicit knowledge of factors.

We presented an MCMC technique for probabilistic logic programs that is largely independent of the manner in which queries are evaluated in the underlying logic programming system. We defined an adaptive MCMC algorithm that adapts the probability distributions of individual switches and their instances to effectively explore the states of the Markov chain that are consistent with given evidence. We identified conditions under which a similar adaptation can be performed to enable independent samplers to draw more samples that are consistent with evidence. Preliminary experiments have shown both the potential and the limitations of this technique.

This paper focused on a generic MCMC method and adaptation, and did not consider the effect of resampling strategies. The order in which random processes are sampled may affect the convergence and hence the quality of inference. For instance, Decayed MCMC Marthi et al. (2002) samples processes based on a temporal order (resampling more recent processes more frequently). As future work, we plan to extend our sampler to use an order based on programmer annotation; whether such annotations can be inferred from the program is an open problem. Finally, while sampling-based inference may be generally deployed, exact inference may still be feasible for queries with short derivations. Hence, an interesting direction of future work is to develop a hybrid inference technique that can combine exact and approximate inference based on programmer annotation. Such an inference technique can be seen as an analogue of the Rao-Blackwellized Particle Filtering method developed for Dynamic Bayesian Networks Doucet et al. (2000).

References

  • Andrieu et al. (2003) Andrieu, C., De Freitas, N., Doucet, A., and Jordan, M. 2003. An introduction to mcmc for machine learning. Machine learning 50, 1, 5–43.
  • Brockwell and Kadane (2005) Brockwell, A. E. and Kadane, J. B. 2005. Identification of regeneration times in mcmc simulation, with application to adaptive schemes. Journal of Computational and Graphical Statistics 14, 2, 436–458.
  • Cussens (2000) Cussens, J. 2000. Stochastic logic programs: Sampling, inference and applications. In Proceedings of the Sixteenth conference on Uncertainty in artificial intelligence. Morgan Kaufmann, 115–122.
  • De Raedt et al. (2007) De Raedt, L., Kimmig, A., and Toivonen, H. 2007. ProbLog: A probabilistic prolog and its application in link discovery. In Proceedings of the 20th international joint conference on Artifical intelligence. 2462–2467.
  • Doucet et al. (2000) Doucet, A., Freitas, N. d., Murphy, K. P., and Russell, S. J. 2000. Rao-Blackwellised particle filtering for dynamic bayesian networks. In Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence (UAI). 176–183.
  • Getoor and Taskar (2007) Getoor, L. and Taskar, B. 2007. Introduction to statistical relational learning. MIT press.
  • Gilks et al. (1998) Gilks, W. R., Roberts, G. O., and Sahu, S. K. 1998. Adaptive markov chain monte carlo through regeneration. Journal of the American statistical association 93, 443, 1045–1054.
  • Hastings (1970) Hastings, W. 1970. Monte carlo sampling methods using markov chains and their applications. Biometrika 57, 1, 97–109.
  • Levin et al. (2009) Levin, D. A., Peres, Y., and Wilmer, E. L. 2009. Markov chains and mixing times. Amer Mathematical Society.
  • Mansinghka et al. (2009) Mansinghka, V. K., Roy, D. M., Jonas, E., and Tenenbaum, J. B. 2009. Exact and approximate sampling by systematic stochastic search. In International Conference on Artificial Intelligence and Statistics. 400–407.
  • Marthi et al. (2002) Marthi, B., Pasula, H., Russell, S., and Peres, Y. 2002. Decayed MCMC filtering. In Proceedings of the Eighteenth conference on Uncertainty in Artificial Intelligence (UAI). 319–326.
  • Moldovan et al. (2013) Moldovan, B., Thon, I., Davis, J., and Raedt, L. D. 2013. Mcmc estimation of conditional probabilities in probabilistic programming languages. In ECSQARU. 436–448.
  • Muggleton et al. (1996) Muggleton, S. et al. 1996. Stochastic logic programs. Advances in inductive logic programming 32, 254–264.
  • Poole (1997) Poole, D. 1997. The independent choice logic for modelling multiple agents under uncertainty. Artificial Intelligence 94, 1, 7–56.
  • Riguzzi and Swift (2011) Riguzzi, F. and Swift, T. 2011. The pita system: Tabling and answer subsumption for reasoning under uncertainty. Theory and Practice of Logic Programming (TPLP) 11, 4-5, 433–449.
  • Roberts and Rosenthal (2007) Roberts, G. O. and Rosenthal, J. S. 2007. Coupling and ergodicity of adaptive markov chain monte carlo algorithms. Journal of applied probability 44, 2, 458–475.
  • Roberts and Rosenthal (2009) Roberts, G. O. and Rosenthal, J. S. 2009. Examples of adaptive mcmc. Journal of Computational and Graphical Statistics 18, 2, 349–367.
  • Sato and Kameya (1997) Sato, T. and Kameya, Y. 1997. Prism: a language for symbolic-statistical modeling. In In Proceedings of the 15th International Joint Conference on Artificial Intelligence (IJCAI’97). 1330–1335.
  • Sutton and Barto (1998) Sutton, R. S. and Barto, A. G. 1998. Introduction to reinforcement learning. MIT Press.
  • Swift et al. (2012) Swift, T., Warren, D. S., et al. 2012. The XSB logic programming system, Version 3.3. Tech. rep., Computer Science, SUNY, Stony Brook. http://xsb.sourceforge.net.
  • Vennekens and Verbaeten (2003) Vennekens, J. and Verbaeten, S. 2003. A general view on probabilistic logic programming. In In Proceedings of BNAIC-03.

Appendix A Acceptance probability computation for MH sampler

Assume that the Markov chain is in the state corresponding to assignment $\sigma$ and a proposal is made to transition to a different state corresponding to assignment $\sigma'$. The set of switch/instance pairs in $\sigma$ can be divided into three disjoint sets, as in Section 4: $\sigma_1$ (defined in $\sigma$ but not in $\sigma'$), $\sigma_2$ (defined in both but assigned different values), and $\sigma_3$ (defined in both and assigned the same values). The set of switch/instance pairs in $\sigma'$ can be divided similarly into $\sigma'_1$, $\sigma'_2$ and $\sigma'_3$. The probability of an assignment is simply the product of the probabilities of the outcomes of all switch/instance pairs in that assignment; the probability of an assignment $\sigma$ is denoted by $P(\sigma)$. We denote the number of switch/instance pairs in an assignment or sub-assignment $\sigma$ by $|\sigma|$. For single-switch non-adaptive resampling, the probabilities of the outcomes sampled while constructing the proposal cancel against the corresponding factors of $P(\sigma)$ and $P(\sigma')$, and the acceptance probability simplifies to $\min(1, |\sigma|/|\sigma'|)$.

Let the probability of forgetting a switch instance in multi-switch resampling be $p$. The acceptance probability can be computed along the same lines.

In the case of adaptive sampling, such simplifications are not possible. Let the adapted probability of a (sub-)assignment be denoted by $\tilde{P}$. The probability of proposing a transition from $\sigma$ to $\sigma'$ in the case of single-switch resampling is $\frac{1}{|\sigma|}\,\tilde{P}(\sigma'_1)\,\tilde{P}(\sigma'_2)$. The acceptance probability is therefore the expression given in Section 4.

Finally, the acceptance probability for multi-switch adaptive resampling can be derived in a similar manner.
