Does Symbolic Knowledge Prevent Adversarial Fooling?
Arguments in favor of injecting symbolic knowledge into neural architectures abound. When done right, constraining a sub-symbolic model can substantially improve its performance, sample complexity, interpretability, and can prevent it from predicting invalid configurations. Focusing on deep probabilistic (logical) graphical models – i.e., constrained joint distributions whose parameters are determined (in part) by neural nets based on low-level inputs – we draw attention to an elementary but unintended consequence of symbolic knowledge: that the resulting constraints can propagate the negative effects of adversarial examples.
Introduction
Deep probabilistic (logical) graphical models (dPGMs) tie together a sub-symbolic level that processes low-level inputs with a symbolic level that handles logical and probabilistic inference; see for instance [5]. The two levels are typically implemented with neural networks and a probabilistic (logical) graphical model, respectively. Prominent examples of dPGMs include DeepProbLog [9] and "neural" extensions of Markov Logic [8, 10]. In this preliminary investigation, we show with a concrete toy example that fooling a single neural network with an adversarial example [11, 1] can corrupt the state of multiple output variables. We develop an intuition for this phenomenon and show that it occurs despite the model being probabilistic, and regardless of whether the symbolic knowledge is factually correct.
Deep Probabilistic-Logical Models
We restrict ourselves to deep Bayesian networks (dBNs), i.e., directed dPGMs stripped of their logical component. (Our arguments do transfer to other dPGMs and deep statistical-relational models too.) These models are Bayesian networks [7] in which some conditional distributions are implemented as neural networks feeding on low-level inputs; roughly speaking, they correspond to ground DeepProbLog models.
Let us illustrate them with a restricted version of the MNIST addition example from [9]: the goal is to recognize the digits $y_1, y_2 \in \{0, 1, 2, 3\}$ appearing in two MNIST images $x_1$ and $x_2$, knowing that the digits satisfy the constraint $K : y_1 + y_2 = 3$. Notice that the only valid predictions are $(0, 3)$, $(1, 2)$, $(2, 1)$, $(3, 0)$.
Let $x = (x_1, x_2)$ and $y = (y_1, y_2)$. Our dBN for this problem defines a joint distribution $P(y \mid x)$ built on the conditionals $P(y_1 \mid x_1)$ and $P(y_2 \mid x_2)$ and on $K$. In particular, the probability of the event $y_1 = k$ is implemented as a ConvNet with a softmax output layer applied to $x_1$. The dBN is consistent with the symbolic knowledge $K$ in that it ensures that the joint distribution satisfies $P(y \mid x) = 0$ for all $y \not\models K$. This is achieved by taking an unconstrained joint distribution $P(y_1 \mid x_1)\, P(y_2 \mid x_2)$ and constraining it:

$$P(y \mid x; K) = \frac{1}{Z}\, \mathbb{1}\{y \models K\}\, P(y_1 \mid x_1)\, P(y_2 \mid x_2)$$
Here $Z = \sum_{y' \models K} P(y'_1 \mid x_1)\, P(y'_2 \mid x_2)$ is a normalization constant, where the sum runs over all $y'$ consistent with $K$. A joint prediction is obtained via maximum a-posteriori (MAP) inference:

$$y^* = \operatorname{argmax}_{y \models K} P(y \mid x; K)$$
If no symbolic knowledge were given, the most likely outputs would simply be $y^* = (y_1^*, y_2^*)$, where:

$$y_i^* = \operatorname{argmax}_k P(y_i = k \mid x_i), \quad i = 1, 2$$
Finally, we use the same ConvNet for both images, so that $P(y_1 = k \mid x) = P(y_2 = k \mid x)$ for every image $x$ and digit $k$.
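Since the valid configurations can be enumerated, inference in this dBN reduces to scoring each of them. The following Python sketch implements both the unconstrained per-digit argmax and constrained MAP inference; the probability vectors stand in for ConvNet outputs and are hypothetical placeholders, not values from the paper:

```python
from itertools import product

# Symbolic knowledge K: y1 + y2 = 3, with digits restricted to 0..3.
VALID = [(y1, y2) for y1, y2 in product(range(4), repeat=2) if y1 + y2 == 3]

def unconstrained_map(p1, p2):
    """Independent per-digit argmax, ignoring the constraint K."""
    y1 = max(range(4), key=lambda k: p1[k])
    y2 = max(range(4), key=lambda k: p2[k])
    return (y1, y2), p1[y1] * p2[y2]

def constrained_map(p1, p2):
    """MAP inference over the constrained joint: score each valid
    configuration, renormalize by Z, and return the best one."""
    scores = {y: p1[y[0]] * p2[y[1]] for y in VALID}
    z = sum(scores.values())  # normalization constant Z
    y_star = max(scores, key=scores.get)
    return y_star, scores[y_star] / z

# Hypothetical ConvNet outputs: a confident "0" and a weakly informative "3".
p1 = [0.85, 0.05, 0.05, 0.05]
p2 = [0.20, 0.20, 0.20, 0.40]
print(unconstrained_map(p1, p2))  # both digits right
print(constrained_map(p1, p2))    # same prediction, higher confidence
```

On well-behaved inputs like these, renormalizing over the valid configurations concentrates mass on the correct prediction, which is exactly the confidence boost discussed below.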
Adversarial Examples and Constraints
Consider a pair of images $x_1$ and $x_2$ representing a $0$ and a $3$, respectively, and let the ConvNet output the following conditional probabilities:

$$P(y_1 \mid x_1) = (1 - 3\epsilon,\ \epsilon,\ \epsilon,\ \epsilon), \qquad P(y_2 \mid x_2) = \left(\tfrac{1}{4} - \epsilon,\ \tfrac{1}{4} - \epsilon,\ \tfrac{1}{4} - \epsilon,\ \tfrac{1}{4} + 3\epsilon\right)$$

for some small $\epsilon > 0$. Although the second image is rather uninformative, the unconstrained dBN gets both digits right, predicting $(0, 3)$ with joint probability $(1 - 3\epsilon)(\tfrac{1}{4} + 3\epsilon)$, and so does the constrained classifier, with probability close to $1$. In this case, the symbolic knowledge boosts the confidence of the model, a desirable and expected result.
Now, perturbing $x_1$ by $\delta$ shifts the conditional distribution output by the ConvNet from $P(y_1 \mid x_1)$ to $P(y_1 \mid x_1 + \delta)$ and hence changes the probabilities assigned to the possible outcomes $y$. Intuitively, a perturbation is adversarial if it is imperceptible and at the same time forces MAP inference to output a wrong configuration. In other words, assuming that $x_1$ is classified correctly, $\delta$ is adversarial if $\operatorname{argmax}_k P(y_1 = k \mid x_1 + \delta) \neq \operatorname{argmax}_k P(y_1 = k \mid x_1)$ and $\|\delta\|$ is "small" for some norm $\|\cdot\|$.
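This definition can be checked mechanically. A minimal sketch, assuming the L-infinity norm and a hypothetical tolerance `eps` (both arbitrary choices for illustration):

```python
def argmax(p):
    """Index of the largest entry of a probability vector."""
    return max(range(len(p)), key=lambda k: p[k])

def is_adversarial(p_clean, p_pert, delta, eps=0.1):
    """A perturbation delta is adversarial if it flips the ConvNet's
    prediction while remaining small under the L-infinity norm."""
    flipped = argmax(p_pert) != argmax(p_clean)
    small = max(abs(d) for d in delta) <= eps
    return flipped and small
```

Note that both conditions matter: a large perturbation that flips the prediction is not adversarial in this sense, and neither is a tiny perturbation that leaves the argmax unchanged.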
It is well known that neural networks are often susceptible to rather eye-catching adversarial perturbations that can alter their output by arbitrary amounts [11, 1]. Thus it is not too far-fetched to imagine a perturbation $\delta$ that induces the following conditional distribution on the first digit:

$$P(y_1 \mid x_1 + \delta) = (\epsilon,\ \epsilon,\ \epsilon,\ 1 - 3\epsilon)$$
Now, it can be readily verified that this perturbation forces the unconstrained dBN to predict $(3, 3)$ with joint probability $(1 - 3\epsilon)(\tfrac{1}{4} + 3\epsilon)$ (which is symmetrical to the above case). Clearly this model is fooled by the adversarial image into making a mistake on $y_1$, but the damage is limited to the first digit: $y_2 = 3$ is still predicted correctly.
However, $(3, 3)$ violates the symbolic knowledge $K$, and the constrained dBN is forced to output a valid prediction, namely the most likely configuration out of $(0, 3)$, $(1, 2)$, $(2, 1)$, $(3, 0)$. Given the above conditional distributions $P(y_1 \mid x_1 + \delta)$ and $P(y_2 \mid x_2)$, the constrained dBN outputs $(3, 0)$ with probability close to $1$. This prediction is certainly consistent with $K$, but now both digits are classified wrongly.
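The whole cascade can be replayed numerically. The sketch below enumerates the four valid configurations using hypothetical probability vectors consistent with the example (taking $\epsilon = 0.05$):

```python
from itertools import product

# Symbolic knowledge K: y1 + y2 = 3, with digits restricted to 0..3.
VALID = [y for y in product(range(4), repeat=2) if sum(y) == 3]

def argmax(p):
    return max(range(len(p)), key=lambda k: p[k])

def constrained_map(p1, p2):
    """Most likely valid configuration under the constrained joint."""
    return max(VALID, key=lambda y: p1[y[0]] * p2[y[1]])

p1_clean = [0.85, 0.05, 0.05, 0.05]  # confident "0"
p1_adv   = [0.05, 0.05, 0.05, 0.85]  # after the adversarial perturbation
p2       = [0.20, 0.20, 0.20, 0.40]  # uninformative second image, true digit 3

# Unconstrained prediction: only the first digit is corrupted.
print((argmax(p1_adv), argmax(p2)))  # (3, 3)
# Constrained prediction: the damage cascades, both digits are now wrong.
print(constrained_map(p1_adv, p2))   # (3, 0)
print(constrained_map(p1_clean, p2)) # (0, 3) on the clean input
```

The constraint turns one corrupted conditional into two wrong output variables: the valid configuration closest to the fooled ConvNet's belief sacrifices the second digit as well.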
Discussion

The toy example above illustrates the perhaps elementary, but seemingly neglected, fact that symbolic knowledge can propagate the negative effects of adversarial examples. This occurs because the model trades off predictive loss in exchange for satisfying a hard constraint.
While our example is decidedly a toy, it is easy to see that the same phenomenon could occur in sensitive real-world applications. The phenomenon is also likely to transfer to undirected dPGMs such as deep extensions of Markov Logic Networks [8, 10].
We make a couple of important remarks. First, depending on the structure of the symbolic knowledge, fooling a single neural network in the dPGM may perturb any subset of output variables. Thus, seeking robustness of a single network is not enough: all networks must be robustified. Second, even this may not suffice: if an adversary manages to fool a robustified neural network, even by sheer luck, the effects of fooling will still cascade across the model. Thus the dPGM as a whole must be made robust, in the sense that all conditional probability tables appearing in it, not only the ConvNets, must be made robust. Finally, access to the symbolic knowledge might help attackers design minimal targeted attacks that flip any chosen target variable.
Adversarial examples in dPGMs can be understood through the lens of sensitivity analysis for directed [2] and undirected [3] probabilistic graphical models; see especially [4]. These works show how to constrain a probabilistic graphical model so as to ensure that the probabilities of different queries are sufficiently far apart. Such constraints could be injected into standard adversarial training routines for neural networks to encourage global robustness of the dPGM. Of course, robust training of complex dPGMs is likely to be computationally challenging. Algebraic model counting in the sensitivity semiring might prove useful in tackling this computational challenge [6].
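As a rough illustration of the kind of quantity such constraints would control in our toy model, one can measure the margin between the two most likely valid configurations under the constrained posterior; this is a hypothetical sketch, not the formulation used in the cited works:

```python
from itertools import product

# Symbolic knowledge K: y1 + y2 = 3, with digits restricted to 0..3.
VALID = [y for y in product(range(4), repeat=2) if sum(y) == 3]

def map_margin(p1, p2):
    """Normalized gap between the two most likely valid configurations;
    a sensitivity-style robustness constraint would require this margin
    to stay above some threshold."""
    scores = sorted((p1[a] * p2[b] for a, b in VALID), reverse=True)
    z = sum(scores)  # normalization constant Z
    return (scores[0] - scores[1]) / z

# The margin is large for a confident ConvNet and shrinks as its output
# flattens, signalling vulnerability to small perturbations.
print(map_margin([0.85, 0.05, 0.05, 0.05], [0.2, 0.2, 0.2, 0.4]))
print(map_margin([0.25, 0.25, 0.25, 0.25], [0.2, 0.2, 0.2, 0.4]))
```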
References

[1] B. Biggio and F. Roli (2018) Wild patterns: ten years after the rise of adversarial machine learning. Pattern Recognition 84.
[2] H. Chan and A. Darwiche (2002) When do numbers really matter? Journal of Artificial Intelligence Research 17.
[3] H. Chan and A. Darwiche (2005) Sensitivity analysis in Markov networks. In International Joint Conference on Artificial Intelligence, Vol. 19.
[4] H. Chan and A. Darwiche (2006) On the robustness of most probable explanations. In Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence.
[5] L. De Raedt, R. Manhaeve, S. Dumančić, T. Demeester, and A. Kimmig (2019) Neuro-Symbolic = Neural + Logical + Probabilistic. In NeSy'19 @ IJCAI, the 14th International Workshop on Neural-Symbolic Learning and Reasoning.
[6] A. Kimmig, G. Van den Broeck, and L. De Raedt (2017) Algebraic model counting. Journal of Applied Logic 22.
[7] D. Koller and N. Friedman (2009) Probabilistic Graphical Models: Principles and Techniques.
[8] M. Lippi and P. Frasconi (2009) Prediction of protein β-residue contacts by Markov logic networks with grounding-specific weights. Bioinformatics 25 (18).
[9] R. Manhaeve, S. Dumančić, A. Kimmig, T. Demeester, and L. De Raedt (2018) DeepProbLog: Neural probabilistic logic programming. In Advances in Neural Information Processing Systems.
[10] G. Marra and O. Kuželka (2019) Neural Markov Logic Networks. arXiv preprint arXiv:1905.13462.
[11] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus (2013) Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.