Sample-Specific Output Constraints for Neural Networks
Neural networks reach state-of-the-art performance in a variety of learning tasks. However, a lack of insight into their decision-making process makes them appear as black boxes. We address this and propose ConstraintNet, a neural network with the capability to constrain the output space in each forward pass via an additional input. The prediction of ConstraintNet is provably within the specified domain. This enables ConstraintNet to explicitly exclude unintended or even hazardous outputs while the final prediction is still learned from data. We focus on constraints in the form of convex polytopes and show the generalization to further classes of constraints. ConstraintNet can be constructed easily by modifying existing neural network architectures. We highlight that ConstraintNet is end-to-end trainable with no overhead in the forward and backward pass. For illustration purposes, we model ConstraintNet by modifying a CNN and construct constraints for facial landmark prediction tasks. Furthermore, we demonstrate the application to a follow object controller for vehicles as a safety-critical application. We submitted an approach and system for the generation of safety-critical outputs of an entity based on ConstraintNet at the German Patent and Trademark Office with the official registration mark DE10 2019 119 739.
1 Introduction

Deep neural networks have become state-of-the-art in many competitive learning challenges. The neural network acts as a flexible function approximator in an overall learning scheme. In supervised learning, the weights of the neural network are optimized by utilizing a representative set of valid input-output pairs. Whereas neural networks solve complex learning tasks in this way, concerns arise regarding their black-box character [6, 17]: (1) In general, a neural network represents a complex non-linear mapping, and it is difficult to prove properties of this function from a mathematical point of view, e.g. the verification of desired input-output relations [2, 11] or the inference of confidence levels in a probabilistic framework. (2) Furthermore, the learned abstractions and processes within the neural network are usually not interpretable or explainable to a human.
With our approach, we address mainly the first concern: (1) We propose a neural network which provably predicts within a sample-specific constrained output space. ConstraintNet encodes a certain class of constraints, e.g. a certain type of convex polytope, in the network architecture and allows a specific constraint from this class to be chosen via an additional input in each forward pass independently. In this way, ConstraintNet enforces a prediction consistent with a valid output domain. We assume that the partition into valid and invalid output domains is given by an external source. This could be a human expert, a rule-based model or even a second neural network. (2) Secondly, we contribute to the interpretability and explainability of neural networks: A constraint over the output is interpretable and allows the decision making of ConstraintNet to be described in an interpretable way. E.g. later we model output constraints for a facial landmark prediction task such that the model predicts the facial landmarks on a region which is recognized as a face and locates the positions of the eyes above the nose landmark for anatomical reasons. Therefore, the additional input encodes the output constraint and represents high-level information with explainable impact on the prediction. When this input is generated by a second model, it is an intermediate variable of the total model with interpretable information.
ConstraintNet addresses safety-critical applications in particular. Neural networks tend to generalize to new data with high accuracy on average. However, there remains a risk of unforeseeable and unintended behavior in rare cases. Instead of monitoring the output of the neural network with a second algorithm and intervening when safety-critical behavior is detected, we suggest constraining the output space with ConstraintNet to safe solutions in the first place. Imagine a neural network as a motion planner. In this case, sensor detections or map data constrain the output space to collision-free trajectories only.
Apart from safety-critical applications, ConstraintNet can be applied to predict within a region of interest in various use cases. E.g. in medical image processing, this region could be annotated by a human expert to restrict the localization of an anatomical landmark.
We demonstrate the modeling of constraints on several facial landmark prediction tasks. Furthermore, we illustrate the application to a follow object controller for vehicles as a safety-critical application. We have promising results in ongoing experiments and plan to publish them in the future.
2 Related work
In recent years, we observe increasing attention in research addressing the black-box character of neural networks. Apart from optimizing the data-fitting and generalization performance of neural networks, in many applications it is important or even required to provide deeper information about the decision-making process, e.g. in the form of a reliable confidence level, an interpretation or even explanation [3, 18], or guarantees in the form of proven mathematical properties [2, 11, 16]. Related research is known as Bayesian deep learning, interpretable and explainable AI [3, 6, 17, 18, 20, 21], adversarial attacks and defenses [2, 7, 15, 22], graph neural networks [1, 25], neural networks and prior knowledge, and verification of neural networks [2, 11, 16]. The approaches change the design of the model [1, 18, 20], modify the training procedure [2, 15] or analyze the behavior of a learned model after training [3, 11, 16, 21].
Verification and validation are procedures in software development to ensure the intended system behavior. They are an important concept of legally required development standards [23, 24] for safety-critical systems. However, it is difficult to transfer these guidelines to the development life-cycle of neural network based algorithms. It is common practice to evaluate the neural network on an independent test set. However, the expressiveness of this validation procedure is limited by the finiteness of the test set. Frequently, it is more interesting to know whether a property is valid for a certain domain with a possibly infinite number of samples. These properties are usually input-output relations and express, e.g., the exclusion of hazardous behavior, robustness properties or consistency.
Verification approaches for neural networks can be categorized into performing a reachability analysis, solving an optimization problem under constraints given by the neural network, or searching for violations of the considered property [9, 11]. Reluplex is applicable to neural networks with ReLU activation functions. It is a search-based verification algorithm driven by an extended version of the simplex method. Huang et al. perform a search over a discretized space with a stepwise refinement procedure to prove local adversarial robustness. Ruan et al. reformulate the verification objective as a reachability problem and utilize Lipschitz continuity of the neural network. Krishnamurthy et al. solve a Lagrangian-relaxed optimization problem to find an upper bound which, depending on its value, represents a safety certificate. This method interacts with the training procedure and rewards higher robustness in the loss function.
With ConstraintNet, we propose a neural network with the property of predicting within sample-specific output domains. This property is ensured by the design of the network architecture, and no subsequent verification process is required.
3 Neural networks with sample-specific output constraints
This section is structured as follows: (1) First of all, we define sample-specific output constraints and ConstraintNet formally. (2) Next, we propose our approach to create the architecture of ConstraintNet. This approach requires a specific layer without learnable parameters for the considered class of constraints. (3) We model this layer for constraints in form of convex polytopes and sectors of a circle. Furthermore, we derive the layer for constraints on different output parts. (4) Finally, we propose a supervised learning algorithm for ConstraintNet.
3.1 Sample-specific output constraints
Consider a neural network $f(\cdot\,;\theta): X \to Y$ with learnable parameters $\theta \in \Theta$, input space $X$ and output space $Y \subseteq \mathbb{R}^M$.
We introduce an output constraint $C$ as a subset of the output space $Y$ and a class of output constraints as a parametrized set of them, $\mathcal{C} = \{C(s)\}_{s \in S}$. Here, $S$ is a set of parameters and we call an element $s \in S$ a constraint parameter. We define ConstraintNet as a neural network $f: X \times S \times \Theta \to Y$ with the constraint parameter $s$ as an additional input and the guarantee to predict within $C(s)$ by design of the network architecture, i.e. independently of the learned weights $\theta$:

$$ f(x, s; \theta) \in C(s) \quad \text{for all } x \in X,\ s \in S,\ \theta \in \Theta. \quad (1) $$
Furthermore, we require that $f$ is (piecewise) differentiable with respect to $\theta$ so that backpropagation and gradient-based optimization algorithms remain applicable.
3.2 Network architecture
Construction approach. We propose the approach visualized in Fig. 1 to create the architecture of ConstraintNet for a specific class of constraints $\mathcal{C}$. The key idea is a final layer $\phi$ without learnable parameters which maps the output $z \in Z$ of the previous layers onto the constrained output space depending on the constraint parameter $s$. Given a class of constraints $\mathcal{C} = \{C(s)\}_{s \in S}$, we require that $\phi$ fulfills:

$$ \phi(Z, s) = C(s) \quad \text{for all } s \in S. \quad (2) $$
When $\phi$ is furthermore (piecewise) differentiable with respect to $z$, we call $\phi$ a constraint guard layer for $\mathcal{C}$.
The constraint guard layer has no adjustable parameters and therefore the logic is learned by the previous layers $h(\cdot\,;\theta)$. In the ideal case, ConstraintNet predicts the same true output for a data point under different but valid constraints. This behavior requires that the intermediate variable $z$ depends on $s$ in addition to $x$. Without this requirement, $z$ would have the same value for fixed $x$, and $\phi$ would in general project this $z$ for different but valid constraint parameters onto different outputs. We transform $s$ into an appropriate representation $g(s)$ and consider it as an additional input of $h$, i.e. $z = h(x, g(s); \theta)$. For the construction of $h$, we propose to start with a common neural network architecture with input domain $X$ and output domain $Z$. In a next step, this neural network can be extended to add an additional input for $g(s)$. We propose to concatenate $g(s)$ to the output of an intermediate layer since it is information with a higher level of abstraction.
Finally, we construct ConstraintNet for the considered class of constraints $\mathcal{C}$ by applying the layers $h$ and the corresponding constraint guard layer $\phi$ subsequently:

$$ f(x, s; \theta) = \phi\big(h(x, g(s); \theta),\, s\big). \quad (3) $$
The required property for $\phi$ in Eq. 2 implies that ConstraintNet predicts within the constrained output space $C(s)$ according to Eq. 1. Furthermore, the constraint guard layer propagates gradients and backpropagation remains applicable.
Construction by modifying a CNN. Fig. 2 illustrates the construction of ConstraintNet by using a convolutional neural network (CNN) for the generation of the intermediate variable $z$, i.e. $h$ is a CNN. As an example, a nose landmark prediction task on face images is shown. The output constraints are triangles randomly located around the nose, i.e. convex polytopes with three vertices. Such constraints can be specified by a constraint parameter $s \in \mathbb{R}^6$ consisting of the concatenated vertex coordinates. The constraint guard layer for convex polytopes is modeled in the next section and requires a three-dimensional intermediate variable $z$ for triangles. The previous layers $h$ map the image data onto this three-dimensional intermediate variable. A CNN with output domain $Z = \mathbb{R}^3$ can be realized by adding a dense layer with three output neurons and linear activations. To incorporate the dependency of $z$ on $s$, we suggest concatenating the output of an intermediate convolutional layer with a tensor representation $g(s)$ of $s$. Thereby, we extend the input of the next layer in a natural way.
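As a minimal, hedged sketch of this construction (the function names and the stand-in linear layer are illustrative, not the paper's implementation), the forward pass for triangle constraints can be written as:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax
    e = np.exp(z - np.max(z))
    return e / e.sum()

def constraint_net_forward(x, vertices, theta):
    """Sketch of the composed forward pass for triangle constraints: a
    stand-in linear layer maps the input x concatenated with the
    constraint representation to a 3-dimensional intermediate variable
    z; the constraint guard layer maps z onto the triangle. The
    prediction lies inside the triangle for *any* weights theta."""
    v = np.asarray(vertices, dtype=float)         # shape (3, 2): triangle vertices
    s = v.ravel()                                 # constraint parameter (6 values)
    z = theta @ np.concatenate([np.ravel(x), s])  # stand-in for h(x, g(s); theta)
    return softmax(z) @ v                         # convex combination of vertices
```

Because the output is a convex combination of the vertices, membership in the triangle holds by construction, independently of training.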
3.3 Constraint guard layer for different classes of constraints
In this subsection, we model the constraint guard layer for different classes of constraints. Primarily, we consider output constraints in the form of convex polytopes. However, our approach is also applicable to problem-specific constraints. As an example, we construct the constraint guard layer for constraints in the form of sectors of a circle. Furthermore, we model constraints for different parts of the output.
Convex polytopes. We consider convex polytopes in $\mathbb{R}^M$ which can be described by the convex hull of $N$ $M$-dimensional vertices $v^{(1)}, \dots, v^{(N)}$:

$$ C = \Big\{ \textstyle\sum_{i=1}^{N} p_i\, v^{(i)} \;\Big|\; p_i \ge 0,\ \sum_{i=1}^{N} p_i = 1 \Big\}. \quad (4) $$
We assume that the vertices are functions $v^{(i)}(s)$ of the constraint parameter and define output constraints $C(s)$ via Eq. 4. The constraint guard layer for a class of these constraints can easily be constructed with the softmax function:

$$ \phi(z, s) = \sum_{i=1}^{N} \mathrm{softmax}_i(z)\, v^{(i)}(s), \quad z \in \mathbb{R}^N. \quad (5) $$
$\mathrm{softmax}_i(z)$ denotes the $i$th component of the $N$-dimensional softmax function. The required property of $\phi$ in Eq. 2 follows directly from the properties $\mathrm{softmax}_i(z) \ge 0$ and $\sum_{i=1}^{N} \mathrm{softmax}_i(z) = 1$ of the softmax function. However, some vertices might not be reachable exactly but only up to arbitrary accuracy because $\mathrm{softmax}_i(z) < 1$ for finite $z$. Note that $\phi$ is differentiable with respect to $z$.
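A sketch of this constraint guard layer for convex polytopes (function names are assumed for illustration):

```python
import numpy as np

def softmax(z):
    # numerically stable softmax: non-negative components that sum to one
    e = np.exp(z - np.max(z))
    return e / e.sum()

def polytope_guard_layer(z, vertices):
    """Map an N-dimensional intermediate variable z onto the convex
    polytope spanned by the N vertices (rows of `vertices`). The
    softmax weights form a convex combination, so the output is
    guaranteed to lie inside the polytope for any z."""
    p = softmax(np.asarray(z, dtype=float))
    return p @ np.asarray(vertices, dtype=float)  # sum_i softmax_i(z) * v_i
```

Setting $z = 0$ yields the centroid; driving one component of $z$ to large values moves the output arbitrarily close to the corresponding vertex, which illustrates why vertices are reachable only up to arbitrary accuracy.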
Sectors of a circle. Consider a sector of a circle with center position $(x_0, y_0)$ and radius $r$. We assume that the sector is symmetric with respect to the vertical line through the center and covers an angle of $\psi$ radians. Then the sector of a circle can be described by the following set of points:

$$ C = \big\{ \big(x_0 + \rho \sin\alpha,\ y_0 + \rho \cos\alpha\big) \;\big|\; \rho \in [0, r],\ \alpha \in [-\tfrac{\psi}{2}, \tfrac{\psi}{2}] \big\}. \quad (6) $$
With $s = (x_0, y_0, r, \psi)$, the output constraints can be written as $C(s)$. It is obvious that the following constraint guard layer with an intermediate variable $z = (z_1, z_2) \in \mathbb{R}^2$ fulfills Eq. 2 for a class of these constraints:

$$ \phi(z, s) = \begin{pmatrix} x_0 + r\,\sigma(z_1)\, \sin\!\big(\psi\,(\sigma(z_2) - \tfrac{1}{2})\big) \\ y_0 + r\,\sigma(z_1)\, \cos\!\big(\psi\,(\sigma(z_2) - \tfrac{1}{2})\big) \end{pmatrix}. \quad (7) $$
Note that we use the sigmoid function $\sigma$ to map a real number to the interval $(0, 1)$.
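A hedged sketch of the sector guard layer; the exact parameterization (radius and angle each driven by one sigmoid) is an assumption consistent with the description above:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def sector_guard_layer(z, x0, y0, radius, psi):
    """Sketch of a sector-of-a-circle guard layer: the two sigmoids
    squash z into (0, 1)^2, parameterizing the radial and angular
    coordinate inside a sector that is symmetric about the vertical
    line through (x0, y0) and spans psi radians."""
    z1, z2 = z
    rho = radius * sigmoid(z1)         # radial coordinate in (0, radius)
    alpha = psi * (sigmoid(z2) - 0.5)  # angle in (-psi/2, psi/2)
    return np.array([x0 + rho * np.sin(alpha), y0 + rho * np.cos(alpha)])
```

Every output is at distance at most `radius` from the center and within the angular range, so the constraint holds by construction.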
Constraints on output parts. We consider an output $y$ consisting of $k$ parts $y^{(i)}$ ($i = 1, \dots, k$):

$$ y = \big(y^{(1)}, \dots, y^{(k)}\big). $$
Each output part $y^{(i)}$ should be constrained independently to an output constraint $C^{(i)}(s^{(i)})$ of a part-specific class of constraints $\mathcal{C}^{(i)} = \{C^{(i)}(s^{(i)})\}_{s^{(i)} \in S^{(i)}}$:

$$ y^{(i)} \in C^{(i)}(s^{(i)}), \quad i = 1, \dots, k. $$
This is equivalent to constraining the overall output to $C(s) = C^{(1)}(s^{(1)}) \times \cdots \times C^{(k)}(s^{(k)})$ with $s = (s^{(1)}, \dots, s^{(k)})$. The class of constraints for the overall output is then given by:

$$ \mathcal{C} = \big\{ C(s) \big\}_{s \in S}, \quad S = S^{(1)} \times \cdots \times S^{(k)}. $$
Assume that the constraint guard layers $\phi^{(i)}$ for the parts are given, i.e. $\phi^{(i)}(Z^{(i)}, s^{(i)}) = C^{(i)}(s^{(i)})$ for $i = 1, \dots, k$. Then an overall constraint guard layer $\phi$, i.e. $\phi(Z, s) = C(s)$ for $Z = Z^{(1)} \times \cdots \times Z^{(k)}$, can be constructed by concatenating the constraint guard layers of the parts:

$$ \phi(z, s) = \Big(\phi^{(1)}\big(z^{(1)}, s^{(1)}\big), \dots, \phi^{(k)}\big(z^{(k)}, s^{(k)}\big)\Big). \quad (13) $$
The validity of the property in Eq. 2 for $\phi$ with respect to $\mathcal{C}$ follows immediately from the validity of this property for the $\phi^{(i)}$ with respect to the $\mathcal{C}^{(i)}$.
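This concatenation can be sketched as follows (the helper names and the 1-D interval guard, a two-vertex special case of the polytope guard layer, are assumed for illustration):

```python
import numpy as np

def interval_guard(z, lo, hi):
    """1-D convex polytope with two vertices: convex combination of the
    interval boundaries via a 2-dimensional softmax."""
    e = np.exp(np.asarray(z, dtype=float) - np.max(z))
    p = e / e.sum()
    return np.array([p[0] * lo + p[1] * hi])

def concat_guard_layer(parts):
    """Overall guard layer for an output split into parts: `parts` is a
    list of (guard, z, params) triples; each part's guard is applied to
    its own slice of the intermediate variable and the results are
    concatenated. Each part provably satisfies its own constraint,
    hence the whole output satisfies the product constraint."""
    return np.concatenate([guard(z, *params) for guard, z, params in parts])
```

For example, two interval-constrained parts with zero logits land at the midpoints of their respective intervals.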
3.4 Training

In supervised learning, the parameters $\theta$ of a neural network are learned from data by utilizing a set of input-output pairs $\{(x_t, y_t)\}_t$. However, ConstraintNet has an additional input $s$ which is not unique for a sample. The constraint parameter provides information in the form of a region restricting the true output, and therefore the constraint parameter for a sample $(x_t, y_t)$ could be any element of the set of valid constraint parameters $S_{y_t} = \{ s \in S \mid y_t \in C(s) \}$.
We propose to sample $s$ from this set to create representative input-output pairs $((x_t, s), y_t)$. This sampling procedure enables ConstraintNet to be trained with standard supervised learning algorithms for neural networks. Note that many input-output pairs can be generated from the same data point by sampling different constraint parameters $s \in S_{y_t}$. Therefore, ConstraintNet is forced to learn an invariant prediction for the same sample under different constraint parameters.
We train ConstraintNet with gradient-based optimization and sample $s$ within the training loop as shown in Algorithm 1. The learning objective is given by:

$$ L(\theta) = \sum_t \ell\big(f(x_t, s_t; \theta),\, y_t\big) + \lambda\, R(\theta), $$
with $\ell$ being the sample loss, $R(\theta)$ a regularization term and $\lambda$ a weighting factor. The sample loss term penalizes deviations of the neural network prediction $f(x_t, s_t; \theta)$ from the ground truth $y_t$. We apply ConstraintNet to regression problems and use the mean squared error as sample loss.
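The sampling step inside the training loop can be sketched for a scalar output with interval constraints; the helper name and the `max_margin` hyperparameter are assumptions for illustration:

```python
import numpy as np

def sample_valid_constraint(y_true, max_margin=1.0, rng=None):
    """Sample a valid constraint parameter s = (lo, hi) for a scalar
    ground truth: any interval containing y_true is valid, so the two
    margins are drawn independently (max_margin is an assumed
    hyperparameter)."""
    rng = np.random.default_rng() if rng is None else rng
    lo = y_true - rng.uniform(0.0, max_margin)
    hi = y_true + rng.uniform(0.0, max_margin)
    return lo, hi

# Inside the training loop (sketch):
#   for x, y in dataset:
#       s = sample_valid_constraint(y)
#       y_hat = constraint_net(x, s)   # provably lo <= y_hat <= hi
#       loss = (y_hat - y) ** 2 + weight * regularizer(theta)
#       ... backpropagate through the guard layer and update theta ...
```

Because the sampled region always contains the ground truth, the constrained prediction can still reach the target during training.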
4 Applications

In this section, we apply ConstraintNet to a facial landmark prediction task and a follow object controller for vehicles. The output constraints for the facial landmark prediction task restrict the solution space to consistent outputs, whereas the constraints for the follow object controller help to prevent collisions and to avoid violations of legislative standards. We want to highlight that both applications are exemplary. The main goal is an illustrative demonstration of leveraging output constraints with ConstraintNet in applications.
4.1 Consistent facial landmark prediction
In our first application, we consider facial landmark prediction for the nose $(x_n, y_n)$, the left eye $(x_{le}, y_{le})$ and the right eye $(x_{re}, y_{re})$ on image data. We assume that each image pictures a face. We introduce constraints to confine the landmark predictions for nose, left eye and right eye to a bounding box which might be given by a face detector. Then, we extend these constraints and enforce relative positions between landmarks, such as the eyes lying above the nose. These constraints are visualized in the top row of Fig. 3. The bottom row shows constraints for the nose landmark in the form of a triangle and a sector of a circle. These constraints can be realized with the constraint guard layers in Eq. 5 and Eq. 7. However, they are of less practical relevance.
Modified CNN architecture. First of all, we define the output of ConstraintNet according to:

$$ y = \big(x_n, x_{le}, x_{re},\, y_n, y_{le}, y_{re}\big). $$
We denote the $x$-coordinates with $x_i$ and the $y$-coordinates with $y_i$, $i \in \{n, le, re\}$. ConstraintNet can be constructed by modifying a CNN according to Fig. 2 and Sec. 3.2. E.g. ResNet50 is a common CNN architecture which is used for many classification and regression tasks in computer vision. In the case of regression, the prediction is usually generated by a final dense layer with linear activations. The modifications comprise adapting the output dimension of this final dense layer to match the required dimension of $z$, adding the constraint guard layer for the considered class of constraints and inserting a representation $g(s)$ of the constraint parameter at the stage of intermediate layers. We define $g(s)$ as a tensor, identify its channels with the components $s_i$ of the constraint parameter and set all entries within channel $i$ to a rescaled value of the corresponding constraint parameter component:

$$ g(s)_{i,u,v} = \gamma_i\, s_i, \quad u \in \{1, \dots, H\},\ v \in \{1, \dots, W\}, $$
where $W$ and $H$ denote the width and height of the tensor and each $\gamma_i$ is a rescaling factor. We suggest choosing the factors such that $\gamma_i s_i$ matches approximately the scale of the values in the output of the layer which is extended by $g(s)$.
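The tensor representation can be sketched as follows (the function name is an assumption):

```python
import numpy as np

def constraint_tensor(s, height, width, scales):
    """Tensor representation g(s): one channel per component of the
    constraint parameter; every entry of channel i holds the rescaled
    value scales[i] * s[i]. The result can be concatenated channel-wise
    with the feature map of an intermediate convolutional layer."""
    s = np.asarray(s, dtype=float)
    scales = np.asarray(scales, dtype=float)
    return np.broadcast_to((scales * s)[:, None, None],
                           (s.size, height, width)).copy()
```

For a bounding box $s = (l, r, t, b)$ this produces four constant channels whose spatial size matches the feature map they are appended to.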
Bounding box constraints. The bounding box is specified by a left boundary $l$, a right boundary $r$, a top boundary $t$ and a bottom boundary $b$. Note that the $y$-axis starts at the top of the image and points downwards. Confining the landmark predictions to the bounding box is equivalent to constraining each $x_i$ to the interval $[l, r]$ and each $y_i$ to the interval $[t, b]$ independently. These intervals are one-dimensional convex polytopes with the interval boundaries as vertices. Thus, we can write the output constraints for the components with the definition in Eq. 4 as:

$$ x_i \in \big\{ p_1 l + p_2 r \mid p_1, p_2 \ge 0,\ p_1 + p_2 = 1 \big\}, \qquad y_i \in \big\{ p_1 t + p_2 b \mid p_1, p_2 \ge 0,\ p_1 + p_2 = 1 \big\}, $$
with $s = (l, r, t, b)$ and $i \in \{n, le, re\}$. The constraint guard layers of the components are given by Eq. 5:

$$ \phi^{(x)}(z, s) = \mathrm{softmax}_1(z)\, l + \mathrm{softmax}_2(z)\, r, \qquad \phi^{(y)}(z, s) = \mathrm{softmax}_1(z)\, t + \mathrm{softmax}_2(z)\, b, $$
with $z \in \mathbb{R}^2$ and $\mathrm{softmax}$ the two-dimensional softmax function. Finally, the overall constraint guard layer can be constructed from the constraint guard layers of the components according to Eq. 13 and requires a $12$-dimensional intermediate variable $z$.
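The resulting overall guard layer for the bounding-box class can be sketched as (function names and output ordering are assumptions consistent with the text):

```python
import numpy as np

def softmax2(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def bbox_landmark_guard(z, l, r, t, b):
    """Overall guard layer for the bounding-box class: z has 12
    components, one 2-dimensional logit pair per output coordinate.
    Each x-coordinate becomes a convex combination of l and r, each
    y-coordinate of t and b, so all three landmarks provably lie
    inside the box [l, r] x [t, b]."""
    z = np.asarray(z, dtype=float).reshape(6, 2)
    bounds = [(l, r)] * 3 + [(t, b)] * 3  # order: (x_n, x_le, x_re, y_n, y_le, y_re)
    return np.array([softmax2(zi) @ np.array([lo, hi])
                     for zi, (lo, hi) in zip(z, bounds)])
```

Zero logits place every coordinate at the midpoint of its interval; arbitrary logits never leave the box.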
Enforcing relations between landmarks. We extend the bounding box constraints to model relations between landmarks. As an example, we enforce that the left eye is in fact to the left of the right eye ($x_{le} \le x_{re}$) and that the eyes are above the nose ($y_{le}, y_{re} \le y_n$). These constraints can be written as three independent constraints for the output parts $y^{(1)} = x_n$, $y^{(2)} = (x_{le}, x_{re})$ and $y^{(3)} = (y_n, y_{le}, y_{re})$:

$$ x_n \in [l, r], \qquad l \le x_{le} \le x_{re} \le r, \qquad t \le y_{le}, y_{re} \le y_n \le b, $$
with constraint parameters $s^{(1)} = s^{(2)} = (l, r)$ and $s^{(3)} = (t, b)$. Fig. 4 visualizes the constraints for the output parts: $C^{(1)}$ is a line segment in 1D, $C^{(2)}$ is a triangle in 2D and $C^{(3)}$ is a pyramid with four vertices in 3D. All of these are convex polytopes and therefore the constraint guard layers for the parts are given by Eq. 5. Note that Eq. 5 requires an intermediate variable with dimension equal to the number of vertices of the corresponding polytope. Finally, the overall constraint guard layer is given by combining the parts according to Eq. 13 and depends on an intermediate variable with dimension $2 + 3 + 4 = 9$. Note that the introduced relations between the landmarks might be violated under rotations of the image; we consider them for demonstration purposes.
Training. For training of ConstraintNet, valid constraint parameters $s \in S_{y_t}$ need to be sampled according to Algorithm 1. To achieve this, random bounding boxes around the face which cover the considered facial landmarks can be created. E.g. in a first step, determine the smallest rectangle (parallel to the image boundaries) which covers the landmarks of nose, left eye and right eye. Next, sample four integers from a given range and use them to extend each of the four rectangle boundaries independently. The sampled constraint parameter is then given by the boundaries $s = (l, r, t, b)$ of the generated box. In inference, the bounding boxes might be given by a face detector.
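The two sampling steps above can be sketched as follows (the function name and the `max_ext` range are assumptions):

```python
import numpy as np

def sample_bbox_constraint(landmarks, max_ext=20, rng=None):
    """Sample a valid constraint parameter: take the smallest
    axis-parallel box covering the landmarks and extend each boundary
    by an independently sampled integer margin (max_ext is an assumed
    range). Returns s = (l, r, t, b)."""
    rng = np.random.default_rng() if rng is None else rng
    pts = np.asarray(landmarks, dtype=float)  # rows: (x, y) per landmark
    l, t = pts.min(axis=0)
    r, b = pts.max(axis=0)
    dl, dr, dt, db = rng.integers(0, max_ext + 1, size=4)
    return l - dl, r + dr, t - dt, b + db
```

By construction, every sampled box contains all three landmarks, so the ground truth always lies inside the constraint.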
4.2 Follow object controller with safety constraints
Adaptive cruise control (ACC) is a common driver assistance system for longitudinal control and is available in many vehicles nowadays. A follow object controller (FOC) is part of the ACC and is activated when a vehicle (target-vehicle) is ahead. This situation is visualized in Fig. 5. The output of the FOC is a demanded acceleration $a$ for the ego-vehicle with the goal of keeping a velocity-dependent distance to the target-vehicle under consideration of comfort and safety aspects. Common inputs for the FOC are sensor measurements such as the relative position (distance) $d$, the relative velocity $v_{rel}$ and the relative acceleration $a_{rel}$ of the target-vehicle w.r.t. the coordinate system of the ego-vehicle, and the velocity $v$ of the ego-vehicle.
Modified fully connected network. The FOC is usually modeled explicitly based on expert knowledge and classical control theory. Improving the quality of the controller leads to models with an increasing number of separately handled cases, a higher complexity and a higher number of adjustable parameters. Finally, adjusting the model parameters becomes tedious work. This motivates the idea to implement the FOC as a neural network and learn the parameters $\theta$, e.g. in a reinforcement learning setting. Implementing the FOC with a common neural network comes at the expense of losing safety guarantees. However, with ConstraintNet the demanded acceleration can be confined to a safe interval $[a_{\min}, a_{\max}]$ (a convex polytope in 1D) in each forward pass independently. A ConstraintNet for this output constraint can be created by modifying a neural network with several fully connected layers. The output $z$ should be two-dimensional such that the constraint guard layer in Eq. 5 for a 1D polytope can be applied. For the representation $g(s)$ of the constraint parameter, rescaled values of the upper and lower bound are appropriate and can be added to the input. $g(s)$ is not inserted at an intermediate layer due to the smaller size of the network.
Constraints for safety. The output of ConstraintNet should be constrained to a safe interval $[a_{\min}, a_{\max}]$. The interval is a convex polytope in 1D:

$$ C(s) = \big\{ p_1 a_{\min} + p_2 a_{\max} \mid p_1, p_2 \ge 0,\ p_1 + p_2 = 1 \big\}, $$
with $s = (a_{\min}, a_{\max})$. The constraint guard layer is given by Eq. 5. The upper bound $a_{\max}$ restricts the acceleration to avoid collisions. For deriving $a_{\max}$, we assume that the target-vehicle accelerates constantly with its current acceleration and that the ego-vehicle initially continues its movement with the demanded acceleration. The demanded acceleration is then limited by the requirement that it must be possible to brake without violating maximal jerk and deceleration bounds and without undershooting a minimal distance to the target-vehicle. Thus, $a_{\max}$ is the maximal acceleration which satisfies this condition. The maximal allowed deceleration for the ACC is given by a velocity-dependent bound in ISO 15622 and would be an appropriate choice for $a_{\min}$.
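The condition behind $a_{\max}$ can be sketched as a forward simulation. All numeric bounds here (deceleration limit, minimal distance, step size, the grid of candidate accelerations) are assumed for illustration, and the jerk limit mentioned above is omitted for brevity; a real controller would derive $a_{\max}$ analytically rather than by grid search:

```python
import numpy as np

def is_safe_acceleration(a_cmd, gap, v_ego, v_tgt, a_tgt,
                         a_min=-3.5, d_min=2.0, dt=0.05, horizon=10.0):
    """Safety check sketch: the target is assumed to keep its current
    acceleration a_tgt, the ego applies a_cmd for one control step and
    then brakes with the maximal allowed deceleration a_min. a_cmd is
    safe if the simulated gap never undershoots d_min."""
    t, a_ego = 0.0, a_cmd
    while t < horizon and (v_ego > 0.0 or v_tgt > 0.0):
        if t >= dt:
            a_ego = a_min                     # full braking after one step
        v_ego = max(0.0, v_ego + a_ego * dt)  # vehicles do not reverse
        v_tgt = max(0.0, v_tgt + a_tgt * dt)
        gap += (v_tgt - v_ego) * dt
        if gap < d_min:
            return False
        t += dt
    return gap >= d_min

def a_max(gap, v_ego, v_tgt, a_tgt):
    """Largest candidate acceleration passing the safety check (a grid
    search stand-in for an analytic derivation of the upper bound)."""
    for a in np.linspace(3.0, -3.5, 66):
        if is_safe_acceleration(a, gap, v_ego, v_tgt, a_tgt):
            return float(a)
    return -3.5
```

With a large gap and matched speeds the full positive range is safe, while a small gap to a stopped target forces maximal braking.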
Training and reinforcement learning. In comparison to supervised learning, reinforcement learning allows learning from experience, i.e. by interacting with the environment. The quality of the interaction with the environment is measured with a reward function, and the interaction itself is usually implemented with a simulator. The reward function can be understood as a metric for optimal behavior, and the reinforcement learning algorithm learns a policy which optimizes the reward. In our case, the policy is the ConstraintNet for the FOC. Instead of sampling the constraint parameter from a set of valid constraint parameters, exactly one valid $s = (a_{\min}, a_{\max})$ is computed corresponding to the safe interval. Thereby, deep reinforcement learning algorithms for continuous control problems are applicable. One promising candidate is the Twin Delayed DDPG (TD3) algorithm. Note that ConstraintNet leads to collision-free training, i.e. training episodes are not interrupted.
5 Conclusion

In this paper, we have presented an approach to construct neural network architectures with the capability to constrain the space of possible predictions in each forward pass independently. We call a neural network with such an architecture ConstraintNet. The validity of the output constraints is proven and originates from the design of the architecture. As one of our main contributions, we presented a generic modeling for constraints in the form of convex polytopes. Furthermore, we demonstrated the application of ConstraintNet to a facial landmark prediction task and a follow object controller for vehicles. The first application serves to demonstrate different constraint classes, whereas the second shows how output constraints allow functional safety to be addressed. We think that the developed methodology is an important step towards the application of neural networks in safety-critical functions. We have promising results in ongoing work and plan to publish experimental results in the future.
References

- Thang D. Bui, Sujith Ravi, and Vivek Ramavajjala. Neural graph learning: Training neural networks using graphs. In WSDM 2018 - Proceedings of the 11th ACM International Conference on Web Search and Data Mining, pages 64–71, 2018.
- Krishnamurthy Dvijotham, Sven Gowal, Robert Stanforth, Relja Arandjelovic, Brendan O’Donoghue, Jonathan Uesato, and Pushmeet Kohli. Training verified learners with learned verifiers. arXiv preprint arXiv:1805.10265, 2018.
- Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. Visualizing higher-layer features of a deep network. University Montreal, (1341):1–13, 2009.
- Scott Fujimoto, Herke Van Hoof, and David Meger. Addressing Function Approximation Error in Actor-Critic Methods. In 35th International Conference on Machine Learning, ICML 2018, volume 4, pages 2587–2601. International Machine Learning Society (IMLS), 2018.
- Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In 33rd International Conference on Machine Learning, volume 3, pages 1651–1660, 2016.
- Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter, and Lalana Kagal. Explaining explanations: An overview of interpretability of machine learning. In Proceedings - 2018 IEEE 5th International Conference on Data Science and Advanced Analytics, DSAA 2018, pages 80–89, 2019.
- Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and Harnessing Adversarial Examples. International Conference on Learning Representations, 2015.
- Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
- Xiaowei Huang, Marta Kwiatkowska, Sen Wang, and Min Wu. Safety verification of deep neural networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), volume 10426 LNCS, pages 3–29, 2017.
- Anuj Karpatne, William Watkins, Jordan Read, and Vipin Kumar. Physics-guided Neural Networks (PGNN): An Application in Lake Temperature Modeling. arXiv preprint arXiv:1710.11431, 2017.
- Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, and Mykel J. Kochenderfer. Reluplex: An efficient smt solver for verifying deep neural networks. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 10426 LNCS:97–117, 2017.
- Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.
- Stéphane Lathuilière, Pablo Mesejo, Xavier Alameda-Pineda, and Radu Horaud. A Comprehensive Analysis of Deep Regression. arXiv preprint arXiv:1803.08450, 2018.
- Changliu Liu, Tomer Arnon, Christopher Lazarus, Clark Barrett, and Mykel J. Kochenderfer. Algorithms for Verifying Deep Neural Networks. arXiv preprint arXiv:1903.06758, 2019.
- Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards Deep Learning Models Resistant to Adversarial Attacks. International Conference on Learning Representations, 2017.
- Wenjie Ruan, Xiaowei Huang, and Marta Kwiatkowska. Reachability analysis of deep neural networks with provable guarantees. In IJCAI International Joint Conference on Artificial Intelligence, pages 2651–2659, 2018.
- Cynthia Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5):206–215, 2019.
- Sara Sabour, Nicholas Frosst, and Geoffrey E Hinton. Dynamic routing between capsules. In Advances in Neural Information Processing Systems, pages 3857–3867, 2017.
- Rick Salay, Rodrigo Queiroz, and Krzysztof Czarnecki. An Analysis of ISO 26262: Using Machine Learning Safely in Automotive Software. arXiv preprint arXiv:1709.02435, 2017.
- Sascha Saralajew, Lars Holdijk, Maike Rees, and Thomas Villmann. Prototype-based Neural Network Layers: Incorporating Vector Quantization. arXiv preprint arXiv:1812.01214, 2018.
- Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. arXiv preprint arXiv:1312.6034, 2013.
- Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
- The International Organization for Standardization. Road vehicles – Functional safety. ISO 26262, 2011.
- The International Organization for Standardization. Adaptive Cruise Control. ISO 15622, 2018.
- Jie Zhou, Ganqu Cui, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. Graph Neural Networks: A Review of Methods and Applications. arXiv preprint arXiv:1812.08434, 2018.