On the application of McDiarmid’s inequality to complex systems

Thanks: This work was supported by Quantification Methods, a part of NNSA’s Advanced Simulation and Computing Program, and by the Department of Energy under contract DE-AC52-06NA25396.
McDiarmid’s inequality has recently been proposed as a tool for setting margin requirements for complex systems. If $F$ is the bounded output of a complex system, depending on a vector of bounded inputs, this inequality provides a bound $m_\epsilon$, such that the probability of a deviation exceeding $m_\epsilon$ is less than $\epsilon$. I compare this bound with the absolute bound $m_{\mathrm{abs}}$, based on the range of $F$. I show that when $N_{\mathrm{eff}}$, the effective number of independent variates, is small, and when $\epsilon$ is small, the absolute bound is smaller than $m_\epsilon$, while also providing a smaller probability of exceeding the bound, i.e., zero instead of $\epsilon$. Thus, for $m_\epsilon$ to be useful, the number of inputs must be large, with only a small dependence on any single input, which is consistent with the usual guidance for the application of concentration-of-measure results. When the number of inputs is small, or when a small number of inputs account for much of the uncertainty, the absolute bounds will provide better results. The use of absolute bounds is equivalent to the original formulation of the method of Quantification of Margins and Uncertainties (QMU).
Key words. McDiarmid inequality, concentration of measures, Quantification of Margins and Uncertainties (QMU), margin, uncertainty quantification, certification.
AMS subject classifications. 60E15, 60F10, 60G50, 60G70, 65C50.
1 Introduction

Let $F = F(X)$ be a bounded function of $N$ independent real and bounded random variates $X = (X_1, \dots, X_N)$. $F$ might, for example, represent the output of a complex system depending on $N$ random inputs. We are interested in the fluctuations in $F$ due to randomness in the inputs, where by fluctuation we mean deviation from the mean. McDiarmid’s inequality implies that for every $\epsilon > 0$, there is a bound $m_\epsilon$, such that the probability of a fluctuation exceeding that bound is less than $\epsilon$:

\[ \mathrm{P}(F - \mathbb{E}F \ge m_\epsilon) \le \epsilon. \]
We call $m_\epsilon$ the McDiarmid bound. If the inputs induce comparable deviations in $F$, then as $N$ increases, $m_\epsilon$ grows roughly as $\sqrt{N}$, as we might expect from the independence of the inputs.
There is, however, another bound for the size of these fluctuations, which is so trivial that it tends to escape notice. Assume for simplicity that the mean of $F$ is midway between its minimum and maximum values. Then the largest possible fluctuation is $m_{\mathrm{abs}} = R/2$, where $R = \sup F - \inf F$. When this bound applies, it is very effective: the probability of exceeding the bound is not just small; it is zero. This trivial bound is not useful when $N$ is large, however. Unless we make further assumptions, it scales as $N$, so as $N$ gets large, it rapidly loses out to $m_\epsilon$, which scales only as $\sqrt{N}$.
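The competition between the two scalings can be sketched numerically. The snippet below is an illustration only: the helper names are mine, and it assumes $N$ identical inputs of diameter $c$, a linear response, and the inverted McDiarmid bound $m_\epsilon = c\sqrt{N\ln(1/\epsilon)/2}$ used later in the paper.

```python
import math

def absolute_bound(N, c):
    # Largest possible fluctuation: half the range, which for a linear
    # response to N identical inputs of diameter c is N * c / 2.
    return N * c / 2.0

def mcdiarmid_bound(N, c, eps):
    # Deviation level exceeded with probability at most eps:
    # m_eps = D * sqrt(ln(1/eps) / 2), with D = c * sqrt(N).
    return c * math.sqrt(N * math.log(1.0 / eps) / 2.0)

for N in (1, 4, 16, 64):
    print(N, absolute_bound(N, 1.0), round(mcdiarmid_bound(N, 1.0, 0.005), 2))
```

For $\epsilon = 0.005$ the crossover occurs between $N = 4$ and $N = 16$, in line with the threshold of about 10.6 discussed below.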
The purpose of this paper is to observe that when $N$ is small, the opposite is true: the McDiarmid bound is not useful, because it is larger than the trivial bound for small $\epsilon$. Our heuristics have assumed that each $X_i$ has roughly the same impact on $F$, but in general this may not be true, and we need to use $N_{\mathrm{eff}}$, the effective number of independent variables, as defined below. $N_{\mathrm{eff}}$ can be small even when $N$ is large. Specifically, we will show that

\[ m_\epsilon \ge \sqrt{\frac{2\ln(1/\epsilon)}{N_{\mathrm{eff}}}}\; m_{\mathrm{abs}}. \]
For McDiarmid’s bound to be useful, therefore, it is necessary that

\[ N_{\mathrm{eff}} > 2\ln(1/\epsilon). \]
If, for example, $\epsilon = 0.005$, corresponding to a two-sided probability of exceeding the bound of 1%, then $N_{\mathrm{eff}}$ must be greater than $2\ln(200) \approx 10.6$. If $N_{\mathrm{eff}} \le 2\ln(1/\epsilon)$, we should always use the absolute bound, because it is smaller, and ensures that fluctuations exceeding the bound happen with probability zero, rather than probability $\epsilon$.
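The threshold is trivial to evaluate; this is a minimal sketch (the function name is mine), assuming the necessary condition $N_{\mathrm{eff}} > 2\ln(1/\epsilon)$ stated above.

```python
import math

def neff_threshold(eps):
    # Necessary condition for the McDiarmid bound to beat the absolute
    # bound: the effective number of inputs must exceed 2 * ln(1/eps).
    return 2.0 * math.log(1.0 / eps)

print(round(neff_threshold(0.005), 1))   # 10.6: one-sided eps of 0.5%
print(round(neff_threshold(0.0005), 1))  # a smaller eps is more demanding
```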
McDiarmid’s inequality has usually been applied in the large-$N$ limit. The small-$N_{\mathrm{eff}}$ properties are of interest because McDiarmid’s bound has recently been applied to the problem of deducing margin requirements in complex systems, where $N_{\mathrm{eff}}$ is not always large [6, 9, 5, 1, 8]. In particular, it has been proposed, along with more general concentration-of-measure (CoM) inequalities, as a way of formalizing the method of Quantification of Margins and Uncertainties (QMU) [3, 2, 10]. Suppose that $F$ describes a complex system, and we want to ensure that the probability of failure (POF) is $\epsilon$ or less. Suppose further that we can engineer our system to function properly for any fluctuation of size $m$ or less; $m$ is called the margin. If $m \ge m_\epsilon$, then we know that the POF is less than $\epsilon$. The problem of ensuring a suitably small POF is sometimes called the certification problem [6, 8].
The McDiarmid inequality appears to provide a general solution to the certification problem, because for any desired POF $\epsilon$, there is a bound $m_\epsilon$ that guarantees that the desired POF will not be exceeded. It has never been clear, however, whether $m_\epsilon$ is small enough to be practical or useful. If the bound is so large that the margin cannot be made to exceed it, the bound will not be useful. One might assume that the usefulness of $m_\epsilon$ would depend on the details of the specific problem, and that nothing meaningful can be said in general. The usefulness of $m_\epsilon$ would then appear to be a question for simulation studies, which is how it has been addressed up to this point.
The present result provides a simple necessary condition, depending only on $N_{\mathrm{eff}}$ and $\epsilon$, for the McDiarmid bounds to be an improvement on the absolute bounds. It shows that the use of McDiarmid bounds is only potentially an improvement for a very specific type of problem, in which the output depends on a large number of independent inputs, and depends only weakly on any individual input. For examples of such models, see references [1] and [5]. If, on the other hand, the number of inputs is small, or if a small number of the inputs contribute most of the uncertainty (which leads to a small $N_{\mathrm{eff}}$), then the McDiarmid bound will be larger than the trivial bound, even for moderately demanding values of $\epsilon$. We expect many complex systems to fall into this category.
Consider, for example, the case in which $F$ is a computational model of a complex system. In such models, $N$ is generally small, because the computational cost of exploring a high-dimensional space is prohibitive. For example, if $N = 10$ and we allot only two values to each variate, we already require $2^{10} = 1024$ runs. This value of ten is smaller than the value of 10.6 cited above. But in any case, the relevant quantity is not $N$ but the effective number of variates $N_{\mathrm{eff}}$, which accounts for the fact that some inputs have more impact on $F$ than others, and is defined below. In the computational examples we have considered, $N_{\mathrm{eff}}$ is often only three or four.
The effective number of independent variates, $N_{\mathrm{eff}}$, is defined in terms of the McDiarmid diameters $c_i$:

\[ N_{\mathrm{eff}} = \frac{\left(\sum_i c_i\right)^2}{D^2}, \]

where $D^2 = \sum_i c_i^2$. The McDiarmid diameters are defined precisely in the next section, but roughly, $c_i$ represents the maximum variation in $F$ that can be caused by variations in $X_i$, with the other $X_j$ held fixed. If the $c_i$ are all the same, then $N_{\mathrm{eff}} = N$; by the Cauchy–Schwarz inequality, $N_{\mathrm{eff}} \le N$ in general. If, however, a few of the inputs carry essentially all of the uncertainty, then $N_{\mathrm{eff}}$ will also be small, regardless of how large $N$ is. For example, if $k$ of the $c_i$ are identical and the rest are zero, then $N_{\mathrm{eff}} = k$. The best estimate of the relative contribution of $X_i$ to the uncertainty is $f_i = c_i^2/D^2$, since, as we see below, the $c_i$ add in quadrature to form the system diameter $D$. Since $\left(\sum_i c_i\right) c_{\max} \ge \sum_i c_i^2$, where $c_{\max}$ is the largest $c_i$, we have $f_{\max} \ge 1/N_{\mathrm{eff}}$, where $f_{\max}$ is the largest $f_i$. Thus, for example, if $N_{\mathrm{eff}} < 4$, at least one input must contribute more than a fourth of the uncertainty, regardless of how large $N$ is. In this way, $N_{\mathrm{eff}}$ quantifies the requirement that, in order for the McDiarmid margin to be useful, no single input can contribute very much to the uncertainty.
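These properties are easy to check numerically. A sketch, assuming the definition $N_{\mathrm{eff}} = (\sum_i c_i)^2 / \sum_i c_i^2$ as reconstructed above; the helper name is mine.

```python
def n_eff(c):
    # Effective number of independent variates from the diameters c_i.
    s = sum(c)
    return s * s / sum(ci * ci for ci in c)

print(n_eff([1.0] * 8))              # all diameters equal: N_eff = N = 8
print(n_eff([1.0] * 3 + [0.0] * 5))  # three inputs carry everything: N_eff = 3

# A small N_eff implies a dominant input: f_max >= 1 / N_eff.
c = [2.0, 1.0, 0.5, 0.5]
f_max = max(ci * ci for ci in c) / sum(ci * ci for ci in c)
print(f_max >= 1.0 / n_eff(c))
```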
For simplicity, we have so far assumed that the mean of $F$ is midway between its extremes, which is often approximately true. In general, the mean may not be in the center, in which case the absolute bounds on deviations from the mean can be written $m_{\mathrm{abs}}^\pm = \beta_\pm R/2$, where $m_{\mathrm{abs}}^+$ and $m_{\mathrm{abs}}^-$ are the bounds for positive and negative deviations, $\beta_\pm = 2R_\pm/R$, and $R_+ = \sup F - \mathbb{E}F$, $R_- = \mathbb{E}F - \inf F$. The corresponding condition on $N_{\mathrm{eff}}$ is obtained by replacing $2\ln(1/\epsilon)$ with $2\ln(1/\epsilon)/\beta_\pm^2$. If deviations in both directions are of interest, then we will have $\beta_\pm \le 1$ for either the positive or the negative deviations, so that the condition on $N_{\mathrm{eff}}$ will be even more demanding. In some cases, deviations in only one direction will be of interest, and the corresponding $R_\pm/R$ may approach one, so that $\beta_\pm$ approaches two. In such cases, the requirement on $N_{\mathrm{eff}}$ can be reduced by a factor of up to four.
We present our main result in the next section, and then discuss the consequences for uncertainty quantification in the final section.
2 Main result
I now define the terms and prove the main result. Let $X = (X_1, \dots, X_N)$, where each $X_i$ is an independent random variable. Let $F$ be some function of $X$, which is bounded for $X$ in the range of the $X_i$. The McDiarmid diameters $c_i$ of $F$ are defined as

\[ c_i = \sup_{x,\, x_i'} \left| F(x_1, \dots, x_i, \dots, x_N) - F(x_1, \dots, x_i', \dots, x_N) \right|. \]
The supremum and infimum, here and elsewhere, are always taken over values in the range of the $X_i$. The McDiarmid system diameter $D$ is

\[ D = \left( \sum_{i=1}^{N} c_i^2 \right)^{1/2}. \]
The McDiarmid inequality [7] states that

\[ \mathrm{P}(F - \mathbb{E}F \ge t) \le \exp\!\left(-\frac{2t^2}{D^2}\right). \]
The same bound applies to $\mathbb{E}F - F$. The McDiarmid bound $m_\epsilon$ is obtained by inverting this expression:

\[ m_\epsilon = D \sqrt{\tfrac{1}{2}\ln(1/\epsilon)}. \]
Using the definition of $N_{\mathrm{eff}}$, we may rewrite this expression for $m_\epsilon$ as follows:

\[ m_\epsilon = \left( \sum_i c_i \right) \sqrt{\frac{\ln(1/\epsilon)}{2 N_{\mathrm{eff}}}}. \]
Although this may seem like a step backwards, because we are separating the dependence on the $c_i$ into two parts, it is useful because we can show that $\sum_i c_i \ge R$. We thereby obtain our main result.
Let $m_\epsilon$ be the McDiarmid bound for probability $\epsilon$, and let $m_{\mathrm{abs}} = R/2$. Then

\[ m_\epsilon \ge \sqrt{\frac{2\ln(1/\epsilon)}{N_{\mathrm{eff}}}}\; m_{\mathrm{abs}}. \]
Proof. By the expression for $m_\epsilon$ in terms of $N_{\mathrm{eff}}$, it suffices to prove that $\sum_i c_i \ge R$. But for any $x$ and $x'$ in the range of the inputs,

\[ F(x) - F(x') = \sum_{i=1}^{N} \left[ F(x'_1, \dots, x'_{i-1}, x_i, x_{i+1}, \dots, x_N) - F(x'_1, \dots, x'_{i-1}, x'_i, x_{i+1}, \dots, x_N) \right] \le \sum_i c_i, \]

so that $R = \sup F - \inf F \le \sum_i c_i$. $\Box$
The bound is an inequality because $\sum_i c_i$ may be strictly greater than $R$. It is easy to show that equality holds if $F$ is linear in its inputs. In general, however, when there are interactions between the variables, the bound may be weak. For example, if $F = X_1 X_2 \cdots X_N$, and the range of each $X_i$ is $[-1, 1]$, then $R = 2$, but $\sum_i c_i = 2N$.
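An interaction example of this kind can be verified by brute force. A sketch, assuming $F = X_1 \cdots X_N$ with each input ranging over $[-1, 1]$; since this $F$ is multilinear, its extremes over the cube are attained at the corners, so it suffices to enumerate them.

```python
from itertools import product

def F(x):
    out = 1.0
    for xi in x:
        out *= xi
    return out

def range_and_diameters(N):
    # Range R = sup F - inf F and McDiarmid diameters c_i of F over
    # [-1, 1]^N, computed on the corners of the cube (where the
    # extremes of a multilinear function are attained).
    corners = list(product((-1.0, 1.0), repeat=N))
    vals = [F(x) for x in corners]
    R = max(vals) - min(vals)
    c = []
    for i in range(N):
        d = 0.0
        for x in corners:
            for v in (-1.0, 1.0):
                y = x[:i] + (v,) + x[i + 1:]
                d = max(d, abs(F(x) - F(y)))
        c.append(d)
    return R, c

R, c = range_and_diameters(5)
print(R, sum(c))  # the range stays at 2, while the diameters sum to 2N
```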
3 Discussion

Let us return to the question of how to ensure system reliability, in view of these results. In most applications, $N_{\mathrm{eff}}$ will be whatever it is, and there will be little or no freedom to change it. It may well be small. The only question is whether we can make the margins large enough to ensure a suitably small POF. For many typical systems, neither the absolute bound nor the McDiarmid bound is likely to be of much help, because both will require unattainable margins.
As we have shown, if $N_{\mathrm{eff}}$ is small, then we are best off using the absolute bound. In practice, however, this bound may already be much too large. To use it, we must engineer the system so that it will work properly even under the largest deviation that can possibly occur. That is, we need to engineer the system so that it cannot possibly fail. In many systems, this will not be possible.
If we do have the good fortune of a large $N_{\mathrm{eff}}$, then we might hope to need a smaller margin, by virtue of the McDiarmid bounds. However, the concentration-of-measure effect kicks in very slowly, and a very large $N_{\mathrm{eff}}$ is required to reduce the required margin significantly below the absolute margin. Thus, suppose that it is only possible to engineer the system so that the margin is some fraction $\alpha$ of the possible range: $m = \alpha R$. From McDiarmid’s inequality, we find that if $m = \alpha R$, then the guaranteed POF satisfies

\[ \epsilon = \exp\!\left(-\frac{2\alpha^2 R^2}{D^2}\right) \ge \exp\!\left(-2\alpha^2 N_{\mathrm{eff}}\right), \]

since $R \le \sum_i c_i = \sqrt{N_{\mathrm{eff}}}\, D$,
so that $\epsilon$ will be greater than a target POF $\epsilon_0$ unless

\[ N_{\mathrm{eff}} \ge \frac{\ln(1/\epsilon_0)}{2\alpha^2}. \]
For $\epsilon_0 = 0.005$ and $\alpha = 1/10$, for example, we would need $N_{\mathrm{eff}} \ge \ln(200)/0.02 \approx 265$.
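The required $N_{\mathrm{eff}}$ for a given margin fraction can be tabulated directly; a sketch assuming the relation $N_{\mathrm{eff}} \ge \ln(1/\epsilon_0)/(2\alpha^2)$ obtained above (the helper name is mine).

```python
import math

def required_neff(eps0, alpha):
    # Smallest N_eff for which a margin m = alpha * R could possibly
    # certify a POF of eps0, using R <= sum(c_i) = sqrt(N_eff) * D.
    return math.log(1.0 / eps0) / (2.0 * alpha ** 2)

for alpha in (0.5, 0.25, 0.1):
    print(alpha, math.ceil(required_neff(0.005, alpha)))
```

The requirement grows as $1/\alpha^2$, which is why modest margin fractions quickly demand hundreds of effective inputs.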
The reason these results are so weak is that we have made no assumptions on the input distributions, apart from their bounds. This is quite explicit in the case of the absolute bound, where we use only $R$. In the McDiarmid bounds, we use only information about the maximum and minimum values of $F$, under variations in the $X_i$, and no information about how likely these extreme values are. For this reason, any bounds we obtain must be valid for the most unfavorable distributions, which for example might have half their mass sitting on the minimum, and half on the maximum. Such configurations are often known to be highly unlikely or impossible, so these analyses may not be using important information. For an approach that uses such information, see Ref. [8].
Finally, it is interesting to note that the use of absolute bounds is essentially QMU, in its original formulation [3, 4]. $F$ is the metric, the deviations are measured with respect to the design point, $m$ is the margin $M$, $m_{\mathrm{abs}}$ is the uncertainty $U$, and $M/U$, which should be greater than one, is the confidence ratio. (For simplicity, I again assume symmetry for positive and negative deviations, but these conditions can trivially be made asymmetric.) The condition that $M/U > 1$ is identical to our condition that the margin be greater than the bound on the fluctuations. Formally, QMU provides a POF of zero. Thus, our result can be stated concisely as follows: for small $N_{\mathrm{eff}}$ and $\epsilon$, QMU provides a POF of zero with smaller margins than those required by the McDiarmid bounds for a POF of $\epsilon$. QMU was designed for extremely reliable systems, as a way of certifying that if the margins of key quantities were sufficiently large, the system could not possibly fail. In such systems, it may be feasible to use the absolute bound.
This work grew out of an attempt to understand simulation results of François Hemez and Christopher Stull. I thank them for bringing this interesting problem to my attention.
References

[1] M. Adams, A. Lashgari, B. Li, M. McKerns, J. Mihaly, M. Ortiz, H. Owhadi, A. J. Rosakis, M. Stalzer, and T. J. Sullivan, Rigorous model-based uncertainty quantification with application to terminal ballistics—Part II: Systems with uncontrollable inputs and large scatter, Journal of the Mechanics and Physics of Solids, 60 (2012), pp. 1002–1019.
[2] John F. Ahearne, Evaluation of Quantification of Margins and Uncertainties Methodology for Assessing and Certifying the Reliability of the Nuclear Stockpile, National Academy Press, ISBN-10: 0-309-12853-6, (2008), pp. 1–93.
[3] D. Eardley, Quantification of margins and uncertainties (QMU), JASON Study draft report, JSR-04-330, (2005).
[4] Bruce T. Goodwin and Raymond J. Juzaitis, National Certification Methodology for the Nuclear Weapon Stockpile, UCRL-TR-223486, (2006).
[5] A. Kidane, A. Lashgari, B. Li, M. McKerns, M. Ortiz, H. Owhadi, G. Ravichandran, M. Stalzer, and T. J. Sullivan, Rigorous model-based uncertainty quantification with application to terminal ballistics—Part I: Systems with controllable inputs and small scatter, Journal of the Mechanics and Physics of Solids, 60 (2012), pp. 983–1001.
[6] L. J. Lucas, H. Owhadi, and M. Ortiz, Rigorous verification, validation, uncertainty quantification and certification through concentration-of-measure inequalities, Computer Methods in Applied Mechanics and Engineering, 197 (2008), pp. 4591–4609.
[7] Colin McDiarmid, On the method of bounded differences, in Surveys in Combinatorics, London Math. Soc. Lecture Note Ser. 141, Cambridge University Press, 1989, pp. 148–188.
[8] H. Owhadi, C. Scovel, T. J. Sullivan, M. McKerns, and M. Ortiz, Optimal Uncertainty Quantification, SIAM Review, 55 (2013), pp. 271–345.
[9] U. Topcu, L. J. Lucas, H. Owhadi, and M. Ortiz, Rigorous uncertainty quantification without integral testing, Reliability Engineering and System Safety, (2011).
[10] Timothy C. Wallstrom, Quantification of margins and uncertainties: A probabilistic framework, Reliability Engineering and System Safety, 96 (2011), pp. 1053–1062.