A Note on Posterior Probability Estimation for Classifiers

Georgi Nalbantov (gnalbantov@mdscience.eu)
Svetoslav Ivanov (sivanov@mdscience.eu)
Department of Data Science, Medical Data Science Ltd., Bulgaria
Abstract

One of the central themes in the classification task is the estimation of the class posterior probability at a new point x. The vast majority of classifiers output a score for x, which is monotonically related to the posterior probability via an unknown relationship. There have been many attempts in the literature to estimate this latter relationship. Here, we provide a way to estimate the posterior probability without resorting to classification scores.


September 16, 2019

Keywords: posterior probability, Bayes rule, classification

1 Introduction

The estimation of class posterior probabilities is a central topic in machine learning and statistics. Most classification algorithms do not model the posterior probability of a given class, say "+1", at a point x from a dataset D = {(x_i, y_i)}, y_i in {+1, −1}, but instead output a score that is monotonically related to P(+1 | x). Classifiers that do provide (indirect) estimates of P(+1 | x) are, for example, logistic regression, linear/quadratic discriminant analysis, and naive Bayes. For classifiers that do not provide such an estimate, but instead provide a "score" for the predicted value of a given class at point x, the usual practice is to estimate the relationship between the score and the posterior probability. For instance, for support vector machines it is common to use the method proposed by Platt (1999), which is based on negative log-likelihood estimation. More generally, the conformal prediction approach has been proposed, which estimates a "strangeness" (nonconformity) value from which the posterior probability is eventually derived (Shafer and Vovk, 2008; Vovk, 2014), as well as estimation based on isotonic regression (Zadrozny and Elkan, 2002) and Venn-Abers predictors (Ayer et al., 1955; Vovk and Petej, 2014).

The purpose of this paper is to propose a method for estimating the posterior probability based on iteratively re-building a given classifier, where each time the ratio of negative to positive observations is varied. For shrinkage/penalization methods, we keep the (effective) total number of points in the dataset fixed, represented by the total sum of the instance weights. This is a direct approach to computing the class membership probability, which does not involve the use of a score at the point x, as is commonly done. The paper is organized as follows: Section 2 describes the proposed approach to computing the posterior class probability for any classification algorithm; Section 3 provides results; Section 4 is devoted to discussion and proposed extensions; and Section 5 concludes.

2 A general approach to computing posterior class probabilities

Consider the binary classification task of predicting the class y in {+1, −1} of a point x given a dataset D = {(x_i, y_i)}, i = 1, ..., n. The Bayes rule for computing the posterior probability for class "+1" at point x is:

\[
P(+1 \mid x) = \frac{P(x \mid +1)\,P(+1)}{P(x \mid +1)\,P(+1) + P(x \mid -1)\,P(-1)},
\]

which in general for continuous x is:

\[
P(+1 \mid x) = \frac{p(x \mid +1)\,P(+1)}{p(x \mid +1)\,P(+1) + p(x \mid -1)\,P(-1)},
\]

where p(x | +1) denotes the value of the probability density function for the "+1" class at point x. It follows that

\[
\frac{p(x \mid +1)}{p(x \mid -1)} = \frac{P(+1 \mid x)}{1 - P(+1 \mid x)} \cdot \frac{P(-1)}{P(+1)}. \qquad (1)
\]

In general, P(+1) and P(−1), the proportions of "+1" and "−1" points in the population, are estimated as the proportions of the "+1" and "−1" points in the training dataset. The estimation of p(x | +1) and p(x | −1) is not straightforward, however. A central idea we employ is that along the separation surface between the classes we do know that P(+1 | x) = P(−1 | x) = 0.5, from which it follows that

\[
\frac{p(x \mid +1)}{p(x \mid -1)} = \frac{P(-1)}{P(+1)} \qquad (2)
\]

for all points x along the class separation surface (by definition). (The logic applies also to implicit classifiers, which do not provide a functional form for the separation surface, as the only requirement here is to be able to detect that a point x lies on that surface, i.e., that P(+1 | x) = P(−1 | x).) If we alter the (effective) number of points from the two classes so that the new proportions are P_new(+1) and P_new(−1), we can recompute the posterior probability at the same point x:

\[
P_{new}(+1 \mid x) = \frac{p(x \mid +1)\,P_{new}(+1)}{p(x \mid +1)\,P_{new}(+1) + p(x \mid -1)\,P_{new}(-1)} = \frac{P(-1)\,P_{new}(+1)}{P(-1)\,P_{new}(+1) + P(+1)\,P_{new}(-1)}, \qquad (3)
\]

which will not in general be equal to 0.5. There exist points x', however, which form the separation surface in the case when the proportions of "+1" and "−1" points in the dataset are P_new(+1) and P_new(−1), respectively, and for which it therefore holds that

\[
\frac{p(x' \mid +1)}{p(x' \mid -1)} = \frac{P_{new}(-1)}{P_{new}(+1)}. \qquad (4)
\]

Effectively, we have computed in Eq. (3) the posterior probability at point x for a model built on a dataset in which the proportions of "+1" and "−1" points are P_new(+1) and P_new(−1), respectively. The same reasoning can be run with the roles of the two datasets exchanged. Treat the dataset we have actually been provided (referred to from here on as the "new dataset") as the one with proportions P(+1) and P(−1), and let P_new(+1) and P_new(−1) be the proportions of a re-weighted dataset whose separation surface passes through a point x' that does not lie on the separation surface of the "new dataset". Then the posterior probability at x' with respect to the model built on the "new dataset" is found from Eq. (3) and Eq. (4) as

\[
P(+1 \mid x') = \frac{p(x' \mid +1)\,P(+1)}{p(x' \mid +1)\,P(+1) + p(x' \mid -1)\,P(-1)} = \frac{P_{new}(-1)\,P(+1)}{P_{new}(-1)\,P(+1) + P_{new}(+1)\,P(-1)}, \qquad (5)
\]

where P(+1), P(−1), P_new(+1), and P_new(−1) are estimated as the observed proportions in the data. The interpretation is as follows: given the original data (the "new dataset"), if we would like to compute P(+1 | x') with respect to the separation surface for the "new dataset", we have to change the relative weights of the "+1" and "−1" points from the "new dataset" in such a way that the resulting proportions of "+1" and "−1" points, which approximate P_new(+1) and P_new(−1), ensure that the separation surface pertaining to this "changed dataset" (provided by the classification algorithm at hand) goes through the point x'. Once this is ensured, we compute p(x' | +1)/p(x' | −1) as P_new(−1)/P_new(+1) and plug it into the Bayes formula for P(+1 | x') in Eq. (5). In general, we do have to search over the values of P_new(+1) and P_new(−1) that are needed, but they can be approximated arbitrarily well. In case the classification method in question is a shrinkage/penalization method, such as support vector machines or the LASSO, it is appropriate to keep the total (effective) number of points fixed when re-building the classification model with varying P_new(+1) and P_new(−1), to avoid estimation bias. The steps for computing the posterior probability are summarized in Figure 1 and run as follows:

Figure 1: Steps for computing the posterior probability at a point x for a given classifier

Step 1 (Figure 1). Consider the original training data (referred to as the "new dataset") with a (possibly implicit) separation surface (the solid curve) pertaining to a model built using a given classification algorithm.

Step 2 (Figure 1). Change the relative weights of the classes so that their effective proportions become P_new(+1) and P_new(−1), respectively, keeping the effective total number of points constant, until the re-computed model places the point x on its separation surface, implying P_new(+1 | x) = P_new(−1 | x) = 0.5.

Step 3 (Figure 1). Estimate P(+1 | x) for the model built in Step 1 as in Eq. (5), where P(+1), P(−1), P_new(+1), and P_new(−1) are estimated directly from the data as the respective proportions.
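For concreteness, a small numerical illustration of Eq. (5); the proportions below are made up and not taken from the experiments that follow. If the provided data are balanced, P(+1) = P(−1) = 0.5, and the separation surface passes through x only after the classes are re-weighted to P_new(+1) = 0.8 and P_new(−1) = 0.2, then

\[
P(+1 \mid x) = \frac{0.2 \cdot 0.5}{0.2 \cdot 0.5 + 0.8 \cdot 0.5} = \frac{0.1}{0.5} = 0.2,
\]

reflecting the fact that the "+1" class had to be up-weighted considerably before its decision region reached x.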

3 Experimental results on a 2D toy dataset

We illustrate the approach for estimating posterior probabilities for classifiers on a two-dimensional, two-class toy dataset. Each class in the dataset was generated by drawing 1000 random samples from a Gaussian distribution; the two class distributions have different means and the same covariance matrix. The classifiers used in this example are the linear support vector machine (SVM), logistic regression, and decision trees.

The linear SVM classifier's C parameter was fixed at 1. After the SVM was run, all points which are not support vectors were removed. In this way we ensure that the estimation of the iso-probability curves is carried out on the points (the support vectors) that were used to create the decision surface. The resulting iso-probability curves for levels 0.05 - 0.95 (with step 0.05) are plotted in Figure 2. The total number of points (represented as the total sum of weights) was kept fixed in the computation of these curves. We note that different values of the C parameter will produce different sets of iso-probability curves, as this parameter influences the flatness of the SVM separation surface. Thus, each value of C effectively produces a different classifier, which has its own intrinsic probability estimate for a given point.
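A minimal sketch of how such an estimate can be computed for the linear SVM is given below. It is not the authors' code: it assumes scikit-learn, makes up the toy data (the exact means and covariance matrix are not stated in the text), uses a simple bisection search over the class re-weighting of Step 2, and omits the support-vector filtering step for brevity.

```python
# Sketch: posterior probability P(+1 | x0) for a linear SVM via class re-weighting
# (Steps 1-3 above). Assumes scikit-learn; the toy data (means, covariance), the
# search tolerance and the test point are illustrative choices, not from the paper.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_per_class = 1000
X = np.vstack([rng.multivariate_normal([-1.0, -1.0], np.eye(2), n_per_class),  # class -1
               rng.multivariate_normal([+1.0, +1.0], np.eye(2), n_per_class)]) # class +1
y = np.hstack([-np.ones(n_per_class, dtype=int), np.ones(n_per_class, dtype=int)])

n = len(y)
n_pos, n_neg = int(np.sum(y == 1)), int(np.sum(y == -1))
p_pos, p_neg = n_pos / n, n_neg / n            # P(+1), P(-1): observed proportions

def fit_side(x0, w_pos):
    """Refit the SVM with class weights (w_pos, w_neg) chosen so that the total
    effective number of points stays equal to n; return the side of the decision
    surface on which x0 falls and the effective proportion Pnew(+1)."""
    w_neg = (n - w_pos * n_pos) / n_neg         # keeps w_pos*n_pos + w_neg*n_neg = n
    clf = SVC(kernel="linear", C=1.0, class_weight={1: w_pos, -1: w_neg}).fit(X, y)
    return np.sign(clf.decision_function([x0])[0]), w_pos * n_pos / n

def posterior_pos(x0, tol=1e-2):
    """Bisection over w_pos until the surface passes through x0, then Eq. (5)."""
    lo, hi = 1e-3, n / n_pos - 1e-3             # range of w_pos keeping w_neg > 0
    s_lo, _ = fit_side(x0, lo)
    s_hi, _ = fit_side(x0, hi)
    if s_lo == s_hi:                            # the surface never reaches x0
        return 1.0 if s_lo > 0 else 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        s_mid, _ = fit_side(x0, mid)
        if s_mid == s_lo:
            lo = mid
        else:
            hi = mid
    _, pnew_pos = fit_side(x0, 0.5 * (lo + hi))
    pnew_neg = 1.0 - pnew_pos
    # Eq. (5): P(+1|x0) = Pnew(-1)P(+1) / (Pnew(-1)P(+1) + Pnew(+1)P(-1))
    return pnew_neg * p_pos / (pnew_neg * p_pos + pnew_pos * p_neg)

print(posterior_pos(np.array([0.5, 0.5])))
```

Iso-probability curves such as those in Figure 2 can then be traced by evaluating this estimate over a grid of points; logistic regression or a decision tree could be substituted for the SVC, as long as one can query on which side of the decision surface a point falls.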

Figure 2: Iso-probability curves for SVM

The results for the (linear) logistic regression classifier are shown in Figure 3. The iso-probability curves for levels 0.05 - 0.95 (with step 0.05) are plotted.

Figure 3: Iso-probability curves for logistic regression

The results for the decision tree classifier are shown in Figure 4. Pruning was applied. The iso-probability surfaces for selected levels (from left to right subfigures: 0.1, 0.7, 0.9, then 0.05, 0.55, and finally 0.25, 0.8, 0.95) are plotted. Notably, the iso-probability surfaces partially coincide at different iso levels, which might be a general property linked to the decision tree estimation process.

For SVM and logistic regression we present the relationship between (raw) scores and estimated class posterior probabilities in Figure 5. The resolution is 0.05 as this is the step used for computing the iso-probability curves. For decision trees it is not possible to present a similar figure, as the classifier does not output prediction scores, but rather predicted class labels.

4 Discussion

We have proposed a general, classifier-independent method for computing the posterior class probability, which only requires that the classifier can detect whether a given point belongs to the separation surface or not (which is trivial for any classifier) and that the estimates of the population proportions are computed as the observed proportions in the training data. We have addressed explicitly the binary classification problem. Because the Bayes formula holds for any class in a multi-class problem, the extension in this respect is straightforward. One choice that was made was to change the relative proportion of the two classes in the data by changing the "weights" of the points from each class (equally for points belonging to the same class), while keeping the total effective number of points fixed, rather than removing points/observations randomly from the classes, for reasons of robustness of estimation. For a number of classifiers, changing the weights of points is trivial and can be done directly in the classifier's estimation/optimization procedure (where applicable). For example, for support vector machines it is sufficient to change the so-called C+ and C- parameters in order to obtain a new effective number of points in the dataset.
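As a sketch of this last point (assuming scikit-learn's SVC, whose class_weight argument sets the per-class penalty to class_weight[i]*C and thus plays the role of the C+ and C- parameters; the data and the target proportion below are made up):

```python
# Sketch: changing the effective class proportions for an SVM while keeping the
# effective total number of points fixed. In scikit-learn's SVC, class_weight
# rescales the per-class penalty (C_i = class_weight[i] * C), which acts as the
# effective per-class number of points. Data and target proportion are made up.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(1.0, 1.0, (50, 2))])
y = np.hstack([-np.ones(50, dtype=int), np.ones(50, dtype=int)])

n, n_pos, n_neg = len(y), 50, 50
pnew_pos = 0.7                        # desired effective proportion of "+1" points
w_pos = pnew_pos * n / n_pos          # so that w_pos * n_pos = pnew_pos * n
w_neg = (1.0 - pnew_pos) * n / n_neg  # total effective size stays equal to n

clf = SVC(kernel="linear", C=1.0, class_weight={1: w_pos, -1: w_neg}).fit(X, y)
```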

Figure 4: Iso-probability curves for decision trees

We note that there is no explicit guarantee that the iso-probability curves produced by a classifier on a given dataset do not cross each other, even if the classifier is linear.

Figure 5: Estimated raw scores vs. estimated class membership probability for SVM (left) and logistic regression (right)

This degeneracy should be explored further, as it implies that one data point can be associated with two different class posterior probability estimates. The degeneracy is illustrated in Figure 6. A classification algorithm may produce two separation surfaces that cross each other: the first one is built using weights of 1.5 on both the positive and the negative instances, while the second one is built using corresponding weights of 1 and 2. At the intersection point the probability of each class is equal to 0.5 in both cases, which leads, via Eq. (2), to two different estimates of the relative class densities at that point. Clearly, this "estimation degeneracy" problem would tend to disappear for classifiers which are consistent with the Bayes rule as the number of points in the data grows large. A possible approach to dealing with this degeneracy would be to impose monotonicity and apply isotonic regression to calibrate the probability estimates. Another candidate approach is to treat the different estimated posterior probabilities as upper and lower bounds of the estimation procedure.
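If one follows the isotonic-regression suggestion, a minimal sketch might look as follows; it assumes scikit-learn's IsotonicRegression, and the score/probability pairs are placeholders rather than outputs of the procedure above.

```python
# Sketch: enforce that the estimated posteriors are non-decreasing in the raw
# classifier score via isotonic regression (placeholder numbers).
import numpy as np
from sklearn.isotonic import IsotonicRegression

scores = np.array([-2.1, -1.3, -0.2, 0.1, 0.9, 1.8])     # raw classifier scores
probs = np.array([0.05, 0.20, 0.45, 0.40, 0.75, 0.95])   # estimates as in Eq. (5)

iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
calibrated = iso.fit_transform(scores, probs)             # monotone in the score
print(calibrated)
```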

In this note we have abstained from comparison with alternative methods for estimation of posterior probability due to the inability to provide a comprehensive comparison in just one manuscript, as the alternative approaches are numerous.

5 Conclusion

Figure 6: Illustration of a degeneracy. A classification algorithm produces two separation surfaces that cross each other when the relative weights of the positive and negative instances are changed.

The usual approach to solving the task of finding the class posterior probability is to choose a test point x and estimate this probability at it. In contrast, we have approached the same task from a different angle: rather than focusing on an individual point x, we focus instead on finding iso-surfaces with constant posterior probability pertaining to a given classification model. The iso-surfaces are obtained by varying the relative proportion (weight) of the observations belonging to the different classes. This approach thus avoids any reference to observations' "scores" from which posterior probabilities would otherwise be estimated. We envisage a further explicit extension of the approach to the multi-class classification task, as well as a possible extension to the regression task.


References

  • M. Ayer, H. D. Brunk, G. M. Ewing, W. T. Reid, and E. Silverman (1955). An empirical distribution function for sampling with incomplete information. Annals of Mathematical Statistics, 26(4), pp. 641-647.
  • J. C. Platt (1999). Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In Advances in Large Margin Classifiers, pp. 61-74.
  • G. Shafer and V. Vovk (2008). A tutorial on conformal prediction. Journal of Machine Learning Research, 9, pp. 371-421.
  • V. Vovk and I. Petej (2014). Venn-Abers predictors. In Proceedings of the Thirtieth Conference on Uncertainty in Artificial Intelligence (UAI'14), pp. 829-838.
  • V. Vovk (2014). The basic conformal prediction framework. In Conformal Prediction for Reliable Machine Learning: Theory, Adaptations and Applications, pp. 1-20.
  • B. Zadrozny and C. Elkan (2002). Transforming classifier scores into accurate multiclass probability estimates. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '02), pp. 694-699.