# Optimizing Majority Voting Based Systems Under a Resource Constraint for Multiclass Problems

Attila Tiba, András Hajdu, György Terdik, Henrietta Tomán
Faculty of Informatics, University of Debrecen, Hungary
E-mail: {tiba.attila, hajdu.andras, terdik.gyorgy, toman.henrietta}@inf.unideb.hu
###### Abstract

Ensemble-based approaches are very effective in various fields at raising accuracy above that of their individual members when some voting rule is applied to aggregate the individual decisions. In this paper, we investigate how to find and characterize the ensembles having the highest accuracy when the total cost of the ensemble members is bounded. If majority voting is chosen for aggregation, this question leads to a Knapsack problem with a non-linear and non-separable objective function in both binary and multiclass classification. As conventional solution methods cannot be applied to this task, a novel stochastic approach was introduced in the binary case, where the energy function is treated as the joint probability function of the member accuracies. We present theoretical results on the expected ensemble accuracy and its variance in the multiclass classification problem which can help to solve this Knapsack problem.

## 1 Introduction

Ensemble creation is a rather popular and effective method for outperforming the decision accuracy of individual approaches in several problems [5]. To aggregate the individual decisions of the members of the ensemble, the final decision is made by applying a voting rule, such as classic or weighted majority voting.

In a binary classification problem, each member of the ensemble makes a true or false decision. This means that the classifier $D_i$ with accuracy $p_i$ ($0 \le p_i \le 1$, $i = 1,\dots,n$) can be considered as a Bernoulli distributed random variable $\eta_i$, where the probability of correct classification by $D_i$ is $P(\eta_i = 1) = p_i$. In this particular (Bernoulli distributed) case, the expected value of the $i$-th random variable is $E(\eta_i) = p_i$.

In majority voting, the alternative selected as the final decision is the one having a majority in the ensemble (more than half of the votes). In this case, the ensemble accuracy for independent binary classifiers [4] can be calculated as:

$$q_{binary} = \sum_{k=\lceil n/2 \rceil}^{n} \left( \sum_{\substack{I \subseteq \{1,\dots,n\} \\ |I| = k}} \; \prod_{i \in I} p_i \prod_{j \in \{1,\dots,n\} \setminus I} (1 - p_j) \right). \qquad (1)$$
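Formula (1) can be evaluated directly by enumerating the subsets of correctly voting members. The following sketch does this for a small ensemble; the accuracy values are hypothetical and chosen only for illustration.

```python
from itertools import combinations
from math import ceil

def q_binary(p):
    """Ensemble accuracy of n independent binary classifiers under
    majority voting, evaluating formula (1) by subset enumeration."""
    n = len(p)
    total = 0.0
    for k in range(ceil(n / 2), n + 1):  # at least ceil(n/2) correct votes
        for I in combinations(range(n), k):
            correct = set(I)
            prob = 1.0
            for i in range(n):
                prob *= p[i] if i in correct else (1 - p[i])
            total += prob
    return total

# Three classifiers with individual accuracy 0.7: the ensemble does better.
print(round(q_binary([0.7, 0.7, 0.7]), 4))  # 0.784
```

Note the exponential cost of the enumeration, which already hints at why the optimization problems built on (1) and (2) are hard.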

In [2], the majority voting rule was extended to the spatial domain in a special object detection problem: finding the optic disc (OD) in retinal images. The votes of the ensemble members (OD detectors) are given as single pixels marking the centroid of the disc-like anatomical feature OD. The votes are required to fall inside a disc of a given diameter to vote together. To aggregate the outputs of the individual OD detectors, the final decision is made by choosing the circle fulfilling the geometric constraint and containing the maximal number of votes. To find the ensemble accuracy in this case, the term $p_{n,k}$ is introduced for the modified majority voting of the classifiers $D_1,\dots,D_n$: if $k$ classifiers out of the $n$ ones give a correct vote, then the good decision is made with probability $p_{n,k}$. With this notation, the ensemble accuracy (1) is transformed by the geometric restriction into the following formula:

$$q_{multi} = \sum_{k=0}^{n} p_{n,k} \left( \sum_{\substack{I \subseteq \{1,\dots,n\} \\ |I| = k}} \; \prod_{i \in I} p_i \prod_{j \in \{1,\dots,n\} \setminus I} (1 - p_j) \right). \qquad (2)$$

For the given real numbers $p_{n,k}$ in (2), we have that $0 \le p_{n,k} \le 1$.

As a special case, we get back the classical majority voting scheme if the terms $p_{n,k}$ are chosen in the following way: $p_{n,k} = 1$ if $k \ge \lceil n/2 \rceil$, and $p_{n,k} = 0$ otherwise.
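A minimal sketch of evaluating (2), with the $p_{n,k}$ terms passed in as a list; the classical choice of $p_{n,k}$ described above is used as a sanity check. The accuracy values are hypothetical.

```python
from itertools import combinations
from math import ceil

def q_multi(p, p_nk):
    """Ensemble accuracy under the modified majority voting of formula (2);
    p_nk[k] is the probability of a correct final decision when exactly k
    of the n members vote correctly."""
    n = len(p)
    total = 0.0
    for k in range(n + 1):
        s = 0.0
        for I in combinations(range(n), k):
            correct = set(I)
            prob = 1.0
            for i in range(n):
                prob *= p[i] if i in correct else (1 - p[i])
            s += prob
        total += p_nk[k] * s
    return total

# Choosing p_{n,k} = 1 for k >= ceil(n/2) and 0 otherwise recovers (1).
p = [0.8, 0.7, 0.6]
p_nk = [1.0 if k >= ceil(len(p) / 2) else 0.0 for k in range(len(p) + 1)]
print(round(q_multi(p, p_nk), 4))  # 0.788
```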

In the above spatial extension of the majority voting rule, the final decision is made by choosing from the candidates (circles) with respect to their cardinalities. The majority voting rule can be extended for a multiclass classification problem in a very similar way.

High accuracy is a very important and natural requirement for an ensemble system, especially in clinical decision making. Besides high accuracy, other performance parameters need to be considered as well; one of them is the execution time. Ensemble creation is more resource demanding, because all the ensemble members have to be executed to make the final decision. In this paper, we solve the problem of finding the ensemble with the highest accuracy among the given possible ensemble members under a constraint on the total execution time. These optimization problems, with the ensemble accuracy in (1) or (2) chosen as the energy function, are very challenging, as both result in a non-linear, non-separable task. This means that the classical solution methods, e.g. dynamic programming, cannot be applied to find the optimal solution. A Knapsack problem is formulated to handle the constraint on the total execution time. We give some theoretical results for the multiclass classification problem which can help to solve this Knapsack problem.

The rest of the paper is organized as follows. In section 2, the formulation of the above optimization problem as a Knapsack problem is given. After discussing the multiclass classification problem in contrast with the binary one in section 3, some theoretical and experimental results for the multiclass classification problem are presented in section 4.

## 2 The Knapsack problem with total time constraint

As a first step, the classic Knapsack problem is presented; then we formulate our ensemble creation issue and discuss why finding the solution is so difficult when the energy function of the Knapsack problem is selected as in (2).

To formulate the classic Knapsack problem, let $n$ items be given, with values $v_i$ and weights $w_i$ ($v_i, w_i > 0$, $i = 1,\dots,n$), respectively. Then let $x_i$ ($x_i \ge 0$, $i = 1,\dots,n$) be the number of copies of the $i$-th item to be packed. The maximal total weight of the knapsack is $W$ ($W > 0$). The aim is to find the maximal value of the target function $\sum_{i=1}^{n} v_i x_i$ fulfilling the condition $\sum_{i=1}^{n} w_i x_i \le W$.
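For contrast with the non-separable problem studied later, the classic formulation above (with integer weights) can be solved by standard dynamic programming, precisely because its objective is linear and separable. A minimal sketch with made-up values and weights:

```python
def knapsack_max_value(values, weights, W):
    """Classic unbounded Knapsack with integer weights, solved by dynamic
    programming: dp[w] is the best total value achievable with capacity w."""
    dp = [0] * (W + 1)
    for w in range(1, W + 1):
        for v, wt in zip(values, weights):
            if wt <= w:
                dp[w] = max(dp[w], dp[w - wt] + v)
    return dp[W]

print(knapsack_max_value([60, 100, 120], [1, 2, 3], 5))  # 300
```

This recursion relies on the value of a partial packing being independent of which items complete it, which is exactly what fails for the ensemble accuracy objective (2).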

With respect to the properties of the objective function arising in different kinds of applications, many variations of the original Knapsack problem have been considered: linear/non-linear, separable/non-separable, convex/non-convex objective functions with continuous/integer variables. Although some non-linear Knapsack problems are investigated in the literature [1], [6], the vast majority of works deal with Knapsack problems having a linear or a separable convex non-linear objective function and a linear constraint.

In the ensemble creation presented above, motivated by the object detection problem, each possible ensemble member is an object detector. In the Knapsack problem, the individual accuracy $p_i$ of the $i$-th detector is considered as the value $v_i$, while its individual running time $t_i$ is the weight $w_i$; for the aggregation, a constrained majority voting is applied, that is, the ensemble accuracy given in (2) is the objective function. The problem is to find the most accurate ensemble with system accuracy $q_T$ from these members with limited total execution time $T$:

$$q_T = \max_{\{i_1,\dots,i_s\}} \left\{ \sum_{k=0}^{s} p_{s,k} \left( \sum_{\substack{I \subseteq \{i_1,\dots,i_s\} \\ |I| = k}} \; \prod_{i \in I} p_i \prod_{j \in \{i_1,\dots,i_s\} \setminus I} (1 - p_j) \right) \right\} \qquad (3)$$

with the following conditions:

$$\sum_{j=1}^{s} t_{i_j} \le T, \qquad \{i_1,\dots,i_s\} \subseteq \{1,\dots,n\} \quad (s = 1,\dots,n). \qquad (4)$$
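For a very small pool of detectors, problem (3)–(4) can still be solved by exhaustive search, which makes the structure of the task concrete. The accuracies, running times, and time budget below are hypothetical, and classical strict majority voting stands in for the $p_{s,k}$ terms.

```python
from itertools import combinations

def best_ensemble(p, t, T, p_sk):
    """Exhaustive search for the most accurate ensemble under the total
    execution-time budget T, i.e. problem (3)-(4). p_sk(s, k) gives the
    probability of a correct decision when k of the s members are right.
    Feasible only for small n; larger pools need a stochastic search."""
    n = len(p)
    best_q, best_set = 0.0, ()
    for s in range(1, n + 1):
        for members in combinations(range(n), s):
            if sum(t[i] for i in members) > T:  # condition (4)
                continue
            q = 0.0
            for k in range(s + 1):
                for I in combinations(members, k):
                    chosen = set(I)
                    prob = 1.0
                    for i in members:
                        prob *= p[i] if i in chosen else (1 - p[i])
                    q += p_sk(s, k) * prob
            if q > best_q:
                best_q, best_set = q, members
    return best_q, best_set

maj = lambda s, k: 1.0 if 2 * k > s else 0.0  # classical strict majority
q, members = best_ensemble([0.9, 0.8, 0.75, 0.7], [4, 2, 2, 1], 5, maj)
print(members, round(q, 4))
```

In this toy instance the single strongest detector beats every ensemble that fits into the time budget, illustrating that the optimal selection is not obvious in advance.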

The main challenge in solving this optimization problem is that the target function of the constrained majority voting is non-linear and non-separable. In general, Knapsack problems with this special kind of objective function are investigated very rarely in the related literature, or only when a strict restriction on their functional structure is given (e.g., an exponential type of target function is analyzed in [6]). That is, for a proper analysis we need some theoretical results for the optimization of the specific target function (2) within the Knapsack framework.

## 3 The multiclass classification problem

In binary classification, the elements of a given set are classified into two classes (predicting which class each element belongs to). As a first step, a Knapsack problem is investigated for ensemble creation with binary classifiers as possible members of the ensemble, whose outputs are aggregated by applying the majority voting rule. That is, in this Knapsack problem, the objective function given in (1) is maximized while the total execution time of the selected members is bounded (see the condition in (4)).

In our stochastic approach proposed in [3], the selection of items for the ensemble is based on the efficiency of the individual members. Instead of the usefulness values considered in the classic greedy method, the system accuracy of the ensemble containing the maximal number of $i$-th items characterizes the efficiency of the $i$-th kind of item.

In our selection method, a discrete random variable depending on the efficiency values of the remaining items is applied in each step to determine the probability of choosing an item from the remaining set to add to the ensemble. This discrete random variable reflects that the more efficient an item is, the more likely it is to be selected for the ensemble in the next step.
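One step of such a selection can be sketched as sampling an item with probability proportional to its efficiency. This is a simplified illustration: the detector names and efficiency values are hypothetical, and the paper's exact distribution may differ from plain proportional sampling.

```python
import random

def pick_next(efficiencies, rng=random):
    """One step of an efficiency-proportional stochastic selection: the more
    efficient a remaining item, the more likely it is chosen next."""
    total = sum(efficiencies.values())
    r = rng.uniform(0, total)
    acc = 0.0
    for item, e in efficiencies.items():
        acc += e
        if r <= acc:
            return item
    return item  # guard against floating-point edge cases

random.seed(0)
print(pick_next({"det_A": 0.9, "det_B": 0.6, "det_C": 0.3}))
```

Repeating this step, with the chosen item removed from the pool, builds up the ensemble until a stopping criterion fires.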

To find and apply proper stopping criteria for this selection method, the behavior of the random variable $q_{binary}$, i.e. the joint distribution function based on the values $p_i$ in (1), is investigated. Whether the distribution of the $p_i$ values is known or is fitted by a Beta distribution, knowledge of the behavior of the energy function (e.g. the expected ensemble accuracy, the probability of finding more accurate ensembles) can be efficiently incorporated as a stopping rule in the stochastic search.

Multiclass classification can be interpreted in a similar way as the binary one, except that the prediction of the class each element belongs to is made over three or more classes [7]. We encounter similar difficulties in finding the optimal solution in (3) of the multiclass Knapsack problem as in the binary case, but, besides estimating the behavior of the energy function $q_{multi}$, the terms $p_{n,k}$ need to be investigated as well. It is reasonable to assume that the more classifiers out of the $n$ ones give a correct vote, the bigger the probability of a good decision for the ensemble. Therefore, in the next section, the terms $p_{n,k}$ are considered as values of a function $F$ such that $p_{n,k} = F(k/n)$, where $F$ is a cumulative distribution function on $[0,1]$.

## 4 Stochastic estimation of ensemble accuracy

We have the following theorem showing the behavior of the random variable $q_{multi}$ (i.e. the expected ensemble accuracy and the variance), based on the random values of the $p_i$.

###### Theorem 4.1

Let $\eta$ be a random variable with $E(\eta) = \mu$, $Var(\eta) = \sigma^2$, and let $p_1,\dots,p_n$ be independent and identically distributed according to $\eta$. Furthermore, let the energy function $q_{multi}$ be defined by (2) with $p_{n,k} = F(k/n)$. Then for the expected ensemble accuracy we have

$$E(q_{multi}) = \sum_{k=0}^{n} F\!\left(\frac{k}{n}\right) \binom{n}{k} \mu^k (1-\mu)^{n-k}. \qquad (5)$$

Furthermore, if $n$ is large, then

$$\sum_{k=0}^{n} F\!\left(\frac{k}{n}\right) \binom{n}{k} \mu^k (1-\mu)^{n-k} \sim \int_0^1 F(y)\, \delta_{\mu}(y)\, dy = F(\mu), \qquad (6)$$

where $\delta_{\mu}$ is the Dirac function concentrated at $\mu$.

For large $n$, we have for the variance of the ensemble accuracy

$$0 \le Var(q_{multi}) \le F(\mu) - F^2(\mu) = F(\mu)(1 - F(\mu)). \qquad (7)$$

For practical purposes, the following examples of the function $F$ are important:

the Arcsine law (distributed as Beta$(1/2, 1/2)$) with cumulative distribution function

$$F(y) = \frac{2}{\pi} \arcsin(\sqrt{y}), \qquad y \in [0,1], \qquad (8)$$

and the Generalized Arcsine law (distributed as Beta$(\alpha, 1-\alpha)$); if the distribution of the $p_i$ values is not known, then a Beta distribution is fitted to them.
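The convergence stated in (6) can be checked numerically for the arcsine law: evaluating the finite sum (5) for growing $n$ and a hypothetical $\mu$ should approach $F(\mu)$.

```python
from math import comb, asin, sqrt, pi

def F_arcsine(y):
    """CDF of the arcsine (Beta(1/2, 1/2)) law on [0, 1], as in (8)."""
    return 2 / pi * asin(sqrt(y))

def expected_q_multi(n, mu, F):
    """Expected ensemble accuracy from formula (5)."""
    return sum(F(k / n) * comb(n, k) * mu**k * (1 - mu) ** (n - k)
               for k in range(n + 1))

# As n grows, E(q_multi) approaches F(mu), in line with (6).
mu = 0.7
for n in (5, 50, 500):
    print(n, round(expected_q_multi(n, mu, F), 4) if False else round(expected_q_multi(n, mu, F_arcsine), 4))
print("F(mu) =", round(F_arcsine(mu), 4))
```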

From the results of Theorem 4.1 on the expected value and the variance of the ensemble accuracy, the decision in the multiclass case for relatively large $n$ can be considered Bernoulli distributed with parameter $F(\mu)$.

While the binary classification problem is closely related to the binomial distribution, in multiclass classification the multinomial coefficients are supposed to play a very important role in finding a formula for the values of $p_{n,k}$. As a first step, we simulated the multiclass classification problem for several numbers $d$ of classes by generating random numbers in $[0,1]$ to decide which class is chosen. From the results of the simulations, we obtained approximate values for the terms $p_{n,k}$. In the next step, we give a closed formula for the values $p_{n,k}(d)$ as well.
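A simulation in the spirit described above can be sketched as follows. The setup is an assumption on our part: $k$ of the $n$ votes go to the correct class, the remaining $n-k$ are cast uniformly at random among the $d-1$ wrong classes, and the final decision is correct when the correct class gets a plurality (ties broken uniformly); the paper's exact tie-breaking rule may differ.

```python
import random
from collections import Counter

def simulate_pnk(n, k, d, trials=20000, rng=random):
    """Monte Carlo estimate of p_{n,k}(d) under the assumed voting model:
    k correct votes, n-k votes spread uniformly over the d-1 wrong classes,
    correct decision on plurality with uniform tie-breaking."""
    wins = 0.0
    for _ in range(trials):
        wrong = Counter(rng.randrange(d - 1) for _ in range(n - k))
        top_wrong = max(wrong.values(), default=0)
        if k > top_wrong:
            wins += 1
        elif k == top_wrong and k > 0:
            # correct class ties the best wrong class(es)
            tied = 1 + sum(1 for c in wrong.values() if c == top_wrong)
            wins += 1 / tied
    return wins / trials

random.seed(1)
print(round(simulate_pnk(n=9, k=4, d=3), 3))
```

Estimates of this kind can then be compared against the closed formula (9).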

Let the multinomial coefficients $b_{n-k,d}(\underline{x})$ be given for the configurations $\underline{x}$ of the $n-k$ incorrect votes among the classes, with $\alpha_k(\underline{x})$ denoting the probability of a correct final decision for the configuration $\underline{x}$. Then for the terms of accuracy in this case, we have the following formula:

$$p_{n,k}(d) = \frac{1}{d^{\,n-k}} \sum_{0 \le \underline{x} \le k} b_{n-k,d}(\underline{x})\, \alpha_k(\underline{x}). \qquad (9)$$

Applying this formula, we get the same results for the values of $p_{n,k}(d)$ as obtained before with the simulations.

The closed formula for the values of $p_{n,k}(d)$ guarantees that, besides the experimental results (e.g. simulations), further theoretical investigation and characterization of the optimal solution of the Knapsack problem in multiclass classification can be achieved, which is our future plan.

###### Acknowledgements.
This work is supported in part by the project EFOP-3.6.2-16-2017-00015 supported by the European Union, co-financed by the European Social Fund.

## Bibliography

1. Bretthauer, K.M., Shetty, B.: The nonlinear knapsack problem – algorithms and applications. European Journal of Operational Research 138, 459–472 (2002)
2. Hajdu, A., Hajdu, L., Jónás, A., Kovács, L., Tomán, H.: Generalizing the majority voting scheme to spatially constrained voting. IEEE Transactions on Image Processing 22, 4182–4194 (2013)
3. Hajdu, A., Tomán, H., Kovács, L., Hajdu, L.: Composing ensembles by a stochastic approach under execution time constraint. Proc. 23rd International Conference on Pattern Recognition (ICPR), 222–227 (2016)
4. Kuncheva, L.: Combining Pattern Classifiers: Methods and Algorithms. Wiley-Interscience (2004)
5. Lam, L., Suen, S.Y.: Application of majority voting to pattern recognition: an analysis of its behavior and performance. IEEE Transactions on Systems, Man and Cybernetics 27, 553–568 (1997)
6. Sharkey, T.C., Romeijn, H.E., Geunes, J.: A class of nonlinear nonseparable continuous knapsack and multiple-choice knapsack problems. Mathematical Programming 126, 69–96 (2011)
7. Shiraishi, Y., Fukumizu, K.: Statistical approaches to combining binary classifiers for multi-class classification. Neurocomputing 74, 680–688 (2011)