Bandit Label Inference for Weakly Supervised Learning


Ke Li   Jitendra Malik
Department of Electrical Engineering and Computer Sciences
University of California, Berkeley
Berkeley, CA 94720
United States

The scarcity of data annotated at the desired level of granularity is a recurring issue in many applications. Significant effort has been devoted to developing weakly supervised methods tailored to each individual setting, which are often carefully designed to take advantage of the particular properties of the weak supervision regime, the form of available data and prior knowledge of the task at hand. Unfortunately, it is difficult to adapt these methods to new tasks and/or forms of data, which often require different weak supervision regimes or models. We present a general-purpose method that can solve any weakly supervised learning problem irrespective of the weak supervision regime or the model. The proposed method turns any off-the-shelf strongly supervised classifier into a weakly supervised classifier and allows the user to specify an arbitrary weak supervision regime via a loss function. We apply the method to several different weak supervision regimes and demonstrate competitive results compared to methods specifically engineered for those settings.



1 Introduction

The problem of weakly supervised learning naturally arises in a wide range of domains, including computer vision and natural language processing. Current weakly supervised methods are typically engineered specifically for particular weak supervision regimes, such as multiple instance learning (MIL) [7], learning with label proportions (LLP) [8] or other domain-specific settings [10], and are often carefully designed to take advantage of the form of available data and prior knowledge of the task at hand to maximize the amount of supervisory signal. These approaches usually take the form of extensions of models that are known to work well in the strongly supervised setting and necessitate the design of clever heuristics to ensure the quality of the resulting locally optimal solutions. While they represent sensible approaches, it is difficult to adapt them to new tasks and/or forms of data, which often require different weak supervision regimes or models. As a result, performance gains in one weakly supervised setting do not generally translate to performance gains in other settings.

In this paper, we propose a general-purpose method for weakly supervised learning that is able to solve any weakly supervised learning problem. Unlike existing methods, the proposed method is agnostic to the regime of weak supervision and the choice of the model. It reduces any weakly supervised learning problem to a strongly supervised learning problem and enables the use of any strongly supervised learning algorithm in the weakly supervised setting. It supports any weakly supervised regime by representing it as a user-defined loss function, and thus can serve as a drop-in replacement for any existing weakly supervised algorithm. The proposed method works by first inferring the instance-level labels and then training a classifier on the inferred labels in a fully supervised manner. Labels are inferred in an efficient manner using a combinatorial multi-armed bandit algorithm; for this reason we dub the proposed method Bandit Label Inference as Supervisory Signal, or BLISS for short. We apply the method to various weak supervision regimes and show competitive empirical results compared to methods specifically designed for those settings.

2 Related Work

There is a rich body of work investigating different ways of relaxing the level of supervision required to learn a model. Perhaps the most extensively studied setting is the multiple instance learning (MIL) regime, where the objective is to train a classifier from binary-labelled bags of unlabelled instances, with positive bags known to contain at least one positive instance and negative bags containing no positive instances. Various algorithms have been developed for the MIL regime, including Axis-Parallel Rectangle (APR) [7], Diverse Density [13] and EM-DD [22]. Many other MIL algorithms take the form of extensions of strongly supervised methods, such as the k-nearest neighbour methods Citation-kNN and Bayesian-kNN [17], the support vector machine methods mi-SVM and MI-SVM [1] and the neural network algorithm BP-MIP [21]. The MIL setting arises naturally in a wide range of tasks across various domains, such as drug activity prediction [7], stock selection [13], text categorization [1], object detection [16] and computer-aided medical diagnosis [9]. Various extensions of the MIL regime have also been explored, such as settings where the bag label depends on all instances in the bag [19] or where the bag label is positive only when multiple conditions are simultaneously satisfied [18].

Another notable setting is the learning with label proportions (LLP) regime, where the bag label is the proportion of positive instances in the bag. A variety of methods have been developed, such as approaches based on graphical model formulations [8], k-means based methods [5], support vector machines [15, 20] and estimation of the mean operator [14]. The LLP regime has found applications in fraud detection [15] and video event detection [11].

3 Bandit Label Inference as Supervisory Signal

A weakly supervised learning problem arises when some or all labels of individual training instances are unknown. So, if the labels of the training instances can be inferred, this problem is reduced to the standard strongly supervised learning setting. All weakly supervised regimes fit nicely into this framework, and different ways of leveraging weak labels can be represented as different loss functions on the inferred instance-level labels given the weak labels.

The problem of inferring instance-level labels is challenging, as the number of possible labellings scales exponentially in the number of instances. The labels of different instances are highly dependent, so it is not possible to optimize over the labels of each instance independently. We tackle this problem by formulating the label inference problem as a combinatorial multi-armed bandit (CMAB) problem and leveraging a CMAB algorithm to explore the labelling space efficiently. Our formulation treats the strongly supervised classifier and the loss function induced by the weak supervision regime as a black box, enabling the proposed algorithm to work with any combination of classifier and weak supervision regime.

3.1 Background

The multi-armed bandit (MAB) is a general framework that models sequential decision-making under uncertainty. In its most basic form, there is a finite number of arms, each of which generates a reward from an unknown probability distribution when pulled. Only one arm can be pulled at a time, and the objective is to choose the arm to pull in each round so as to maximize the cumulative expected reward received over multiple rounds. This is often equivalently formulated as minimization of cumulative pseudo-regret, defined as the difference in cumulative expected reward between the optimal and the actual arm selection strategy. Many arm selection strategies have been proposed; one classic strategy is the Upper Confidence Bound (UCB) strategy [2], which computes a probabilistic upper bound on the true mean reward of each arm based on the sample mean and picks the arm with the highest upper bound. The probability with which the upper bound holds decreases over time, leading to a gradual transition from exploration (trying arms that have not been pulled much) to exploitation (pulling the arms that are known to give high rewards). When the reward distribution associated with each arm is in the exponential family, it has been shown that a generalization of the original UCB strategy, KL-UCB [4], is optimal, in the sense that the leading term of the upper bound on cumulative pseudo-regret matches the lower bound [12]. By considering the special case of Bernoulli-distributed rewards, an upper bound on cumulative pseudo-regret can be obtained for any reward distribution with bounded support on [0, 1], but it may not be optimal unless the rewards are Bernoulli. We refer interested readers to [3] for a survey on the topic.
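As a minimal illustration of the exploration–exploitation trade-off described above, the following sketch implements the classic UCB1 rule of [2] on a toy two-armed Bernoulli bandit. Note the sqrt(2 ln t / n) bonus is UCB1's; KL-UCB replaces it with a tighter KL-based bound.

```python
import math
import random

def ucb1_pick(counts, sums, t):
    """Return the arm with the highest upper confidence bound.

    counts[a]: number of pulls of arm a; sums[a]: cumulative reward of arm a;
    t: current round (1-indexed). Unpulled arms get an infinite bound so
    that every arm is tried at least once.
    """
    def bound(a):
        if counts[a] == 0:
            return float("inf")
        return sums[a] / counts[a] + math.sqrt(2.0 * math.log(t) / counts[a])
    return max(range(len(counts)), key=bound)

# Toy Bernoulli bandit: arm 1 has the higher mean, so UCB1 should favour it.
random.seed(0)
means = [0.3, 0.7]
counts, sums = [0, 0], [0.0, 0.0]
for t in range(1, 2001):
    a = ucb1_pick(counts, sums, t)
    counts[a] += 1
    sums[a] += 1.0 if random.random() < means[a] else 0.0
```

After 2,000 rounds the better arm accumulates the vast majority of pulls, while the suboptimal arm is still sampled occasionally, which is exactly the behaviour the pseudo-regret bounds quantify.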

The combinatorial multi-armed bandit (CMAB) extends the classical MAB setting by allowing a set of (simple) arms, known as a super arm, to be pulled simultaneously. The super arms may have some underlying combinatorial structure; so, only some combinations of simple arms may be considered as valid super arms. The rewards of different simple arms in a super arm may be dependent, and the reward of a super arm can be thought of as the sum of the rewards of its constituent simple arms (though this can be generalized). Now, the objective is to choose a super arm to pull in each round to maximize the cumulative expected reward of super arms. Chen et al. [6] proposed an arm selection strategy for this setting called the Combinatorial Upper Confidence Bound (CUCB), which maintains a probabilistic upper bound for the true mean reward of each simple arm and picks the super arm with the highest sum of upper bounds of constituent simple arms.

3.2 Formulation

In our formulation, we associate each instance with a set of simple arms, each of which corresponds to a possible label the instance can take. Assuming each instance has exactly one ground truth label, a super arm is valid if it consists of exactly one simple arm from each instance's set, so each super arm corresponds to a possible labelling of the instances. The reward of a super arm can be viewed as the negative loss on the labelling associated with the super arm. The form of the loss function depends on the weak supervision regime and task-specific prior knowledge, which impose hard and/or soft constraints on the space of possible labellings; if a constraint is violated, loss should be high, or equivalently, reward should be low. This only needs to hold in expectation, since rewards can be stochastic. For example, in the MIL regime, we use a reward that penalizes labellings in which no positive instance appears in a positive bag or a positive instance appears in a negative bag. The rewards of the simple arms are derived from the reward of the super arm, and are typically local versions of the global reward of the super arm that serve as proxies for the marginal effect of assigning a particular label to an instance.
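Because a valid super arm contains exactly one simple arm per instance and its bound is the sum of its constituents' bounds, finding the best valid super arm decomposes into independent per-instance choices. A minimal sketch, where `ucb[i][l]` is a hypothetical table of upper confidence bounds:

```python
def pick_super_arm(ucb):
    """Select the valid super arm (labelling) with the highest total bound.

    ucb[i][l] is the upper confidence bound of the simple arm that assigns
    label l to instance i. Since exactly one label is chosen per instance
    and the bounds add up, the maximizer is the per-instance argmax.
    """
    return [max(range(len(row)), key=row.__getitem__) for row in ucb]

pick_super_arm([[0.2, 0.9], [0.8, 0.1], [0.5, 0.6]])  # → [1, 0, 1]
```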

By plugging in different reward functions, we can obtain algorithms that solve the label inference problem under different weak supervision regimes. Note that there are very few restrictions on the form of the reward function; in fact, almost any bounded loss function (refer to [6] for the precise conditions on the reward function required by the CUCB algorithm) can be turned into a valid reward function.

3.3 Algorithm

We present the proposed label inference algorithm in detail below, which is based on the CUCB algorithm applied to our formulation. Conceptually, in each iteration of the algorithm, we first generate a candidate labelling of the training instances, which corresponds to pulling a super arm in the CMAB problem. Then, given the labelling, we train a classifier in a fully supervised manner and run the classifier on a weakly labelled held-out set. We obtain the predicted labels for each instance in the held-out set and compute the rewards, which are a function of the number of violations of the weak supervision constraints on the held-out set. Using this information, we can update the likelihood of each label, which corresponds to updating the empirical mean rewards of simple arms in the CMAB problem; the likelihood of the labels is then used to generate the candidate labelling in the subsequent iteration. The algorithm terminates after a fixed number of iterations and outputs the labels with the highest likelihood/empirical means. Please refer to Algorithm 1 for a precise statement of the algorithm.

Input: set of possible labels L_i each instance i can take, reward r_{i,l} for assigning label l to instance i, and a strongly supervised classifier C
function InferLabels(D_train, D_heldout, t_max)
      T_{i,l} ← 0 and R_{i,l} ← 0 for all instances i and all possible labels l ∈ L_i
      // T_{i,l} is the number of times the simple arm (i, l) is pulled and R_{i,l} is its cumulative reward over all pulls
      while some simple arm has T_{i,l} = 0 do
          Randomly pick a label assignment ℓ to all instances such that ℓ_i ∈ L_i, favouring unpulled arms
          Train classifier C on D_train labelled with ℓ, evaluate it on D_heldout and get a reward r_{i,ℓ_i} for each instance i
          T_{i,ℓ_i} ← T_{i,ℓ_i} + 1 and R_{i,ℓ_i} ← R_{i,ℓ_i} + r_{i,ℓ_i}
      end while
      for t ← 1 to t_max do
          μ̂_{i,l} ← R_{i,l} / T_{i,l} for all i, l    // μ̂_{i,l} is the empirical mean reward
          U_{i,l} ← μ̂_{i,l} + sqrt(3 ln t / (2 T_{i,l})) for all i, l    // upper confidence bound for each simple arm [6]
          Pick the label assignment ℓ such that ℓ_i = argmax_{l ∈ L_i} U_{i,l} for all i
          Train classifier C on D_train labelled with ℓ, evaluate it on D_heldout and get a reward r_{i,ℓ_i} for each instance i
          T_{i,ℓ_i} ← T_{i,ℓ_i} + 1 and R_{i,ℓ_i} ← R_{i,ℓ_i} + r_{i,ℓ_i}
      end for
      return the label assignment ℓ* such that ℓ*_i = argmax_{l ∈ L_i} μ̂_{i,l} for all i, together with confidence scores c_i equal to the gap between the largest and second-largest μ̂_{i,·}
end function
Algorithm 1 Bandit label inference algorithm
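A self-contained sketch of the loop above, with the classifier-plus-held-out-set step abstracted into a black-box `reward_fn`. The sqrt(3 ln t / (2T)) bonus is the CUCB bound of [6]; the toy reward at the bottom is purely illustrative and stands in for training a real classifier.

```python
import math
import random

def infer_labels(n_instances, n_labels, reward_fn, n_rounds=200):
    """CUCB-style bandit label inference (a simplified sketch of Algorithm 1).

    reward_fn(labelling) returns a reward in [0, 1] for each instance under
    the candidate labelling; in the paper this comes from training a
    classifier on the labelling and checking the weak-supervision
    constraints on a held-out set (treated as a black box here).
    """
    T = [[0] * n_labels for _ in range(n_instances)]    # pull counts
    R = [[0.0] * n_labels for _ in range(n_instances)]  # cumulative rewards

    for t in range(1, n_rounds + 1):
        if t <= n_labels:
            # Initialization: pull every simple arm at least once.
            labelling = [t - 1] * n_instances
        else:
            # CUCB step: per-instance argmax of upper confidence bounds.
            def ucb(i, l):
                if T[i][l] == 0:
                    return float("inf")
                mean = R[i][l] / T[i][l]
                return mean + math.sqrt(3.0 * math.log(t) / (2.0 * T[i][l]))
            labelling = [max(range(n_labels), key=lambda l, i=i: ucb(i, l))
                         for i in range(n_instances)]
        for i, reward in enumerate(reward_fn(labelling)):
            T[i][labelling[i]] += 1
            R[i][labelling[i]] += reward

    # Output the label with the highest empirical mean for each instance.
    return [max(range(n_labels), key=lambda l, i=i: R[i][l] / T[i][l])
            for i in range(n_instances)]

# Toy problem: noisy per-instance rewards reveal the hidden true labels.
random.seed(0)
true_labels = [0, 1, 1, 0, 1]
def noisy_reward(labelling):
    return [1.0 if l == t and random.random() < 0.9 else 0.0
            for l, t in zip(labelling, true_labels)]

inferred = infer_labels(5, 2, noisy_reward)
```

Even though each individual reward is noisy, the empirical means separate over the rounds and the final per-instance argmax recovers the hidden labelling.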

Because the algorithm maintains a likelihood of each label for each instance, we can quantify the uncertainty of each inferred label by computing the difference in likelihood between the predicted label and its nearest competitor, which will be referred to as the confidence score. Using these confidence scores, we can devise a bootstrapping procedure, where the label inference algorithm is repeated multiple times with the most confident labels obtained in earlier passes serving as additional training data for the classifier in later passes. Confidence scores are also useful downstream when the inferred labels are further processed; for example, they can be used as weights when training the final classifier on the inferred labels.
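The confidence score described above is simply the gap between the top two empirical means per instance; a small sketch (the likelihood table below is hypothetical):

```python
def confidence_scores(means):
    """Gap between the best and second-best empirical mean per instance.

    means[i][l] is the empirical mean reward of assigning label l to
    instance i; a large gap means the inferred label had no close rival.
    """
    scores = []
    for row in means:
        top, second = sorted(row, reverse=True)[:2]
        scores.append(top - second)
    return scores

confidence_scores([[0.9, 0.1, 0.3], [0.55, 0.5, 0.2]])  # gaps ≈ 0.6 and 0.05
```

In the bootstrapping procedure, instances whose scores exceed a threshold would be treated as strongly labelled in the next pass; downstream, the scores can weight the final training loss.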

We also extend the algorithm to enable multiple candidate labellings to be tried in parallel. Instead of picking a single super arm in each iteration, we pick a sequence of super arms to pull in parallel. Because the rewards from pulling the other super arms are not known at the time of arm selection, each super arm is obtained by assuming that the rewards from super arms earlier in the sequence equal the empirical mean rewards observed so far. Empirically, we found that the set of super arms picked this way tends to be fairly diverse.

Finally, in order to obtain inferred labels for all weakly labelled instances in a dataset, we use a cross-validation-like scheme: we divide the dataset into k folds and use one fold as the training set and the remaining folds as the held-out set for the label inference algorithm, rotating the training fold so that every instance receives an inferred label.
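A sketch of this rotation (the function name and round-robin fold assignment are illustrative):

```python
def inference_splits(instances, k=5):
    """Yield (train, held_out) splits for the label inference algorithm.

    Each of the k folds serves once as the training set, with the remaining
    k-1 folds forming the held-out set, so every instance is covered by
    exactly one training fold across the splits.
    """
    folds = [instances[i::k] for i in range(k)]  # round-robin fold assignment
    splits = []
    for j in range(k):
        held_out = [x for i, fold in enumerate(folds) if i != j for x in fold]
        splits.append((folds[j], held_out))
    return splits

splits = inference_splits(list(range(10)), k=5)
```

Note this inverts the usual cross-validation proportions: the single fold is the training set, and the larger remainder is held out to score constraint violations.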

3.4 Comparison with Existing Approaches

The proposed method decouples the form of labelling constraints imposed by the weak supervision regime from the inner workings of the model – any strongly supervised classifier can be fed in as a black box and the classifier does not need to know about the form of weak supervision. This flexibility offers several advantages: first, any strongly supervised classifier can be ported to the weakly supervised setting without modification, thereby enabling one to easily take advantage of advances made in the strongly supervised setting. Second, an algorithm that works well in a particular weakly supervised setting can be easily extended to work in a different weakly supervised setting by way of changing the rewards. Consequently, one can incorporate additional side information and/or prior knowledge on the labellings without needing to concern oneself with possible optimization challenges this would introduce. In addition, a weakly supervised algorithm that works in the binary setting can be easily adapted to the multi-class setting by using a multi-class model and changing the labelling space and the rewards accordingly. In contrast, extending existing tightly coupled methods in the manners described above would be non-trivial.

Existing methods typically explore the labelling space in an iterative manner – in each iteration, they attempt to refine the current labelling in a way that reduces the loss. Because these methods explore the labelling space in a greedy manner, they are very sensitive to initialization. In practice, sophisticated task-specific initialization schemes must be developed in order to achieve good performance with these methods.

The proposed method takes a different approach. While searching for the best labelling, it maintains an estimate of the region of labelling space that appears promising, which is parameterized by the upper confidence bounds of each arm. This region covers the entire labelling space initially, shrinks over time and converges to the optimal labelling. By maintaining a region rather than a point, the method avoids missing a good labelling that has not yet been explored. By shrinking this region over time, the method avoids wasting time on exploring obviously incorrect labellings. In bandit terms, the method balances exploration with exploitation, enabling a thorough and efficient exploration of the labelling space. Viewed differently from the iterative perspective, the method does not “commit” to a labelling in any iteration and only treats the loss obtained in each iteration as one noisy signal, which will be combined with the signals obtained in previous iterations to choose the labelling to try in the next iteration. In practice, the method is able to arrive at a good labelling regardless of the initialization.

There is a sensible reason that existing methods are tightly coupled – because strongly supervised models are typically not robust in the presence of significant label noise, extending these to the weakly supervised setting requires learning the model and the labels jointly, so that the model learns to be robust to the modes of noise in the inferred labels. The proposed method is able to decouple label inference and model learning by using the sensitivity of the model to label noise as training signal. In other words, the model’s ability to discriminate the quality of the candidate labelling is used as training signal to improve the accuracy of the labelling. More concretely, if the candidate labelling is poor, then the model will generalize poorly to the held-out set, and so the rewards will be low; consequently, the proposed method will avoid generating similar candidate labellings in future iterations. Furthermore, unlike tightly coupled methods that optimize a loss that indirectly depends on the latent labelling, the proposed method directly optimizes the quality of inferred labels.

4 Experiments

4.1 Binary MIL

We first compare the proposed algorithm with existing approaches in the standard binary MIL regime. We use the datasets from [1], which arise from the drug activity prediction, image classification and text categorization settings and have become the standard benchmarks for evaluating MIL methods. For comparability with the methods introduced in [1], we used a vanilla SVM classifier with an RBF kernel to compute the rewards used by the proposed algorithm.

We choose a reward function that penalizes labellings that cannot be modelled by the classifier or that cause violations of the MIL constraints on the held-out set. More formally, let B(i) denote the bag that contains instance i, y_B the label of bag B, ℓ_i the label assigned to instance i, ŷ_i the label of instance i predicted by the strongly supervised classifier, N_k(i) the set of k-nearest neighbours of instance i in the held-out set in the space of the classifier output, and 𝟙[·] the indicator function. The reward for assigning the label ℓ_i to the instance i takes the form

r_{i,ℓ_i} = (1 − λ) · r_rec(i, ℓ_i) + λ · 𝟙[average recall ≥ τ] · r_prec(i, ℓ_i).

Conceptually, r_rec is the recall component that penalizes positive bags that do not contain a positive instance, r_prec is the precision component that penalizes positive instances that appear in negative bags, τ is the minimum average recall level at which the precision reward starts to apply, and λ balances the precision and recall components of the reward.
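As an illustration, here is a much-simplified bag-level version of such a reward, with a hypothetical weight `lam` standing in for the balance parameter and the k-nearest-neighbour smoothing and recall gate omitted; it is a sketch of the idea, not the paper's exact formula:

```python
def mil_reward(bags, bag_labels, predicted, lam=0.5):
    """Toy MIL constraint reward (a simplification, not the exact formula).

    bags[b] lists instance indices in bag b, bag_labels[b] is the bag's
    binary label, and predicted[i] is the classifier's predicted label for
    held-out instance i. The recall term rewards positive bags containing a
    predicted positive; the precision term penalizes predicted positives
    inside negative bags; lam balances the two.
    """
    recall_terms, precision_terms = [], []
    for members, y in zip(bags, bag_labels):
        n_pos = sum(predicted[i] for i in members)
        if y == 1:
            recall_terms.append(1.0 if n_pos >= 1 else 0.0)
        else:
            precision_terms.append(1.0 - n_pos / len(members))
    recall = sum(recall_terms) / max(len(recall_terms), 1)
    precision = sum(precision_terms) / max(len(precision_terms), 1)
    return (1.0 - lam) * recall + lam * precision
```

A labelling that satisfies both MIL constraints on the held-out set scores 1.0, and each violated constraint pulls the reward toward 0, which is all the bandit layer needs.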

In our experiments, we used fixed values for the reward hyperparameters. In accordance with [1], we perform 10-fold cross-validation and report the mean and standard deviation of the bag-level accuracy over ten runs in Table 1. As shown, the proposed method combined with a vanilla SVM classifier achieves consistently competitive performance compared to methods designed specifically for the MIL regime. In particular, on some of the text categorization datasets, the proposed method outperforms existing methods by a non-negligible margin, suggesting that it is able to avoid the local optima in which existing methods become trapped.

Dataset EM-DD [22] mi-SVM with RBF kernel [1] MI-SVM with RBF kernel [1] BLISS+SVM w/ RBF kernel
Table 1: Bag-level accuracy over ten runs on standard binary MIL datasets

4.2 Multi-Class MIL

We consider a natural extension of the MIL regime to the multi-class setting. Rather than having a single binary label, each bag is now associated with a set of positive labels. A bag is known to contain at least one instance of each label in its label set, and every instance in the bag either carries a label from the label set or is negatively labelled. Note that the binary setting is the special case of this multi-class setting in which the label set of a positive bag is the singleton set consisting of the unique positive label and the label set of a negative bag is the empty set.

The proposed algorithm can be easily adapted to the multi-class setting by using a multi-class classifier and generalizing the reward function. Specifically, the reward function remains the same as in the binary case, with the following redefinitions: the bag label y_B is now the label set of bag B, and the prediction ŷ_i now comes from a multi-class classifier, with a designated label denoting the negative class. We also redefine the neighbourhood N_k(i) to be the set of k-nearest neighbours of instance i along the dimension of the classifier output corresponding to the predicted class of i.

Because instances in the negative class typically have multiple modes and are often not linearly separable from the positive classes as a whole, we further extend the algorithm configuration above by introducing multiple negative labels. Since the mode each negative instance belongs to is unknown, we simply include all these negative labels in the set of possible labels for each instance. Because the reward function penalizes labellings that cannot be modelled by the classifier, if a linear classifier is used, only the negative instances that are linearly separable from the positive classes will be assigned the same negative label and so different negative labels tend to capture different modes. As a result, under this configuration, the algorithm is able to learn the different modes of the negative instances in an unsupervised manner.

We use a modified version of the softmax classifier that does not optimize for discrimination between different negative classes, which will be referred to as the cooperative softmax classifier. Let 𝒢 be a disjoint and exhaustive grouping of the classes such that classes within a set do not compete with each other. The objective is the softmax log-likelihood, except that the normalizer for the true class y excludes the scores of the other classes in y's group:

p(y | x) = exp(s_y) / ( exp(s_y) + Σ_{c ∉ G(y)} exp(s_c) ),

where s_c is the score the classifier assigns to class c and G(y) ∈ 𝒢 is the set containing y. In our case, 𝒢 consists of one set containing all the negative classes and multiple singleton sets, each containing one positive class.
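A sketch of this objective (the normalizer here is one reading consistent with the description of classes within a group not competing; treat it as an assumption rather than the paper's exact form):

```python
import math

def cooperative_softmax_nll(scores, y, groups):
    """Per-example negative log-likelihood where group-mates do not compete.

    scores[c]: score for class c; y: true class; groups: a disjoint,
    exhaustive partition of the classes. The normalizer contains the true
    class and every class outside its group, so classes sharing a group
    (e.g. the negative classes) are never pushed apart from one another.
    """
    own_group = next(g for g in groups if y in g)
    rivals = [c for c in range(len(scores)) if c not in own_group]
    z = math.exp(scores[y]) + sum(math.exp(scores[c]) for c in rivals)
    return math.log(z) - scores[y]

def softmax_nll(scores, y):
    """Ordinary softmax NLL for comparison."""
    z = sum(math.exp(s) for s in scores)
    return math.log(z) - scores[y]
```

With classes 0 and 1 grouped together, class 1's score drops out of class 0's normalizer, so the cooperative loss never exceeds the ordinary softmax loss; for a singleton group the two coincide.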

For our experiments, we constructed a multi-class MIL dataset from the MNIST handwritten digits dataset. Each bag in the constructed dataset contains between five and fifteen randomly chosen instances from the MNIST training set. Digits 0 to 4 are treated as positive classes, with the remaining classes combined into a single negative class. The label set of each bag reflects the presence of instances belonging to positive classes within the bag. The statistics of the constructed dataset are shown in Table 2.

Size of Label Set 0 1 2 3 4 5 Total
Number of Bags 31 322 1200 2024 1795 622 5994
Table 2: Statistics of multi-class MIL dataset constructed from MNIST

We ran the proposed algorithm under the configuration described above on this dataset, with the hyperparameters selected based on the bag-level recall and precision on the validation set. As a baseline, we also trained mi-SVM, which only works in the binary setting, in a one-vs-rest manner and compare it to the proposed algorithm. The instance-level accuracy achieved by both methods on the MNIST training and test sets as well as the accuracy of labels inferred by the proposed algorithm are reported in Table 3.

                         mi-SVM w/ linear kernel    BLISS+Cooperative Softmax
                         Train    Test              Label Inference   Train   Test
Negative Classes (5–9)   29.2     31.6              50.3              46.3    47.9
Positive Class 0         98.6     99.1              96.8              96.9    97.9
Positive Class 1         98.9     99.5              96.1              97.6    98.1
Positive Class 2         89.7     91.0              89.1              88.3    89.1
Positive Class 3         92.4     93.6              89.6              89.7    92.9
Positive Class 4         96.6     96.6              95.1              95.2    95.9
Overall                  62.9     64.7              72.3              70.4    72.0
Table 3: Instance-level accuracy on MNIST training and test sets

As shown, both methods have difficulty disentangling negative instances from positive instances, which is not surprising since only 31 of the 5,994 bags are negative bags, i.e. bags that contain only negative instances. However, the proposed algorithm produces far fewer false positives than mi-SVM while generating only slightly more false negatives, suggesting that it better models the negative instances. Overall, the proposed algorithm improves instance-level accuracy on the test set by 7.3 percentage points over mi-SVM.

4.3 MIL with Domain-Specific Prior

We apply the proposed algorithm to the task of object detection, the goal of which is to predict the locations of bounding boxes of objects in a category of interest in an image. Because manually annotating the bounding boxes of objects in images is labour-intensive and costly, we would like to leverage the plethora of images with only image-level category labels that are available online to train an object detector. This naturally gives rise to a multi-class MIL problem, where the bags correspond to images and instances correspond to bounding boxes in images that could plausibly contain objects. Positive labels correspond to different foreground object categories and the negative label corresponds to background.

For comparability with existing MIL-based methods like [16], we used the same preprocessing pipeline to extract bounding box proposals from images in the PASCAL VOC 2007 dataset and compute features on the bounding boxes. The resulting training set consists of 5011 bags, each containing roughly 2,000 instances on average. Due to the size of the dataset, we first eliminate the obvious negative instances and split each bag with multiple positive labels into several smaller bags. We do so by running the proposed algorithm with the cooperative softmax classifier and multi-class MIL rewards, with the hyperparameters set to ensure high bag-level recall and reasonable bag-level precision. Then, for each original bag, we construct smaller bags consisting of only the instances in the original bag that have the same positive inferred label, thereby reducing the multi-class MIL problem to a binary MIL problem. Next, we run the proposed algorithm again with the cooperative softmax classifier and binary MIL rewards augmented with a domain-specific prior that captures the intuition that if an instance is positive, there should be similar instances in other positive bags and no similar instances in negative bags.

Formally, let M_k(i, B) denote the set of k-nearest neighbours of instance i in bag B in the space of the classifier output, let 𝔅_+ be the set of bags with the positive label, let φ be a function that clips and normalizes the value of its argument to lie within [0, 1], and let r denote the original unaugmented reward defined above. The augmented reward adds to r a term that, for a positively labelled instance, rewards the presence of similar instances in other positive bags and penalizes the presence of similar instances in negative bags.

After labels are inferred, we take the most confident instance with a positive inferred label from each bag, which corresponds to a bounding box, and train an object detector on these inferred bounding boxes using the same procedure and hyperparameters as [16]. Figure 1 shows some instances with correct and incorrect inferred labels. As shown, the proposed algorithm is able to localize objects fairly accurately. In particular, the algorithm localizes faces very well, which is considered incorrect because the ground truth category is person. However, because faces and persons always co-occur, there is in fact no semantic difference between face and person when given only image-level labels. The inferred bounding boxes for bottle and dining table are roughly at the correct locations, but did not capture the full extent of the objects. We report average precision results achieved by the detector trained on the inferred bounding boxes in Table 4. As shown, the proposed algorithm with random initialization and a simple strongly supervised classifier was able to achieve competitive performance compared to [16], which used a sophisticated initialization scheme and a nontrivial extension of MI-SVM.

aero bike bird boat bottle bus car cat chair cow
MaxCover+SLSVM [16] 27.6 41.9 19.7 9.1 10.4 35.8 39.1 33.6 0.6 20.9
BLISS+Coop. Softmax 34.6 41.0 24.9 13.1 15.1 37.0 41.2 22.6 11.6 19.5
table dog horse mbike person plant sheep sofa train tv mAP
MaxCover+SLSVM [16] 10.0 27.7 29.4 39.2 9.1 19.3 20.5 17.1 35.6 7.1 22.7
BLISS+Coop. Softmax 4.5 20.9 25.5 34.8 2.1 15.9 14.2 20.1 38.2 23.7 23.0
Table 4: Object detection results on PASCAL VOC 2007 test set
Figure 1: Examples of correct and incorrect instances with positive inferred labels on PASCAL VOC 2007 trainval set. An instance with a positive inferred label is considered correct if the bounding box it is associated with overlaps with the ground truth bounding box by more than 50%, where overlap is defined as the intersection over union (IoU) between the bounding boxes.

5 Conclusion

We presented a general-purpose method for weakly supervised learning that can be applied to any weak supervision regime and enables any strongly supervised classifier to work in the weakly supervised setting. The proposed method decomposes any weakly supervised learning problem into a label inference problem and a strongly supervised learning problem and unifies the disparate weak supervision regimes by representing them simply as user-defined loss functions. We hope this work will encourage exploration of novel weak supervision regimes that are particularly suited for specific domains and enable performance gains achieved under one weakly supervised setting to be easily transferred to other weakly supervised settings.


  • [1] Stuart Andrews, Ioannis Tsochantaridis, and Thomas Hofmann. Support vector machines for multiple-instance learning. In Advances in neural information processing systems, pages 561–568, 2002.
  • [2] Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine learning, 47(2-3):235–256, 2002.
  • [3] Sébastien Bubeck and Nicolo Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. arXiv preprint arXiv:1204.5721, 2012.
  • [4] Olivier Cappé, Aurélien Garivier, Odalric-Ambrym Maillard, Rémi Munos, Gilles Stoltz, et al. Kullback–leibler upper confidence bounds for optimal sequential allocation. The Annals of Statistics, 41(3):1516–1541, 2013.
  • [5] Shuo Chen, Bin Liu, Mingjie Qian, and Changshui Zhang. Kernel k-means based framework for aggregate outputs classification. In Data Mining Workshops, 2009. ICDMW’09. IEEE International Conference on, pages 356–361. IEEE, 2009.
  • [6] Wei Chen, Yajun Wang, and Yang Yuan. Combinatorial multi-armed bandit: General framework, results and applications. In Proceedings of the 30th International Conference on Machine Learning, 2013.
  • [7] Thomas G Dietterich, Richard H Lathrop, and Tomás Lozano-Pérez. Solving the multiple instance problem with axis-parallel rectangles. Artificial intelligence, 89(1):31–71, 1997.
  • [8] Nando De Freitas and Hendrik Kück. Learning about individuals from group statistics. In Uncertainty in Artificial Intelligence, pages 332–339, 2005.
  • [9] Glenn Fung, Murat Dundar, Balaji Krishnapuram, and R Bharat Rao. Multiple instance learning for computer aided diagnosis. Advances in neural information processing systems, 19:425, 2007.
  • [10] Fei Huang, Arun Ahuja, Doug Downey, Yi Yang, Yuhong Guo, and Alexander Yates. Learning representations for weakly supervised natural language processing tasks. Computational Linguistics, 40(1):85–120, 2014.
  • [11] Kuan-Ting Lai, Felix X Yu, Ming-Syan Chen, and Shih-Fu Chang. Video event detection by inferring temporal instance labels. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 2251–2258. IEEE, 2014.
  • [12] Tze Leung Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in applied mathematics, 6(1):4–22, 1985.
  • [13] Oded Maron and Tomás Lozano-Pérez. A framework for multiple-instance learning. Advances in neural information processing systems, pages 570–576, 1998.
  • [14] Novi Quadrianto, Alex J. Smola, Tibério S. Caetano, and Quoc V. Le. Estimating labels from label proportions. Journal of Machine Learning Research, 10:776–783, 2008.
  • [15] Stefan Rüping. SVM classifier estimation from group probabilities. In International Conference on Machine Learning, pages 911–918, 2010.
  • [16] Hyun O Song, Ross Girshick, Stefanie Jegelka, Julien Mairal, Zaid Harchaoui, and Trevor Darrell. On learning to localize objects with minimal supervision. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1611–1619, 2014.
  • [17] Jun Wang and Jean-Daniel Zucker. Solving multiple-instance problem: A lazy learning approach. In Proceedings of the 17th International Conference on Machine Learning, pages 1119–1125, 2000.
  • [18] Nils Weidmann, Eibe Frank, and Bernhard Pfahringer. A two-level learning method for generalized multi-instance problems. In Machine Learning: ECML 2003, pages 468–479. Springer, 2003.
  • [19] Xin Xu and Eibe Frank. Logistic regression and boosting for labeled bags of instances. In Advances in knowledge discovery and data mining, pages 272–281. Springer, 2004.
  • [20] FX Yu, D Liu, S Kumar, T Jebara, and SF Chang. SVM for learning with label proportions. In Proceedings of the 30th International Conference on Machine Learning, 2013.
  • [21] Min-Ling Zhang and Zhi-Hua Zhou. Improve multi-instance neural networks through feature selection. Neural Processing Letters, 19(1):1–10, 2004.
  • [22] Qi Zhang and Sally A Goldman. EM-DD: An improved multiple-instance learning technique. In Advances in neural information processing systems, pages 1073–1080, 2001.