Improving Semi-Supervised Support Vector Machines Through Unlabeled Instances Selection


Yu-Feng Li    Zhi-Hua Zhou
National Key Laboratory for Novel Software Technology
Nanjing University, Nanjing 210093, China
Abstract

Semi-supervised support vector machines (S3VMs) are a popular class of approaches that try to improve learning performance by exploiting unlabeled data. Though S3VMs have been found helpful in many situations, their performance may degenerate, and the resultant generalization ability may be even worse than that obtained by using the labeled data only. In this paper, we try to reduce the chance of performance degeneration of S3VMs. Our basic idea is that, rather than exploiting all unlabeled data, we should select the unlabeled instances such that only the ones which are very likely to be helpful are exploited, while highly risky unlabeled instances are avoided. We propose the S3VM-us method, which uses hierarchical clustering to select the unlabeled instances. Experiments on a broad range of data sets over eighty-eight different settings show that the chance of performance degeneration of S3VM-us is much smaller than that of existing S3VMs.

* Corresponding author. Email: zhouzh@nju.edu.cn



 

Key words: unlabeled data, performance degeneration, semi-supervised support vector machine

 

1 Introduction

In many real situations there are plentiful unlabeled training data, while the acquisition of class labels is costly and difficult. Semi-supervised learning tries to exploit unlabeled data to help improve learning performance, particularly when there are limited labeled training examples. During the past decade, semi-supervised learning has received significant attention and many approaches have been developed [6, 29, 28].

Among the many semi-supervised learning approaches, S3VMs (semi-supervised support vector machines) [3, 15] are popular and have a solid theoretical foundation. However, though the performance of S3VMs is promising in many tasks, it has been found that there are cases where, by using unlabeled data, the performance of S3VMs is even worse than that of SVMs simply using the labeled data [25, 6, 7]. To enable S3VMs to be accepted by more users in more application areas, it is desirable to reduce the chance of performance degeneration caused by using unlabeled data.

In this paper, we focus on transductive learning and present the S3VM-us (S3VM with Unlabeled instances Selection) method. Our basic idea is that, given a set of unlabeled data, it may not be adequate to use all of them without any sanity check; instead, it may be better to use only the unlabeled instances which are very likely to be helpful while avoiding the unlabeled instances with high risk. To exclude highly risky unlabeled instances, we first introduce two baselines: the first uses a standard clustering technique, motivated by the discernibility of density sets [21], while the other uses a label propagation technique, motivated by confidence estimation. Then, based on an analysis of the deficiencies of these two baseline approaches, we propose the S3VM-us method, which employs hierarchical clustering to help select unlabeled instances. Comprehensive experiments on a broad range of data sets over eighty-eight different settings show that the chance of performance degeneration of S3VM-us is much smaller than that of TSVM [15], while the overall performance of S3VM-us is competitive with TSVM.

The rest of this paper is organized as follows. Section 2 briefly reviews some related work. Section 3 introduces two baseline approaches. Section 4 presents our S3VM-us method. Experimental results are reported in Section 5. The last section concludes this paper.

2 Related Work

Roughly speaking, existing semi-supervised learning approaches mainly fall into four categories. The first category is generative methods, e.g., [19, 20], which extend supervised generative models by exploiting unlabeled data in parameter estimation and label estimation using techniques such as the EM method. The second category is graph-based methods, e.g., [4, 30, 26], which encode both the labeled and unlabeled instances in a graph and then perform label propagation on the graph. The third category is disagreement-based methods, e.g., [5, 27], which employ multiple learners and improve the learners by labeling the unlabeled data based on the exploitation of disagreement among the learners. The fourth category is S3VMs, e.g., [3, 15], which use unlabeled data to regularize the decision boundary so that it goes through low density regions [8].

Though semi-supervised learning approaches have shown promising performance in many situations, it has been indicated by many authors that using unlabeled data may hurt performance [20, 25, 11, 27, 9, 16, 2, 21]. In some application areas, especially the ones which require high reliability, users might be reluctant to use semi-supervised learning approaches due to the worry of obtaining a performance worse than that of simply neglecting unlabeled data. As typical semi-supervised learning approaches, S3VMs also suffer from this deficiency.

The usefulness of unlabeled data has been discussed theoretically [16, 2, 21] and validated empirically [9]. Much of the literature indicates that unlabeled data should be used carefully. For generative methods, Cozman et al. [11] showed that unlabeled data can increase error even in situations where additional labeled data would decrease error; one main conjecture attributes such performance degeneration to the difficulty of making a correct model assumption, which would prevent performance from being hurt by fitting the unlabeled data. For graph-based methods, more and more researchers recognize that graph construction is more crucial than how the labels are propagated, and some attempts have been devoted to using domain knowledge or constructing robust graphs [1, 14]. As for disagreement-based methods, the generalization ability has been studied with plentiful theoretical results based on different assumptions [5, 12, 23, 24]. As for S3VMs, the correctness of the S3VM objective has been studied on small data sets [7].

It is noteworthy that though there is much work devoted to coping with the high computational complexity of S3VMs [15, 10, 7, 18], there has been no proposal on how to reduce the chance of performance degeneration caused by using unlabeled data. A relevant work uses data editing techniques in semi-supervised learning [17]; however, it tries to remove or fix suspicious unlabeled data during the training process, while our proposal tries to select unlabeled instances for S3VM and SVM predictions after the S3VM and SVM have already been trained.

3 Two Baseline Approaches

As mentioned, our intuition is to use only the unlabeled data which are very likely to help improve the performance, and to leave the unlabeled data with high risk unexploited. In this way, the chance of performance degeneration may be significantly reduced. Current S3VMs can be regarded as one extreme case which believes that all unlabeled data are of low risk and therefore all of them should be used, while inductive SVMs which use labeled data only can be regarded as the other extreme case which believes that all the unlabeled data are highly risky and therefore only labeled data should be used.

Specifically, we consider the following problem: once we have obtained the predictions of the inductive SVM and the S3VM, how can we discard the risky predictions of the S3VM such that the resultant performance is often better and rarely worse than that of the inductive SVM?

Two simple ideas can easily be worked out to address the above problem, leading to two baseline approaches, namely S3VM-c and S3VM-p.

In the sequel, suppose we are given a training data set consisting of a set of labeled data $L = \{(x_1, y_1), \ldots, (x_l, y_l)\}$ and a set of unlabeled data $U = \{x_{l+1}, \ldots, x_{l+u}\}$, where $x_i$ is an instance and $y_i \in \{+1, -1\}$ is its label. We further let $\hat{y}^{svm}_i$ and $\hat{y}^{s3vm}_i$ denote the labels predicted on the unlabeled instance $x_i$ by the inductive SVM and the S3VM, respectively.

3.1 S3VM-c

The first baseline approach is motivated by the analysis in [21], which suggests that unlabeled data help when the component density sets are discernible. Here, one can simulate the component density sets by clusters, and discernibility by a condition on the disagreement between S3VM and inductive SVM. We characterize the disagreement using two factors, i.e., bias and confidence. When S3VM obtains the same bias as inductive SVM and enhances the confidence of inductive SVM, one should use the results of S3VM; otherwise it may be risky to totally trust the prediction of S3VM.

Algorithm 1 gives the S3VM-c method and Figure 1(d) illustrates the intuition of S3VM-c. As can be seen, S3VM-c inherits the correct predictions of S3VM on some of the highlighted groups while avoiding its wrong predictions on the others.

Figure 1: Illustration with artificial three-moon data. (a) Labeled data (empty and filled circles) and unlabeled data (gray points). The blocked numbers highlight groups of four unlabeled instances. Classification results of (b) Inductive SVM (using labeled data only); (c) S3VM; (d) S3VM-c, where each circle represents a cluster; (e) S3VM-p; (f) Our proposed S3VM-us.

Input: the predictions $\hat{y}^{svm}$ and $\hat{y}^{s3vm}$ on the unlabeled data, and the cluster number parameter $k$

1:  Perform partitional clustering (e.g., $k$-means) on the training data. Denote by $I_1, \ldots, I_k$ the data indices of the clusters, respectively.
2:  For each cluster $I_j$, calculate the label bias and the confidence of the SVM and S3VM predictions within the cluster.
3:  If S3VM has the same bias as SVM on the cluster and enhances the confidence, use the prediction of S3VM for the cluster; otherwise use the prediction of SVM.
Algorithm 1   S3VM-c
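For illustration, the following is a minimal Python sketch of S3VM-c. The use of k-means from scikit-learn and the concrete bias/confidence definitions (the sign of the mean prediction within a cluster and its magnitude) are assumptions made for this sketch, not necessarily the exact formulation used in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def s3vm_c(X_u, y_svm, y_s3vm, n_clusters=50, random_state=0):
    """Hypothetical sketch of S3VM-c: trust S3VM only on clusters where it
    agrees with the SVM bias and is at least as confident.
    X_u: unlabeled instances; y_svm, y_s3vm: their {-1,+1} predictions."""
    clusters = KMeans(n_clusters=n_clusters, random_state=random_state).fit_predict(X_u)
    y_out = y_svm.copy()
    for c in range(n_clusters):
        idx = np.where(clusters == c)[0]
        if idx.size == 0:
            continue
        m_svm, m_s3vm = y_svm[idx].mean(), y_s3vm[idx].mean()
        # bias = sign of the mean prediction; confidence = its magnitude (assumed)
        if np.sign(m_svm) == np.sign(m_s3vm) and abs(m_s3vm) >= abs(m_svm):
            y_out[idx] = y_s3vm[idx]
    return y_out
```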

3.2 S3VM-p

The second baseline approach is motivated by confidence estimation in graph-based methods, e.g., [30], where the confidence can be naturally regarded as a risk measurement of unlabeled data.

Formally, to estimate the confidence on the unlabeled data, let $\mathbf{y}_L$ be the label vector of the labeled data. Let $W$ be the weight matrix of the training data and let $\Delta$ be the Laplacian of $W$, i.e., $\Delta = D - W$ where $D$ is a diagonal matrix with entries $D_{ii} = \sum_j W_{ij}$. Then, the predictions on the unlabeled data can be obtained by [30]

$$\mathbf{f}_U = -\Delta_{UU}^{-1} \Delta_{UL}\, \mathbf{y}_L, \qquad (1)$$

where $\Delta_{UU}$ is the sub-matrix of $\Delta$ with respect to the block of unlabeled data, while $\Delta_{UL}$ is the sub-matrix of $\Delta$ with respect to the block between unlabeled and labeled data. Then, each unlabeled instance $x_i$ is assigned the label $\mathrm{sign}(f_i)$ with confidence $|f_i|$. After confidence estimation, similar to S3VM-c, we consider the risk of unlabeled data by two factors, i.e., bias and confidence. If S3VM obtains the same bias as label propagation and the confidence is high enough, we use the S3VM prediction; otherwise we take the SVM prediction.
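For illustration, a minimal NumPy sketch of Eq. (1) is given below; the data layout (labeled points occupying the first rows and columns of the weight matrix) and the function name are assumptions of the sketch.

```python
import numpy as np

def harmonic_label_propagation(W, y_l):
    """Harmonic-function label propagation [30].
    W: (n, n) symmetric weight matrix, with the len(y_l) labeled points first;
    y_l: labels in {-1, +1}.
    Returns predicted labels and confidences for the unlabeled points."""
    l = len(y_l)
    D = np.diag(W.sum(axis=1))
    Lap = D - W                                   # graph Laplacian
    Lap_uu, Lap_ul = Lap[l:, l:], Lap[l:, :l]     # unlabeled/unlabeled and unlabeled/labeled blocks
    f_u = -np.linalg.solve(Lap_uu, Lap_ul @ np.asarray(y_l, dtype=float))  # Eq. (1)
    return np.sign(f_u), np.abs(f_u)              # predicted label and confidence
```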

Algorithm 2 gives the S3VM-p method and Figure 1(e) illustrates the intuition of S3VM-p. As can be seen, the correct predictions of S3VM on some of the highlighted groups are inherited by S3VM-p, while its wrong predictions on the other groups are avoided.

Input: the labeled data, the weight matrix $W$, the predictions $\hat{y}^{svm}$ and $\hat{y}^{s3vm}$ on the unlabeled data, and the confidence threshold parameter

1:  Perform label propagation (e.g., [30]) with $W$, and obtain the predicted label and the confidence for each unlabeled instance $x_i$, $i = l+1, \ldots, l+u$.
2:  Update the confidence values such that an unlabeled instance keeps a nonnegative value only if its label propagation prediction agrees with the S3VM prediction and its confidence exceeds the threshold. Let $n$ denote the number of nonnegative entries.
3:  Sort the updated confidence values, pick the top-$n$ values and use the predictions of S3VM for the corresponding unlabeled instances; use the predictions of SVM for the remaining unlabeled instances.
Algorithm 2   S3VM-p
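The following sketch shows one plausible realization of steps 2-3 on top of the propagation routine above; since the update rule is stated only in words, the agreement-and-threshold test and the confidence normalization are assumptions, with `theta` denoting the threshold parameter (set to 0.1 in the experiments).

```python
import numpy as np

def s3vm_p(y_prop, conf, y_svm, y_s3vm, theta=0.1):
    """Hypothetical sketch of S3VM-p: use the S3VM prediction only where it
    agrees with the propagated label and the propagation confidence is high.
    y_prop, conf: outputs of harmonic_label_propagation on the unlabeled data;
    y_svm, y_s3vm: SVM / S3VM predictions on the same instances."""
    conf = conf / (conf.max() + 1e-12)              # normalize confidences (assumed)
    trusted = (y_prop == y_s3vm) & (conf >= theta)  # same bias and confident enough
    return np.where(trusted, y_s3vm, y_svm)
```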
Figure 2: Illustration with artificial two-moon data where S3VM degenerates performance. (a) Labeled data (empty and filled circles) and unlabeled data (gray points). The blocked number highlights a group of four unlabeled instances. Classification results of (b) S3VM-c, where each circle represents a cluster; (c) S3VM-p; (d) Our proposed S3VM-us.

4 Our Proposed Method

4.1 Deficiencies of S3VM-c and S3VM-p

S3VM-c and S3VM-p are capable of reducing the chance of performance degeneration caused by using unlabeled data; however, they both suffer from some deficiencies. S3VM-c works in a local manner and the relations between clusters are never considered, leading to the unexploitation of some helpful unlabeled instances, e.g., the unlabeled instances in some of the highlighted groups in Figure 1(d). For S3VM-p, as stated in [22], the confidence estimated by the label propagation approach might be incorrect if the label initialization is highly imbalanced, again leading to the unexploitation of some useful unlabeled instances, e.g., those in some of the groups in Figure 1(e).

Moreover, both S3VM-c and S3VM-p rely heavily on the predictions of S3VM, which might become a serious issue especially when S3VM obtains degenerated performance. Figures 2(b) and 2(c) illustrate the behaviors of S3VM-c and S3VM-p when S3VM degenerates performance: both erroneously inherit the wrong predictions of S3VM on group 1.

4.2 S3VM-us

The deficiencies of S3VM-c and S3VM-p suggest taking the relations between clusters into account and making the method insensitive to label initialization. This motivates us to use hierarchical clustering [13], leading to our proposed method S3VM-us.

Hierarchical clustering works in a greedy and iterative manner. It first initializes each single instance as a cluster, and then at each step it merges the two clusters with the shortest distance among all pairs of clusters. In this process the relations between clusters are considered, and moreover, since hierarchical clustering works in an unsupervised setting, it does not suffer from the label initialization problem.

Suppose $d^{+}_i$ and $d^{-}_i$ are the lengths of the paths from the instance $x_i$ to its nearest positive and negative labeled instances, respectively, in the hierarchical clustering. We simply take the difference between $d^{+}_i$ and $d^{-}_i$ as an estimation of the confidence on the unlabeled instance $x_i$. Intuitively, the larger the difference between $d^{+}_i$ and $d^{-}_i$, the higher the confidence on labeling $x_i$.
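For illustration, the sketch below measures the path length between an unlabeled instance and a labeled instance by their cophenetic distance in a single-linkage dendrogram (the height at which they are first merged); this interpretation and the SciPy-based implementation are assumptions of the sketch.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import squareform

def clustering_confidence(X, y_l):
    """Confidence |d+ - d-| for each unlabeled instance, where d+ / d- are
    cophenetic distances to the nearest positive / negative labeled instance.
    X: all instances with the len(y_l) labeled ones first; y_l: {-1,+1} labels."""
    y_l = np.asarray(y_l)
    l = len(y_l)
    Z = linkage(X, method='single')             # single-linkage hierarchical clustering
    D = squareform(cophenet(Z))                 # pairwise cophenetic distances
    d_pos = D[l:, :l][:, y_l == +1].min(axis=1) # distance to nearest positive labeled instance
    d_neg = D[l:, :l][:, y_l == -1].min(axis=1) # distance to nearest negative labeled instance
    return np.abs(d_pos - d_neg)
```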

Algorithm 3 gives the S3VM-us method, and Figures 1(f) and 2(d) illustrate the intuition of S3VM-us. As can be seen, the wrong predictions of S3VM on the risky groups are avoided by S3VM-us, the correct predictions of S3VM on the helpful groups are inherited, and S3VM-us does not erroneously inherit the wrong predictions of S3VM on group 1 in Figure 2.

Input: the predictions $\hat{y}^{svm}$ and $\hat{y}^{s3vm}$ on the unlabeled data, the labeled data, and the selection threshold parameter

1:  Let $U'$ be the set of unlabeled instances on which the predictions of SVM and S3VM disagree, i.e., $\hat{y}^{svm}_i \neq \hat{y}^{s3vm}_i$.
2:  Perform hierarchical clustering, e.g., the single linkage method [13].
3:  For each unlabeled instance $x_i \in U'$, calculate $d^{+}_i$ and $d^{-}_i$, that is, the lengths of the paths from $x_i$ to its nearest positive and negative labeled instances, respectively. Denote $c_i = |d^{+}_i - d^{-}_i|$.
4:  Let $S$ be the set of unlabeled instances in $U'$ whose confidence $c_i$ is no smaller than the threshold.
5:  Predict the unlabeled instances in $S$ by S3VM and the remaining instances in $U'$ by SVM.
6:  Predict the unlabeled data outside $U'$ by SVM.
Algorithm 3   S3VM-us
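Putting the pieces together, the following is a compact sketch of Algorithm 3 built on the hypothetical `clustering_confidence` routine above; the confidence normalization and the exact thresholding rule are assumptions, with `theta` playing the role of the selection threshold (fixed to 0.1 in the experiments).

```python
import numpy as np

def s3vm_us(X, y_l, y_svm, y_s3vm, theta=0.1):
    """Hypothetical sketch of S3VM-us: keep the S3VM predictions only for the
    unlabeled instances on which SVM and S3VM disagree and the
    clustering-based confidence is high; fall back to SVM elsewhere.
    X: labeled instances first; y_l: their labels; y_svm, y_s3vm: predictions
    on the unlabeled instances (in the same order as X[len(y_l):])."""
    conf = clustering_confidence(X, y_l)       # |d+ - d-| per unlabeled instance
    conf = conf / (conf.max() + 1e-12)         # normalize to [0, 1] (assumed)
    disagree = (y_svm != y_s3vm)               # step 1: disagreement set
    selected = disagree & (conf >= theta)      # steps 3-4: confident disagreements
    return np.where(selected, y_s3vm, y_svm)   # steps 5-6
```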

5 Experiments

5.1 Settings

We evaluate S3VM-us on a broad range of data sets, including the semi-supervised learning benchmark data sets in [6] and sixteen UCI data sets (http://archive.ics.uci.edu/ml/). The benchmark data sets are g241c, g241d, Digit1, USPS, TEXT and BCI. For each of them, the benchmark archive (http://www.kyb.tuebingen.mpg.de/ssl-book/) provides two data sets, one using 10 labeled examples and the other using 100 labeled examples. As for the UCI data sets, we randomly select 10 and 100 examples, respectively, to be used as labeled examples, and use the remaining data as unlabeled data. The experiments are repeated 30 times and the average accuracies and standard deviations are recorded. It is worth noting that in semi-supervised learning, labeled examples are often too few to afford a valid cross validation, and therefore hold-out tests are usually used for evaluation.

In addition to S3VM-c and S3VM-p, we compare with inductive SVM and TSVM (http://svmlight.joachims.org/) [15]. Both linear and Gaussian kernels are used. For the benchmark data sets, we follow the setup in [6]. Specifically, for the case of 10 labeled examples, the regularization parameter of SVM is fixed to a value determined by the size of the data set and the Gaussian kernel width is set to the average distance between instances; for the case of 100 labeled examples, the regularization parameter is fixed to 100 and the Gaussian kernel width is selected by cross validation. On the UCI data sets, for 10 labeled examples the regularization parameter is fixed to 1 and a fixed Gaussian kernel width is used; for 100 labeled examples, both the regularization parameter and the Gaussian kernel width are selected by cross validation. For S3VM-c, the cluster number is fixed to 50; for S3VM-p, the weight matrix is constructed via Gaussian distance and the threshold parameter is fixed to 0.1; for S3VM-us, the selection threshold parameter is fixed to 0.1.
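The win/tie/loss statistics reported in Tables 1 and 2 compare each semi-supervised method against inductive SVM with paired t-tests over the repeated hold-out runs; a minimal sketch of such a comparison is given below, where the function name and the SciPy-based test are our choices.

```python
import numpy as np
from scipy.stats import ttest_rel

def win_tie_loss(accs_semi, accs_svm, alpha=0.05):
    """Compare per-repetition hold-out accuracies of a semi-supervised method
    and inductive SVM with a paired t-test at the (1 - alpha) level."""
    _, p = ttest_rel(accs_semi, accs_svm)
    if p < alpha:
        return 'win' if np.mean(accs_semi) > np.mean(accs_svm) else 'loss'
    return 'tie'
```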

5.2 Results

Data SVM TSVM S3VM-c S3VM-p S3VM-us
( linear / gaussian ) ( linear / gaussian ) ( linear / gaussian ) ( linear / gaussian ) ( linear / gaussian )
BCI 50.7±1.5 / 52.7±2.7 49.3±2.8 / 51.4±2.7 50.2±2.0 / 52.2±2.6 50.6±1.6 / 52.6±2.7 50.9±1.6 / 52.6±2.7
g241c 53.2±4.8 / 53.0±4.5 78.9±4.7 / 78.5±5.0 55.2±8.3 / 55.3±8.8 53.9±5.8 / 53.6±5.3 53.5±4.8 / 53.2±4.5
g241d 54.4±5.4 / 54.5±5.2 53.6±7.8 / 53.2±6.5 53.8±5.4 / 53.6±5.0 54.1±5.3 / 54.0±5.2 54.4±5.3 / 54.4±5.2
digit1 55.4±10.9 / 75.0±7.9 79.4±1.1 / 81.5±3.1 56.1±12.2 / 77.3±8.2 56.2±12.2 / 75.0±8.1 58.1±9.6 / 75.1±7.8
USPS 80.0±0.1 / 80.7±1.8 69.4±1.2 / 73.0±2.6 80.0±0.1 / 80.4±2.5 80.0±0.1 / 80.5±2.1 80.0±0.1 / 80.7±1.8
Text 54.7±6.3 / 54.6±6.3 71.4±11.7 / 71.2±11.4 56.8±8.8 / 56.5±8.7 55.3±6.6 / 55.2±6.8 58.0±9.0 / 57.8±8.9
house 90.0±6.0 / 84.8±11.8 84.6±8.0 / 84.7±6.9 89.8±6.2 / 84.8±11.9 89.5±6.0 / 84.5±11.8 90.1±6.1 / 85.4±11.4
heart 58.8±10.5 / 63.9±11.6 72.4±12.6 / 72.6±10.4 59.0±10.8 / 64.4±11.6 58.6±10.6 / 63.8±11.7 61.9±9.7 / 65.1±11.0
heart-statlog 74.6±4.8 / 69.9±10.1 74.9±6.6 / 73.9±5.9 74.5±5.2 / 70.1±10.2 74.5±4.9 / 70.0±10.2 74.2±5.4 / 71.7±6.9
ionosphere 70.4±8.7 / 65.8±9.8 72.0±10.5 / 76.1±8.2 70.9±9.0 / 66.1±9.9 70.4±8.7 / 66.0±9.7 70.7±8.3 / 67.4±6.7
vehicle 73.2±8.9 / 58.3±9.5 72.1±9.4 / 63.2±7.8 73.5±9.4 / 58.4±9.6 72.6±9.1 / 58.0±9.5 74.5±9.3 / 64.2±9.1
house-votes 85.5±7.0 / 79.7±10.7 83.8±6.1 / 84.0±5.3 85.7±7.0 / 80.1±10.6 85.3±6.9 / 79.7±10.7 86.0±5.7 / 84.3±6.1
wdbc 65.6±7.5 / 73.8±10.3 90.0±6.1 / 88.9±3.7 65.7±7.8 / 74.9±10.9 66.1±8.0 / 73.9±10.5 65.8±7.5 / 73.9±10.3
clean1 58.2±4.2 / 53.5±6.2 57.0±5.1 / 53.3±4.8 57.8±4.4 / 53.3±6.2 58.5±4.2 / 53.3±6.3 58.2±4.2 / 55.0±8.1
isolet 93.8±4.3 / 82.0±15.7 84.2±10.9 / 86.7±9.5 94.5±5.1 / 83.2±16.0 93.0±4.7 / 81.7±15.7 93.7±4.3 / 84.1±12.6
breastw 93.9±4.8 / 92.3±10.1 89.2±8.6 / 88.9±8.8 94.2±4.9 / 92.4±10.0 93.9±4.9 / 92.2±10.0 93.6±5.4 / 92.4±9.9
australian 70.4±9.2 / 60.3±8.4 69.6±11.9 / 68.6±11.4 70.1±9.8 / 60.4±8.3 70.5±9.4 / 60.5±8.8 70.3±9.2 / 60.8±7.9
diabetes 63.3±6.9 / 66.3±3.5 63.4±7.6 / 65.8±4.6 63.2±6.8 / 65.9±3.0 63.4±6.6 / 66.2±3.4 63.3±6.9 / 66.3±3.5
german 65.2±4.9 / 65.1±12.0 63.7±5.6 / 63.5±5.1 65.6±4.7 / 65.1±11.8 65.6±4.8 / 65.1±11.9 65.2±5.0 / 65.3±11.6
optdigits 96.1±3.2 / 92.8±9.6 89.8±9.2 / 91.4±7.6 96.6±3.1 / 93.6±9.9 95.6±3.0 / 92.4±9.8 96.9±2.5 / 94.9±5.8
ethn 56.5±8.8 / 58.5±10.2 64.2±13.5 / 68.1±14.5 56.5±8.6 / 59.4±11.6 56.8±9.1 / 58.6±10.7 59.8±10.7 / 61.8±11.3
sat 95.8±4.1 / 87.5±10.9 85.5±11.4 / 86.5±10.8 96.3±4.1 / 87.7±11.2 94.8±4.2 / 86.9±10.8 96.4±3.9 / 90.7±8.1
Aver. Acc. 70.9 / 69.3 73.5 / 73.8 71.2 / 69.8 70.9 / 69.3 71.6 / 70.8
 Semi-Supervised vs. SVM: W/T/L 18/18/8 14/29/1 7/25/12 12/32/0
Table 1: Accuracy (mean ± std.) with 10 labeled examples. 'SVM' denotes the inductive SVM which uses labeled data only. For the semi-supervised methods (TSVM, S3VM-c, S3VM-p and S3VM-us), if the performance is significantly better/worse than that of SVM, the corresponding entries are bolded/underlined (paired t-tests at 95% significance level). The win/tie/loss counts with the fewest losses are bolded.
Data SVM TSVM S3VM-c S3VM-p S3VM-us
( linear / gaussian ) ( linear / gaussian ) ( linear / gaussian ) ( linear / gaussian ) ( linear / gaussian )
BCI 61.1±2.6 / 65.9±3.1 56.4±2.8 / 65.6±2.5 58.3±2.6 / 65.6±3.0 60.3±2.5 / 65.8±3.0 61.0±2.7 / 65.8±3.1
g241c 76.3±2.0 / 76.6±2.1 81.7±1.6 / 82.1±1.2 79.3±1.7 / 79.6±1.8 77.2±2.1 / 77.1±2.0 76.3±2.0 / 76.6±2.1
g241d 74.2±1.9 / 75.4±1.8 76.1±8.5 / 77.9±7.4 77.4±3.5 / 78.5±3.3 74.8±2.3 / 75.7±2.2 74.2±1.9 / 75.4±1.8
digit1 50.3±1.2 / 94.0±1.4 81.9±3.0 / 94.0±2.0 50.3±1.2 / 95.0±1.5 50.3±1.2 / 94.1±1.4 67.9±1.3 / 94.1±1.4
USPS 80.0±0.2 / 91.7±1.1 78.8±2.0 / 90.9±1.4 80.0±0.2 / 92.5±1.0 80.0±0.2 / 91.6±1.2 80.1±0.4 / 91.8±1.1
Text 73.8±3.3 / 73.7±3.6 77.7±1.6 / 77.7±1.7 75.3±3.4 / 75.2±3.6 73.9±3.4 / 73.8±3.7 74.1±3.1 / 74.2±3.3
house 95.7±2.0 / 95.6±1.6 94.4±2.5 / 94.8±2.6 95.5±1.8 / 95.4±1.8 95.6±2.0 / 95.5±1.7 95.6±2.0 / 95.6±1.6
heart 81.5±2.5 / 80.1±2.4 80.7±3.1 / 79.5±2.9 81.1±3.0 / 79.8±2.5 81.5±2.5 / 80.2±2.5 81.5±2.6 / 80.1±2.4
heart-statlog 81.5±2.4 / 81.4±2.7 81.6±2.7 / 79.0±4.5 81.2±2.2 / 80.7±3.0 81.5±2.4 / 81.2±2.7 81.5±2.4 / 81.3±2.7
ionosphere 87.1±1.5 / 93.2±1.6 85.6±2.1 / 92.1±2.3 88.7±1.3 / 93.4±1.5 87.1±1.5 / 93.2±1.6 87.1±1.5 / 93.2±1.6
vehicle 92.9±1.7 / 95.4±1.4 91.6±2.5 / 95.4±2.3 93.3±1.6 / 95.9±1.3 92.8±1.7 / 95.2±1.5 93.0±1.7 / 95.5±1.4
house-votes 92.3±1.3 / 92.8±1.2 92.0±1.8 / 93.0±1.4 92.6±1.2 / 92.9±1.2 92.3±1.3 / 92.8±1.2 92.3±1.3 / 92.8±1.2
clean1 73.0±2.7 / 80.6±3.0 73.2±3.1 / 79.1±3.4 73.7±2.9 / 79.9±2.9 73.2±2.6 / 80.4±3.2 73.1±2.7 / 80.7±3.0
wdbc 95.6±0.8 / 94.7±0.9 94.3±2.3 / 94.1±2.4 95.8±0.7 / 94.9±0.9 95.6±0.8 / 94.7±0.9 95.6±0.8 / 94.8±0.9
isolet 99.2±0.4 / 99.0±0.6 95.9±3.1 / 98.2±2.3 99.2±0.4 / 99.2±0.5 99.0±0.4 / 98.9±0.6 99.2±0.4 / 99.1±0.5
breastw 96.4±0.4 / 96.7±0.4 96.9±1.9 / 97.1±0.5 96.6±0.4 / 96.9±0.4 96.3±0.4 / 96.7±0.4 96.4±0.4 / 96.7±0.4
australian 83.8±1.6 / 84.9±1.7 82.5±2.6 / 84.6±2.7 83.8±1.7 / 85.0±1.6 83.9±1.7 / 85.0±1.8 83.8±1.7 / 85.0±1.7
diabetes 75.2±1.7 / 74.7±1.9 72.3±2.3 / 71.8±1.8 74.9±1.7 / 74.2±2.2 75.3±1.6 / 74.7±1.9 75.2±1.8 / 74.7±1.9
german 67.1±2.4 / 72.0±1.5 66.1±2.1 / 65.9±3.4 67.1±2.2 / 71.6±1.5 67.6±2.3 / 72.1±1.4 67.1±2.4 / 72.1±1.5
optdigits 99.4±0.3 / 99.4±0.3 95.9±3.7 / 97.4±3.1 99.5±0.4 / 99.5±0.3 99.2±0.4 / 99.2±0.4 99.5±0.3 / 99.4±0.3
ethn 91.6±1.6 / 93.4±1.2 92.6±2.3 / 93.4±3.0 93.9±1.6 / 95.0±1.2 91.9±1.5 / 93.3±1.2 91.7±1.5 / 93.4±1.2
sat 99.7±0.2 / 99.7±0.1 96.4±2.8 / 97.6±2.7 99.7±0.2 / 99.8±0.1 99.5±0.3 / 99.5±0.3 99.7±0.2 / 99.7±0.1
Aver. Acc. 83.0 / 86.8 83.9 / 86.4 83.5 / 87.3 83.1 / 86.8 83.9 / 86.9
 Semi-Supervised vs. SVM: W/T/L 7/18/19 21/16/7 8/25/11 8/36/0
Table 2: Accuracy (mean ± std.) with 100 labeled examples. 'SVM' denotes the inductive SVM which uses labeled data only. For the semi-supervised methods (TSVM, S3VM-c, S3VM-p and S3VM-us), if the performance is significantly better/worse than that of SVM, the corresponding entries are bolded/underlined (paired t-tests at 95% significance level). The win/tie/loss counts with the fewest losses are bolded.

The results are shown in Tables 1 and 2. As can be seen, the performance of S3VM-us is competitive with that of TSVM. In terms of average accuracy, TSVM performs slightly better (worse) than S3VM-us in the case of 10 (100) labeled examples. In terms of pairwise comparison, S3VM-us performs better than TSVM in 13/12 cases (linear/Gaussian kernel) with 10 labeled examples and in 14/16 cases with 100 labeled examples. Note that in a number of cases TSVM achieves a large performance improvement over inductive SVM, while the improvement of S3VM-us is smaller. This is not a surprise, since S3VM-us tries to improve performance with the caution of avoiding performance degeneration.

Though TSVM achieves large improvements in a number of cases, it also suffers from large performance degeneration in other cases. Indeed, as can be seen from Tables 1 and 2, TSVM is significantly inferior to inductive SVM in 8/44 and 19/44 cases for 10 and 100 labeled examples, respectively. Both S3VM-c and S3VM-p are capable of reducing the number of significant performance degenerations, while S3VM-us never significantly degenerates performance in the experiments.

5.3 Parameter Influence

Figure 3: Influence of the parameter on the improvement of S3VM-us against inductive SVM.

S3VM-us has a single parameter, the selection threshold. To study its influence, we run experiments with 10 labeled examples by setting it to different values (0.1, 0.2 and 0.3). The results are plotted in Figure 3. It can be seen that the setting of the parameter influences the improvement of S3VM-us over inductive SVM. Whether the linear or the Gaussian kernel is used, the larger the value of the parameter, the closer the performance of S3VM-us is to that of SVM. It may be possible to obtain larger performance improvements by setting a smaller value; however, this may also increase the risk of performance degeneration.

6 Conclusion

In this paper we propose the S3VM-us method. Rather than simply predicting all unlabeled instances with the semi-supervised learner, S3VM-us uses hierarchical clustering to help select the unlabeled instances to be predicted by the semi-supervised learner, and predicts the remaining unlabeled instances with the inductive learner. In this way, the risk of performance degeneration caused by using unlabeled data is reduced. The effectiveness of S3VM-us is validated by an empirical study.

The proposal in this paper is based on heuristics, and a theoretical analysis is left for future work. It is worth noting that, along with reducing the chance of performance degeneration, S3VM-us also reduces the possible performance gains from unlabeled data. In the future, it is desirable to develop truly safe semi-supervised learning approaches which are able to improve performance significantly but never degenerate performance by using unlabeled data.

References

  • [1] M. F. Balcan, A. Blum, P. P. Choi, J. Lafferty, B. Pantano, M. R. Rwebangira, and X. Zhu. Person identification in webcam images: An application of semi-supervised learning. In ICML Workshop on Learning with Partially Classified Training Data, 2005.
  • [2] S. Ben-David, T. Lu, and D. Pál. Does unlabeled data provably help? Worst-case analysis of the sample complexity of semi-supervised learning. In COLT, pages 33–44, 2008.
  • [3] K. Bennett and A. Demiriz. Semi-supervised support vector machines. In NIPS 11, pages 368–374. 1999.
  • [4] A. Blum and S. Chawla. Learning from labeled and unlabeled data using graph mincuts. In ICML, pages 19–26, 2001.
  • [5] A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. In COLT, pages 92–100, 1998.
  • [6] O. Chapelle, B. Schölkopf, and A. Zien, editors. Semi-Supervised Learning. MIT Press, Cambridge, MA, 2006.
  • [7] O. Chapelle, V. Sindhwani, and S. S. Keerthi. Optimization techniques for semi-supervised support vector machines. J. Mach. Learn. Res., 9:203–233, 2008.
  • [8] O. Chapelle and A. Zien. Semi-supervised learning by low density separation. In AISTATS, pages 57–64, 2005.
  • [9] N. V. Chawla and G. Karakoulas. Learning from labeled and unlabeled data: An empirical study across techniques and domains. J. Artif. Intell. Res., 23:331–366, 2005.
  • [10] R. Collobert, F. Sinz, J. Weston, and L. Bottou. Large scale transductive SVMs. J. Mach. Learn. Res., 7:1687–1712, 2006.
  • [11] F. G. Cozman, I. Cohen, and M. C. Cirelo. Semi-supervised learning of mixture models. In ICML, pages 99–106, 2003.
  • [12] S. Dasgupta, M. L. Littman, and D. McAllester. PAC generalization bounds for co-training. In NIPS 14, pages 375–382. 2002.
  • [13] A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice Hall, Englewood Cliffs, NJ, 1988.
  • [14] T. Jebara, J. Wang, and S. F. Chang. Graph construction and b-matching for semi-supervised learning. In ICML, pages 441–448, 2009.
  • [15] T. Joachims. Transductive inference for text classification using support vector machines. In ICML, pages 200–209, 1999.
  • [16] J. Lafferty and L. Wasserman. Statistical analysis of semi-supervised regression. In NIPS 20, pages 801–808. 2008.
  • [17] M. Li and Z. H. Zhou. SETRED: Self-training with editing. In PAKDD, pages 611–621, 2005.
  • [18] Y.-F. Li, J. T. Kwok, and Z.-H. Zhou. Semi-supervised learning using label mean. In ICML, pages 633–640, 2009.
  • [19] D. J. Miller and H. S. Uyar. A mixture of experts classifier with learning based on both labelled and unlabelled data. In NIPS 9, pages 571–577. 1997.
  • [20] K. Nigam, A. K. McCallum, S. Thrun, and T. Mitchell. Text classification from labeled and unlabeled documents using EM. Mach. Learn., 39(2):103–134, 2000.
  • [21] A. Singh, R. Nowak, and X. Zhu. Unlabeled data: Now it helps, now it doesn’t. In NIPS 21, pages 1513–1520. 2009.
  • [22] J. Wang, T. Jebara, and S. F. Chang. Graph transduction via alternating minimization. In ICML, pages 1144–1151, 2008.
  • [23] W. Wang and Z.-H. Zhou. Analyzing co-training style algorithms. In ECML, pages 454–465, 2007.
  • [24] W. Wang and Z.-H. Zhou. A new analysis of co-training. In ICML, pages 1135–1142, 2010.
  • [25] T. Zhang and F. Oles. The value of unlabeled data for classification problems. In ICML, pages 1191–1198, 2000.
  • [26] D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In NIPS 16, pages 595–602. 2004.
  • [27] Z.-H. Zhou and M. Li. Tri-training: Exploiting unlabeled data using three classifiers. IEEE Trans. Knowl. Data Eng., 17(11):1529–1541, 2005.
  • [28] Z.-H. Zhou and M. Li. Semi-supervised learning by disagreement. Knowl. Inf. Syst., 24(3):415–439, 2010.
  • [29] X. Zhu. Semi-supervised learning literature survey. Technical Report 1530, Dept. Comp. Sci., Univ. Wisconsin-Madison, 2006.
  • [30] X. Zhu, Z. Ghahramani, and J. D. Lafferty. Semi-supervised learning using gaussian fields and harmonic functions. In ICML, pages 912–919, 2003.