Semi-Supervised Deep Learning Using Improved Unsupervised Discriminant Projection

This work is supported by NSFC China (61806125, 61802247, 61876107) and the Startup Fund for Youngman Research at SJTU (SFYR at SJTU). Authors marked with * made equal contributions. Corresponding author: Enmei Tu (hellotem@hotmail.com).

Abstract

Deep learning demands a huge amount of well-labeled data to train its network parameters. How to use the least amount of labeled data to obtain the desired classification accuracy is of great practical significance, because for many real-world applications (such as medical diagnosis) it is difficult to obtain so many labeled samples. In this paper, we modify the unsupervised discriminant projection (UDP) algorithm for dimension reduction and apply it as a regularization term to propose a new semi-supervised deep learning algorithm, which is able to utilize both the local and nonlocal distribution of abundant unlabeled samples to improve classification performance. Experiments show that, given dozens of labeled samples, the proposed algorithm can train a deep network to attain satisfactory classification results.

Keywords:
Manifold regularization · Semi-supervised learning · Deep learning

1 Introduction

In reality, one of the main difficulties faced by many machine learning tasks is manually tagging large amounts of data. This is especially prominent for deep learning, which usually demands a huge number of well-labeled samples. Therefore, how to train a deep network with the least amount of labeled data has become an important topic in the area. To overcome this problem, researchers have proposed using a large amount of unlabeled data to extract the topology of the overall data distribution. Combined with a small amount of labeled data, this can significantly improve the generalization ability of the model, which is the so-called semi-supervised learning [5, 21, 18].

Recently, semi-supervised deep learning has made some progress. The main ideas of existing works broadly fall into two categories. One is generative model based algorithms, in which unlabeled samples help the generative model learn the underlying sample distribution for sample generation. Examples of this type of algorithm include CatGAN [15], BadGAN [7], variational Bayesian methods [10], etc. The other is discriminant model based algorithms, in which the unlabeled data provide sample distribution information to prevent model overfitting, or to make the model more resistant to disturbances. Typical algorithms of this type include unsupervised loss regularization [16, 1], latent feature embedding [18, 20, 8, 14], and pseudo labeling [11, 19]. Our method belongs to the second category: an unsupervised regularization term, which captures the local and global sample distribution characteristics, is added to the loss function for semi-supervised deep learning.

The proposed algorithm is based on the theory of manifold regularization, which was developed by Belkin et al. [3, 4] and then introduced into deep learning by Weston et al. [18]. Given labeled samples $\{x_i\}_{i=1}^{l}$ and their corresponding labels $\{y_i\}_{i=1}^{l}$, recall that manifold regularization combines the idea of manifold learning with that of semi-supervised learning: it learns the manifold structure of the data from a large amount of unlabeled data, which gives the model better generalization. Compared to the loss function in the traditional supervised learning framework, the manifold regularization based semi-supervised learning algorithm adds a regularization term that penalizes the complexity of the discriminant function $f$ over the sample distribution manifold, as shown in equation (1):

$$f^{*} = \arg\min_{f \in \mathcal{H}_K} \ \frac{1}{l}\sum_{i=1}^{l} V(x_i, y_i, f) \;+\; \gamma_A \lVert f \rVert_K^2 \;+\; \gamma_I \lVert f \rVert_I^2 \qquad (1)$$

where $V(x_i, y_i, f)$ is an arbitrary supervised loss term and $\lVert f \rVert_K$ is a kernel norm, such as one induced by a Gaussian kernel function, that penalizes the model complexity in the ambient (data) space. $\lVert f \rVert_I$ is the introduced manifold regularization term, which penalizes model complexity along the data distribution manifold to make sure that the prediction output has the same distribution as the input data. $\gamma_A$ and $\gamma_I$ are the corresponding weights. As shown in Fig. 1, after the manifold regularization term is introduced, the decision boundary tries not to destroy the manifold structure of the data distribution while keeping itself as simple as possible, so that it finally passes through regions where the data is sparsely distributed.
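To make the intrinsic penalty $\lVert f \rVert_I^2$ concrete, the following is a minimal NumPy sketch (not the authors' code) of its usual graph-Laplacian form, assuming an RBF affinity over all samples; the function names and the kernel width are illustrative choices.

```python
import numpy as np

def rbf_affinity(X, sigma=1.0):
    """Dense RBF affinity W_ij = exp(-||x_i - x_j||^2 / (2 * sigma^2))."""
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def manifold_penalty(F, W):
    """Graph-Laplacian smoothness term trace(F^T L F), which equals
    0.5 * sum_ij W_ij * ||f(x_i) - f(x_j)||^2 for predictions F (n x c)."""
    L = np.diag(W.sum(axis=1)) - W      # unnormalized graph Laplacian
    return np.trace(F.T @ L @ F)
```

In practice this penalty is estimated from both labeled and unlabeled samples, which is exactly where the unlabeled data enter the objective.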

Figure 1: Manifold regularization pushes the decision boundary to where the data distribution is sparse. Left: traditional supervised learning; right: manifold regularized semi-supervised learning.

However, the application of manifold regularization to semi-supervised deep learning has not been fully explored. The construction of the manifold regularizer considers only the local structural relationship of samples. For classification problems, we should not only preserve the positional relationship of neighboring data to ensure clustering, but also distinguish data from different manifolds and separate them in the embedded space. Therefore, in this paper we propose a novel manifold loss term based on an improved Unsupervised Discriminant Projection (UDP) [9], which incorporates both local and nonlocal distribution information, and we conduct experiments on real-world datasets to demonstrate that it produces better classification accuracy for semi-supervised deep learning than its counterparts.

The rest of the paper is organized as follows: the theory and the proposed algorithm are presented in Section 2; the experimental results are given in Section 3, followed by conclusions and discussions in Section 4.

2 Improved UDP Regularization Term

In this section, we first review the UDP algorithm and then introduce an improved version of it. We then propose a semi-supervised deep learning algorithm based on the improved UDP.

2.1 Basic idea of UDP

The UDP method was originally proposed by Yang et al. for dimensionality reduction of small-scale high-dimensional data [9]. As a method for multi-manifold learning, UDP considers both local and nonlocal quantities of the data distribution. The basic idea of UDP is shown in Fig. 2. Suppose that the data is distributed on two elliptical manifolds, as illustrated in the figure. If we only require that neighboring points remain close after being projected along a certain direction, then the direction that best preserves these local distances will be chosen as optimal; but after such a projection the two clusters become mixed with each other and difficult to separate. Therefore, while requiring neighboring data to remain sufficiently close after projection, we should also choose the projection direction so that the distance between different clusters is as large as possible. Data projected in this way are more conducive to clustering after dimensionality reduction.

Figure 2: Illustration of clusters of two-dimensional data and optimal projection directions [9].

For this reason, UDP uses the ratio of local scatter to nonlocal scatter to find a projection that draws close data closer while simultaneously pushing distant data even farther apart. The local scatter can be characterized by the mean square of the Euclidean distance between any pair of projected sample points that are neighbors. The criterion for judging neighbors can be either $K$-nearest neighbors or $\epsilon$-neighborhoods. Since the value of $\epsilon$ is difficult to determine and may generate an unconnected graph, the $K$-nearest-neighbor criterion is used here to define the weighted adjacency matrix $H$ with kernel weighting:

$$H_{ij} = \begin{cases} \exp\!\left(-\dfrac{\lVert x_i - x_j \rVert^2}{t}\right), & \text{if } x_j \in N_K(x_i)\ \text{or}\ x_i \in N_K(x_j) \\[4pt] 0, & \text{otherwise} \end{cases} \qquad (2)$$
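The following is a minimal NumPy sketch of this kernel-weighted $K$-nearest-neighbor adjacency; the symmetrization rule and default parameter values are our assumptions for illustration, and the dense distance matrix limits it to small $M$.

```python
import numpy as np

def knn_kernel_adjacency(X, K=10, t=1.0):
    """H_ij = exp(-||x_i - x_j||^2 / t) if x_j is among the K nearest
    neighbors of x_i (or vice versa), and 0 otherwise."""
    M = X.shape[0]
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    H = np.zeros((M, M))
    for i in range(M):
        nn = np.argsort(d2[i])[1:K + 1]     # skip the point itself
        H[i, nn] = np.exp(-d2[i, nn] / t)
    return np.maximum(H, H.T)               # symmetrize: neighbor in either direction
```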

Then, given a training set containing $M$ samples $\{x_i\}_{i=1}^{M}$, where $t$ is the kernel width and $N_K(x_i)$ denotes the set of $K$ nearest neighbors of $x_i$, denote the local (neighbor) pair set $U_K = \{(i,j) : x_j \in N_K(x_i)\ \text{or}\ x_i \in N_K(x_j)\}$. After projecting $x_i$ and $x_j$ onto a direction $w$, we get their images $y_i = w^T x_i$ and $y_j = w^T x_j$. The local scatter is defined as

$$J_L(w) = \frac{1}{2}\sum_{i=1}^{M}\sum_{j=1}^{M} H_{ij}\,(y_i - y_j)^2 \qquad (3)$$

Similarly, the nonlocal scatter can be characterized by the mean square of the Euclidean distance between any pair of projected sample points that are not neighbors of each other. It is defined as

$$J_N(w) = \frac{1}{2}\sum_{(i,j)\notin U_K} (y_i - y_j)^2 \qquad (4)$$

The optimal projection vector $w^{*}$ minimizes the following objective function:

$$J(w) = \frac{J_L(w)}{J_N(w)}, \qquad w^{*} = \arg\min_{w} J(w) \qquad (5)$$
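As an illustration, the sketch below evaluates the local scatter, the nonlocal scatter and their ratio for a single projection direction $w$, with the constant normalization factors dropped since they do not change the minimizer; in the original UDP the optimal $w$ is obtained by solving a generalized eigenvalue problem on the corresponding scatter matrices rather than by evaluating the ratio directly.

```python
import numpy as np

def udp_objective(X, w, H):
    """Ratio of local to nonlocal scatter, equations (3)-(5), for one
    projection direction w (H is the kernel adjacency from equation (2))."""
    y = X @ w                                  # projected samples y_i = w^T x_i
    diff2 = (y[:, None] - y[None, :]) ** 2     # (y_i - y_j)^2 for all pairs
    neighbor = H > 0
    J_L = 0.5 * np.sum(H * diff2)              # local scatter over neighbor pairs
    J_N = 0.5 * np.sum(diff2[~neighbor])       # nonlocal scatter over the remaining pairs
    return J_L / J_N
```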

2.2 An improved UDP for large scale dimension reduction

Since the original UDP method was developed for dimensionality reduction of small-scale data sets, the data outside the $K$ nearest neighbors of a sample are regarded as nonlocal data and participate in the calculation of the nonlocal scatter. However, when the scale of the training data is large, this way of calculating the nonlocal scatter brings a prohibitive computational burden, because each sample has on the order of $M$ nonlocal points. To overcome this problem, we propose an improved UDP for large scale dimension reduction.

Suppose there are $M$ training data $\{x_i\}_{i=1}^{M}$, and the desired output of $x_i$ after dimension reduction is $y_i$. Using the Euclidean distance as a measure, similar to the definition of the $K$-nearest-neighbor set $N_K(x_i)$, we define the $N$-distant set $D_N(x_i)$ containing the $N$ points farthest from $x_i$. Correspondingly, we define a non-adjacency matrix $W$:

$$W_{ib} = \begin{cases} \exp\!\left(-\dfrac{\lVert x_i - x_b \rVert^2}{t}\right), & \text{if } x_b \in D_N(x_i) \\[4pt] 0, & \text{otherwise} \end{cases} \qquad (6)$$

Then we define the distant scatter $J_D$ as

$$J_D = \sum_{i=1}^{M}\sum_{x_b \in D_N(x_i)} W_{ib}\,\lVert y_i - y_b \rVert_2^2 \qquad (7)$$

For the local scatter $J_L$, we use the same definition as in the original UDP. So the objective function of the improved UDP is

$$J_R = \frac{J_L}{J_D} = \frac{\sum_{i=1}^{M}\sum_{(i,j)\in U_K} H_{ij}\,\lVert y_i - y_j \rVert_2^2}{\sum_{i=1}^{M}\sum_{x_b \in D_N(x_i)} W_{ib}\,\lVert y_i - y_b \rVert_2^2}$$
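A minimal sketch of the improved criterion, which for each sample uses only its $K$ nearest and $N$ most distant points; whether the same kernel width $t$ is shared by $H_{ij}$ and $W_{ib}$ is our assumption, and the dense pairwise distances are kept only for clarity.

```python
import numpy as np

def improved_udp_ratio(Y, X, K=10, N=50, t=1.0):
    """J_R = sum over local pairs of H_ij ||y_i - y_j||^2 divided by the sum
    over distant pairs of W_ib ||y_i - y_b||^2, where Y holds the
    low-dimensional outputs y_i of the inputs X."""
    M = X.shape[0]
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    num, den = 0.0, 0.0
    for i in range(M):
        order = np.argsort(d2[i])
        nn, far = order[1:K + 1], order[-N:]    # K nearest, N most distant points
        num += np.sum(np.exp(-d2[i, nn] / t) * np.sum((Y[i] - Y[nn]) ** 2, axis=1))
        den += np.sum(np.exp(-d2[i, far] / t) * np.sum((Y[i] - Y[far]) ** 2, axis=1))
    return num / den
```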

The improved UDP also requires that, after the mapping of the deep network, the outputs of similar data are as close as possible, while the outputs of dissimilar data are pushed apart. Although only the nearest and most distant data are used, in the process of pushing the dissimilar data away from each other, the data similar to them gather around them respectively, thus widening the distance between the classes and making the sparse areas of the data distribution sparser and the dense areas denser.

2.3 The improved UDP based semi-supervised deep learning

Suppose we have a dataset $X = \{x_1, \dots, x_{L+U}\}$, in which the first $L$ data points are labeled samples with labels $\{y_1, \dots, y_L\}$ and the remaining $U$ data points are unlabeled. Let $z_i = f_\theta(x_i)$ be the embedding of sample $x_i$ produced by a deep network $f_\theta$. Our aim is to train the network using both labeled and unlabeled samples, such that different classes are well separated and, meanwhile, cluster structures are well preserved. Putting everything together, we have the following objective function

$$\min_{\theta} \ \frac{1}{L}\sum_{i=1}^{L} \ell\big(f_\theta(x_i),\, y_i\big) \;+\; \lambda\,\frac{1}{U}\, J_R\big(z_{L+1}, \dots, z_{L+U}\big) \qquad (8)$$

where $L$ is the number of labeled data and $U$ is the number of unlabeled data, $\ell$ is the supervised loss function and $J_R$ is the UDP regularization term. $\lambda$ is a hyperparameter used to balance the supervised loss and the unsupervised loss. We use the softmax (cross-entropy) loss as our supervised loss, but other types of loss functions (e.g. mean squared error) are also applicable.
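As a sketch of how objective (8) can be evaluated on a mini-batch in PyTorch (an illustration, not the authors' implementation): the labeled batch contributes the cross-entropy term, while one unlabeled sample together with its precomputed neighbors and distant points contributes the UDP ratio; the small epsilon in the denominator and all function names are our choices.

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(model, x_lab, y_lab, x_i, x_nn, x_far, h_w, w_w, lam=1.0):
    """Mini-batch estimate of objective (8): supervised cross-entropy on a
    labeled batch plus lam * (improved UDP ratio) from one unlabeled sample
    x_i (shape [1, d]), its K nearest neighbors x_nn ([K, d]) and its N most
    distant points x_far ([N, d]); h_w and w_w hold the precomputed kernel
    weights H_ij ([K]) and W_ib ([N])."""
    sup = F.cross_entropy(model(x_lab), y_lab)

    z_i, z_nn, z_far = model(x_i), model(x_nn), model(x_far)
    local = torch.sum(h_w * torch.sum((z_i - z_nn) ** 2, dim=1))
    distant = torch.sum(w_w * torch.sum((z_i - z_far) ** 2, dim=1))
    udp = local / (distant + 1e-8)   # improved UDP ratio J_R (epsilon avoids division by zero)

    return sup + lam * udp
```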

We use error backpropagation (BP) to train the network. The details of the training process are given in the following algorithm.

Algorithm 1: Semi-supervised deep learning based on improved UDP
Input: labeled data $\{x_i\}_{i=1}^{L}$ with labels $\{y_i\}_{i=1}^{L}$, unlabeled data $\{x_i\}_{i=L+1}^{L+U}$. Output: trained network $f_\theta$ with the embedded UDP regularization term $J_R$.
1: Find the $K$ nearest neighbors and the $N$ most distant samples of every sample
2: Calculate the kernel weights $H_{ij}$ for neighbors and $W_{ib}$ for distant samples
3: repeat
4:     Randomly select a batch of labeled data and their labels
5:     Take a gradient descent step on the supervised loss
6:     Select an unlabeled sample $x_i$ together with its $K$ nearest and $N$ most distant samples
7:     Take a gradient descent step on the UDP regularization term $J_R$
8: until the accuracy requirement is met or all iterations are completed
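The sketch below mirrors the alternating gradient steps of Algorithm 1 in PyTorch; the data loaders, the `unlab_sampler` that yields an unlabeled sample with its precomputed neighbor/distant points, and the weight tensors are hypothetical helpers introduced only for illustration.

```python
import torch
import torch.nn.functional as F

def train(model, lab_loader, unlab_sampler, h_weights, w_weights,
          lam=1.0, epochs=100, lr=1e-3):
    """Alternating training roughly following Algorithm 1: a supervised step
    on a labeled batch, then an unsupervised step on one unlabeled sample
    with its precomputed K nearest and N most distant points."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for (x_lab, y_lab), (x_i, x_nn, x_far, idx) in zip(lab_loader, unlab_sampler):
            # supervised gradient step (cross-entropy on the labeled batch)
            opt.zero_grad()
            F.cross_entropy(model(x_lab), y_lab).backward()
            opt.step()

            # unsupervised gradient step (improved UDP regularization term)
            opt.zero_grad()
            z_i, z_nn, z_far = model(x_i), model(x_nn), model(x_far)
            local = torch.sum(h_weights[idx] * torch.sum((z_i - z_nn) ** 2, dim=1))
            distant = torch.sum(w_weights[idx] * torch.sum((z_i - z_far) ** 2, dim=1))
            (lam * local / (distant + 1e-8)).backward()
            opt.step()
    return model
```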

3 Experimental Results

3.1 Results of dimensionality reduction

Firstly, we test the dimensionality reduction performance of the improved UDP method on two different image datasets, MNIST and ETH-80. Then we compare the improved UDP with the original UDP, as well as with several popular dimension reduction algorithms (Isomap [2], multidimensional scaling (MDS) [6], t-SNE [13] and spectral embedding [12]), to show its performance improvement.

MNIST is a dataset consisting of 28×28 grayscale images of handwritten digits. We randomly selected 5000 samples from the dataset to perform our experiments, because the original UDP usually applies only to small-scale datasets. ETH-80 is a small-scale but more challenging dataset which consists of RGB images from 8 categories. We use all the 820 samples from the "apples" and "pears" categories and convert the images from RGB to grayscale for convenience of manipulation. The parameters of the baseline algorithms are set to their suggested default values, and the parameters of the improved UDP (kernel width $t$, number of nearest neighbors $K$ and number of farthest points $N$) are set empirically. The experimental results on the two datasets are shown in Fig. 3 and Fig. 4, respectively.
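For reference, the baseline embeddings can be reproduced with scikit-learn roughly as follows; the parameter values shown are illustrative defaults, not the settings used in the paper.

```python
from sklearn.manifold import Isomap, MDS, TSNE, SpectralEmbedding

def baseline_embeddings(X, n_neighbors=10):
    """2-D embeddings from the four baseline methods compared in Fig. 3 and Fig. 4."""
    return {
        "Isomap": Isomap(n_components=2, n_neighbors=n_neighbors).fit_transform(X),
        "MDS": MDS(n_components=2).fit_transform(X),
        "t-SNE": TSNE(n_components=2).fit_transform(X),
        "Spectral Embedding": SpectralEmbedding(n_components=2,
                                                n_neighbors=n_neighbors).fit_transform(X),
    }
```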

Figure 3: Dimension reduction of digits 1 and 2 in MNIST.
Figure 4: Dimension reduction of the "apples" and "pears" categories in ETH-80.

From these results we can see that, after dimension reduction, the improved UDP maps different classes to better-separated regions than the original UDP on both datasets. This is important because, when the improved UDP is adopted for semi-supervised learning in equation (8), better separation means more accurate classification. It is also worth mentioning that although on the ETH-80 dataset the improved UDP achieves results comparable to the remaining baseline algorithms, its results on MNIST are much better (especially compared with MDS and Isomap) in terms of class separation.

To quantitatively measure the class separation, Table 1 shows the cluster purity given by the k-means clustering algorithm on these two datasets after dimensionality reduction. The purity is calculated based on the maximum matching degree [17] after clustering.

Method                    MNIST   ETH-80
UDP                       81.7    77.7
Improved UDP              93.8    99.4
Isomap [2]                86.1    99.9
MDS [6]                   53.7    98.9
t-SNE [13]                93.1    100.0
Spectral Embedding [12]   98.6    100.0

Table 1: Cluster purity (%) of the 6 methods.

Table 1 demonstrates that our improved UDP method improves the cluster purity by a large margin compared to the original UDP. It can also be seen from Fig. 3 and Fig. 4 that the improved UDP is more suitable for clustering than the original UDP. Furthermore, our method is more efficient than the original UDP because we do not have to calculate a fully connected graph; we only need the kernel weights of the neighbors and the distant data. On both datasets, our method obtains much better (on MNIST) or competitive results compared with the other dimension reduction methods.
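A simple way to compute a purity score of this kind is sketched below; it uses majority-vote matching between k-means clusters and ground-truth classes, which is a common proxy for the maximum-matching degree of [17] and may differ slightly from the exact measure used in Table 1.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_purity(Z, labels, n_clusters):
    """Run k-means on the embedded data Z and score each cluster by the size
    of its dominant ground-truth class (labels are assumed integer-encoded)."""
    pred = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(Z)
    matched = 0
    for c in np.unique(pred):
        members = labels[pred == c]
        matched += np.bincount(members).max()   # dominant class count in cluster c
    return matched / len(labels)
```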

3.2 Results of classification

We conduct experiments on the MNIST and SVHN datasets to compare the proposed algorithm with supervised deep learning (SDL) and Manifold Regularization (MR) based semi-supervised deep learning [18]. For MNIST, the number of labeled data is set to 100, combined with 2000 unlabeled data, to train a deep network. For SVHN, we randomly select 1000 samples from the training set as labeled data and 20000 samples as unlabeled data to train the network. In both experiments, we evaluate the trained network on the testing set (of size 10000 for MNIST and 26032 for SVHN) to obtain the testing accuracy. We choose Adam as the optimizer. The parameters are tuned manually using a simple grid search: $K$ and $N$ are set to 10 and 50, respectively, and the kernel width $t$ is selected from a predefined range.

We adopt the three embedding network structures described in [18]; the results on MNIST and SVHN are shown in Table 2. For supervised deep learning, we apply the cross-entropy loss at the network output layer only, since middle-layer embedding and the auxiliary network are not applicable without unlabeled data. From the table we can see that MR is better for middle-layer embedding, while our method is better for output-layer embedding and auxiliary-network embedding and achieves better classification results for most network structures. The results also suggest that it may be helpful to combine MR and UDP, using MR for the hidden layers and UDP for the output layer (see footnote 9).

                           MNIST (100 labeled)           SVHN (1000 labeled)
                           SDL     MR     Improved UDP   SDL     MR     Improved UDP
Output layer embedding     74.31   82.95  83.19          55.21   64.70  72.66
Middle layer embedding     -       83.52  83.07          -       72.10  69.35
Auxiliary neural network   -       87.55  87.79          -       62.61  71.32

Table 2: Classification correct rate (%).

4 Conclusions and Future Work

Training a deep network with a small number of labeled samples is of great practical significance, since in many real-world applications it is difficult to collect enough labeled samples. In this paper, we modify the unsupervised discriminant projection (UDP) algorithm to make it suitable for large-scale dimension reduction and semi-supervised learning. The new algorithm takes both local and nonlocal manifold information into account and, meanwhile, reduces the computational cost. Based on this, we propose a new semi-supervised deep learning algorithm that trains a deep network with a very small number of labeled samples and many unlabeled samples. The experimental results on different real-world datasets demonstrate its validity and effectiveness.

The construction of the neighbor graph is based on the Euclidean distance in the data space, which may not be a proper distance measure on the data manifold. In the future, other neighbor graph construction methods, such as measures on the Riemannian manifold, will be tried. A limitation of the current method is that it attains good results for tasks that are not too complex, such as MNIST, but for more challenging classification datasets, such as CIFAR10, for which direct nearest neighbors may not reflect the actual similarity, the method may not perform very well. Our future work will try to use pre-learning techniques, such as auto-encoders or kernel methods, to map the original data to a more concise representation.

Footnotes

  2. Email: hanxiao2015@sjtu.edu.cn, wangzihao33@sjtu.edu.cn, hellotem@hotmail.com
  7. ETH-80:https://github.com/Kai-Xuan/ETH-80
  8. SVHN: The Street View House Numbers dataset (http://ufldl.stanford.edu/housenumbers/), which consists of color images of real-world house-number digits with varied appearance and is a highly challenging classification problem.
  9. We leave this to our future work. We should also point out that although the classification accuracies are somewhat lower than the state-of-the-art results, the network we employed is a traditional multilayer feedforward network and we did not utilize any advanced training techniques such as batch normalization or random data augmentation. In the future, we will train a more complex network with advanced training techniques to make thorough comparisons.

References

  1. P. Bachman, O. Alsharif and D. Precup (2014) Learning with pseudo-ensembles. In Advances in Neural Information Processing Systems, pp. 3365–3373.
  2. M. Balasubramanian and E. L. Schwartz (2002) The Isomap algorithm and topological stability. Science 295 (5552), pp. 7–7.
  3. M. Belkin, P. Niyogi and V. Sindhwani (2006) Manifold regularization: a geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research 7, pp. 2399–2434.
  4. M. Belkin, P. Niyogi and V. Sindhwani (2005) On manifold regularization. In AISTATS, pp. 1.
  5. O. Chapelle, B. Schölkopf and A. Zien (2006) Semi-Supervised Learning. MIT Press.
  6. T. F. Cox and M. A. Cox (2000) Multidimensional Scaling. Chapman and Hall/CRC.
  7. Z. Dai, Z. Yang, F. Yang, W. W. Cohen and R. R. Salakhutdinov (2017) Good semi-supervised learning that requires a bad GAN. In Advances in Neural Information Processing Systems, pp. 6510–6520.
  8. E. Hoffer and N. Ailon (2016) Semi-supervised deep learning by metric embedding. arXiv preprint arXiv:1611.01449.
  9. J. Yang, D. Zhang, J. Yang and B. Niu (2007) Globally maximizing, locally minimizing: unsupervised discriminant projection with applications to face and palm biometrics. IEEE Transactions on Pattern Analysis and Machine Intelligence 29 (4), pp. 650–664.
  10. D. P. Kingma, S. Mohamed, D. J. Rezende and M. Welling (2014) Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pp. 3581–3589.
  11. D. Lee (2013) Pseudo-label: the simple and efficient semi-supervised learning method for deep neural networks. In Workshop on Challenges in Representation Learning, ICML, Vol. 3, pp. 2.
  12. B. Luo, R. C. Wilson and E. R. Hancock (2003) Spectral embedding of graphs. Pattern Recognition 36 (10), pp. 2213–2230.
  13. L. van der Maaten and G. Hinton (2008) Visualizing data using t-SNE. Journal of Machine Learning Research 9 (Nov), pp. 2579–2605.
  14. A. Rasmus, H. Valpola, M. Honkala, M. Berglund and T. Raiko (2015) Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems, pp. 3546–3554.
  15. J. T. Springenberg (2015) Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390.
  16. S. Thulasidasan and J. Bilmes (2016) Semi-supervised phone classification using deep neural networks and stochastic graph-based entropic regularization. arXiv preprint arXiv:1612.04899.
  17. E. Tu, L. Cao, J. Yang and N. Kasabov (2014) A novel graph-based k-means for nonlinear manifold clustering and representative selection. Neurocomputing 143, pp. 109–122.
  18. J. Weston and R. Collobert (2012) Deep learning via semi-supervised embedding. In International Conference on Machine Learning.
  19. H. Wu and S. Prasad (2018) Semi-supervised deep learning using pseudo labels for hyperspectral image classification. IEEE Transactions on Image Processing 27 (3), pp. 1259–1270.
  20. Z. Yang, W. W. Cohen and R. Salakhutdinov (2016) Revisiting semi-supervised learning with graph embeddings. arXiv preprint arXiv:1603.08861.
  21. X. Zhu and A. B. Goldberg (2009) Introduction to semi-supervised learning. Synthesis Lectures on Artificial Intelligence and Machine Learning 3 (1), pp. 1–130.