Fast Support Vector Machines Using Parallel Adaptive Shrinking on Distributed Systems

Abstract

Support Vector Machines (SVM), a popular machine learning technique, have been applied to a wide range of domains such as science, finance, and social networks for supervised learning. Whether it is identifying high-risk patients for health-care professionals, or potential high-school students for college enrollment by school districts, SVMs can play a major role for social good. This paper undertakes the challenge of designing a scalable parallel SVM training algorithm for large-scale systems, including commodity multi-core machines, tightly connected supercomputers, and cloud computing systems. Intuitive techniques for improving the time-space complexity are proposed, including adaptive elimination of samples for faster convergence and a sparse representation format. For sample elimination, several heuristics ranging from earliest-possible to lazy elimination of non-contributing samples are proposed. For cases where an early sample elimination might result in a false positive, low-overhead mechanisms for reconstruction of key data structures are proposed. The algorithm and heuristics are implemented and evaluated on various publicly available datasets. Empirical evaluation shows up to 26x speed improvement on some datasets against the sequential baseline when evaluated on multiple compute nodes, and an improvement in execution time of 30-60% is readily observed on a number of other datasets against our parallel baseline.

1 Introduction

Today, simulations and instruments produce exorbitant amounts of data, and the rate of data production is expected to grow dramatically over the coming years [21]. Machine Learning and Data Mining (MLDM) provides algorithms and tools for knowledge extraction from large volumes of data. Several domains such as science, finance, and social networks rely on MLDM algorithms for supervised and unsupervised learning [24]. Support Vector Machines (SVM), a supervised learning algorithm, are ubiquitous due to their excellent accuracy and insensitivity to dimensionality. SVM broadly relies on the idea of large-margin data classification. It constructs a decision surface in the feature space that bisects the two categories and maximizes the margin of separation between the classes of points in the training set. This decision surface is then used to classify the testing set provided by the user. SVM has strong theoretical foundations, and the classification and regression algorithms provide excellent generalization performance [3].

With the increasing data volume and general availability of multi-core machines, several parallel SVM training algorithms have been proposed in the literature. PEGASOS [22] and dual coordinate descent [16] train on extremely large problems, albeit limited to linear SVMs. Cao et al. have proposed a parallel solution extending the previously proposed Sequential Minimal Optimization (SMO) algorithm [18]; however, the empirical evaluation does not show good scalability, and the entire dataset is used for training [4]. Other algorithms have been proposed for specific architectures such as GPUs [5]. A primary problem with the algorithms above is that they use the complete dataset for margin generation during the entire calculation, even though only a fraction of the samples (the support vectors) contribute to the hyperplane calculation. Shrinking, a technique to eliminate non-contributing samples, has been proposed for sequential SVMs [17] to reduce the time complexity of training. However, no parallel shrinking algorithm for multi-core machines and distributed systems exists in the literature.

This paper addresses the limitations of previously proposed approaches and provides a novel parallel SVM training algorithm with adaptive shrinking. We utilize the theoretical framework for shrinking in our parallel solution to improve the speed of convergence, and we use a sparse format for sample representation based on the observation that most real-world datasets are sparse in nature (see Section ?). We study the effect of several heuristics (Section ?), ranging from aggressive to conservative elimination of non-contributing samples during the various stages of execution. The proposed approaches are designed and implemented using state-of-the-art programming models such as the Message Passing Interface (MPI) [15] and Global Arrays [19] for communication and data storage. These programming models are known to provide excellent performance on multi-core and large-scale systems, and can be used on cloud computing systems as well. An empirical evaluation of the proposed approaches shows up to 3x speedup in comparison to the original non-elimination algorithm using the same number of processors, and up to 26x speedup in comparison to libsvm [6].

1.1 Contributions

Specifically, this paper makes the following contributions:

  1. Design and analysis of parallel algorithms to improve the time complexity of SVM training, including adaptive elimination of samples. Several heuristics, categorized as aggressive, average, and conservative, are proposed for the elimination of non-contributing samples.

  2. A space-efficient SVM training algorithm that uses a compressed representation of data samples and avoids the kernel cache. This makes the proposed solution an attractive approach for very large-scale datasets and modern systems.

  3. Implementation of the proposed algorithm and evaluation with several datasets on multi-core systems and large-scale tightly-connected supercomputers. The empirical evaluation indicates the efficacy of the proposed approach: 5x-8x speedup on the USPS and Mushrooms datasets against the sequential baseline [6], and 20-60% improvement in execution time on several datasets against our parallel no-shrinking baseline algorithm.

The rest of the paper is organized as follows: Section 2 provides the background of our work. Section 3 presents the solution space of the algorithms and associated heuristics. Empirical evaluation and analysis are presented in Section 4, and Section 5 discusses related work. Section 6 presents conclusions and future directions.

2 Background

Given training data points $(x_i, y_i)$, $i = 1, \ldots, n$, where $x_i \in \mathbb{R}^d$ and $y_i \in \{-1, +1\}$, we solve the standard two-category soft margin non-linear classification problem. The problem of finding a maximal margin separating hyperplane in a high-dimensional space can be formulated as:

$$\min_{w,\, b,\, \xi} \;\; \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \xi_i \quad \text{subject to} \quad y_i\left(w^{T}\phi(x_i) + b\right) \ge 1 - \xi_i, \;\; \xi_i \ge 0,$$

where $C$ is a regularization parameter which trades off the classifier generality against its accuracy on the training set, $\xi_i$ is a positive slack variable allowing noise in the training set, and $\phi$ maps the input data to a possibly infinite dimensional space (i.e., $\phi : \mathbb{R}^d \to \mathcal{H}$).

2.1 SVM Training

This is a convex quadratic programming problem [3]. Introducing Lagrange multipliers $\alpha_i$ and solving the Lagrangian of the primal to get the Wolfe dual [12], the following formulation is obtained:

$$\max_{\alpha} \; W(\alpha) = \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \alpha_j y_i y_j K(x_i, x_j)$$

subject to:

$$0 \le \alpha_i \le C, \quad i = 1, \ldots, n, \qquad \sum_{i=1}^{n} \alpha_i y_i = 0,$$

where $K(x_i, x_j) = \phi(x_i) \cdot \phi(x_j)$ is the kernel function. Minimizing the primal Lagrangian provides the following formulation for the weight vector:

$$w = \sum_{i=1}^{n} \alpha_i y_i \phi(x_i).$$

SVM training is achieved by a search through the feasible region of the dual problem and maximization of the objective function $W(\alpha)$, with the Karush-Kuhn-Tucker (KKT) conditions [3] identifying the optimal solution. We refer the reader to [9] for the full theoretical treatment of SVMs and their training. Samples with $\alpha_i > 0$ are referred to as support vectors ($SV$, Table ?). The support vectors contribute to the definition of the optimal separating hyperplane; the other examples can be removed from the dataset. The solution of SVM training is given by the optimal $\alpha$ and the threshold $b$. A new point $x$ can be classified with:

$$f(x) = \operatorname{sgn}\left(\sum_{i \in SV} \alpha_i y_i K(x_i, x) - b\right).$$

2.2 Sequential Minimal Optimization (SMO)

SVM training by solving the dual problem is typically conducted by splitting a large optimization problem into a series of smaller sub-problems [17]. The SMO algorithm [20] uses precisely two samples at each optimization step while solving the dual. This makes an analytical solution possible for the quadratic minimization at each step, because of the linear equality constraint $\sum_i \alpha_i y_i = 0$. The avoidance of dependencies on numerical optimization packages makes this algorithm a popular choice [6] for SVM training, resulting in a simplified design and reduced susceptibility to numerical issues [20].

Gradient updates

Several data structures are maintained during SMO training [20]. An essential data structure, $f_i$, is described as follows:

$$f_i = \sum_{j=1}^{n} \alpha_j y_j K(x_j, x_i) - y_i.$$

The relationship between $f_i$ and the gradient of $W(\alpha)$ ($\partial W / \partial \alpha_i = -y_i f_i$) is shown in Table ?. For the rest of the paper, $f_i$ and the gradient are used interchangeably. In all the algorithms proposed in this work, $f_i$, the key component in the gradient of the dual objective function $W(\alpha)$, is maintained for all the samples in the training set (i.e., the non-shrunk samples), and not just the recently optimized samples at a given iteration, for reasons explained in Section 3.4.

The update equation, applied to every maintained $f_i$ after a pair $(x_1, x_2)$ has been optimized, is shown below:

$$f_i \leftarrow f_i + \Delta\alpha_1\, y_1 K(x_1, x_i) + \Delta\alpha_2\, y_2 K(x_2, x_i),$$

where $\Delta\alpha_1 = \alpha_1^{\text{new}} - \alpha_1$ and $\Delta\alpha_2 = \alpha_2^{\text{new}} - \alpha_2$ are the changes in the Lagrange multipliers of the most recently optimized pair.
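The following minimal Python sketch illustrates this gradient maintenance step. The helper names are illustrative (not part of the described implementation), and a dense Gaussian kernel stands in for the CSR-based kernel discussed in Section 3:

```python
import numpy as np

def rbf_kernel(x1, x2, gamma):
    """Gaussian kernel K(x1, x2) = exp(-gamma * ||x1 - x2||^2)."""
    d = x1 - x2
    return np.exp(-gamma * np.dot(d, d))

def update_gradients(f, X, y, i1, i2, d_alpha1, d_alpha2, gamma):
    """Apply f_i <- f_i + da1*y1*K(x1, xi) + da2*y2*K(x2, xi) for all maintained samples."""
    for i in range(len(f)):
        f[i] += d_alpha1 * y[i1] * rbf_kernel(X[i1], X[i], gamma) \
              + d_alpha2 * y[i2] * rbf_kernel(X[i2], X[i], gamma)
    return f
```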

Working Set Selection

Working set selection describes the choice of samples to be evaluated at each step of the algorithm. Since we work with the first derivative of $W(\alpha)$ (i.e., the $f_i$ values), this working set selection is referred to as a first-order heuristic. Keerthi et al. have proposed multiple possibilities [18]. The first iterates over all examples in the training set, and the second evaluates only the worst KKT violators, $b_{up}$ and $b_{low}$, which at each step are calculated as:

$$b_{up} = \min_{i \in I_{up}} f_i, \qquad b_{low} = \max_{i \in I_{low}} f_i,$$

where $I_{up}$ contains the non-bound samples ($0 < \alpha_i < C$) together with those having $y_i = +1, \alpha_i = 0$ or $y_i = -1, \alpha_i = C$, and $I_{low}$ contains the non-bound samples together with those having $y_i = +1, \alpha_i = C$ or $y_i = -1, \alpha_i = 0$. Of these, we have adapted the second modification, and instead of having two loops, we operate only in the innermost loop, avoiding the first costly loop that examines all examples.

We do not compromise the accuracy of the solution, because we select the pair of indices based on the $f_i$ values of all active samples, and not just those of the recently optimized pair as done in [18], and because of the nature of our updates.

These values are the two threshold parameters discussed in the optimized version of SMO [18]. The optimality condition for termination of the algorithm (allowing for numerical issues) is

$$b_{low} \le b_{up} + 2\tau,$$

where $\tau$ is a user-specified tolerance parameter.

It can be seen from the definitions of $b_{up}$, $b_{low}$, and the termination condition that the worst violators are gathered by considering all the samples for the next iteration, not just the recently optimized ones and the non-bound samples (i.e., those with $0 < \alpha_i < C$).
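A minimal sketch of this first-order working set selection, assuming the gradient array `f`, labels `y`, multipliers `alpha`, and bound `C` are available (the helper names are hypothetical):

```python
import numpy as np

def select_working_set(f, y, alpha, C, tau):
    """Return (i_up, i_low, converged) using the b_up/b_low first-order rule."""
    non_bound = (alpha > 0) & (alpha < C)
    in_up = non_bound | ((y == 1) & (alpha == 0)) | ((y == -1) & (alpha == C))
    in_low = non_bound | ((y == 1) & (alpha == C)) | ((y == -1) & (alpha == 0))

    up_idx = np.where(in_up)[0]
    low_idx = np.where(in_low)[0]
    i_up = up_idx[np.argmin(f[up_idx])]     # b_up  = min f_i over I_up
    i_low = low_idx[np.argmax(f[low_idx])]  # b_low = max f_i over I_low

    converged = f[i_low] <= f[i_up] + 2 * tau   # b_low <= b_up + 2*tau
    return i_up, i_low, converged
```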

Adaptive Elimination/Shrinking

Shrinking is a mechanism to expedite the convergence of the SVM training phase by eliminating samples which do not contribute to the hyperplane [17]. With $b_{up}$ and $b_{low}$ defined as above, a sample $i$ may be eliminated if it satisfies the following decision rule:

$$\left(i \in I_{up} \setminus I_0 \ \text{and} \ f_i > b_{low}\right) \quad \text{or} \quad \left(i \in I_{low} \setminus I_0 \ \text{and} \ f_i < b_{up}\right),$$

where $I_0 = \{i : 0 < \alpha_i < C\}$ is the set of non-bound samples. This heuristic is illustrated in Figure ?. The eliminated samples belong to one of two classes: those with $\alpha_i = 0$ and those with $\alpha_i = C$.
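A sketch of this elimination test, under the same assumptions as the selection sketch above (array names are hypothetical):

```python
import numpy as np

def shrink_candidates(f, y, alpha, C, b_up, b_low):
    """Boolean mask of bound samples that currently cannot violate the KKT conditions."""
    bound_up = ((y == 1) & (alpha == 0)) | ((y == -1) & (alpha == C))
    bound_low = ((y == 1) & (alpha == C)) | ((y == -1) & (alpha == 0))
    # Shrink samples whose gradient places them safely outside [b_up, b_low].
    return (bound_up & (f > b_low)) | (bound_low & (f < b_up))
```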

2.3 Programming Models

This paper uses two programming models, MPI [15] and Global Arrays [19], for designing a scalable SMO on distributed systems. Due to space limitations, we provide a brief background of Global Arrays and refer the reader to the literature for MPI [15].

Global Arrays

The Global Arrays programming model provides abstractions for distributed arrays, load/store semantics for the local partition of a distributed array, and one-sided communication to remote partitions. Global Arrays leverages the communication primitives provided by the Communication Runtime for Exascale (ComEx) [27] for one-sided communication, and it has been used for designing many scalable applications in domains such as chemistry [25] and sub-surface modeling [23]. The Global Arrays infrastructure is useful for storing the entire dataset in a compressed row format, and the easy access to local and remote portions of distributed arrays facilitates the design of algorithms which need asynchronous read/write access to the arrays.

3 Solution Space

Representative notations used and their explanation
Name | Symbol
# of processors | $p$
# of training points | $n$
Class label | $y_i$
Lagrange multiplier | $\alpha_i$
Set of support vectors | $SV$
Working set |
Worst violators | $b_{up}$, $b_{low}$
Hyperplane threshold | $b$
Sample in CSR form | $\breve{x}$
Indices set in $\breve{x}$ |
User-specified tolerance | $\tau$
Avg. time |
Row-pointer array |
Average sample length |
Network latency |
Network bandwidth |

This section begins with a presentation of the various steps of the sequential SVM training algorithm (Algorithm ?), followed by a discussion of the organization of the data structures using the parallel programming models. Section 3.2 introduces the parallel training algorithm (the Original algorithm) and presents its time-space complexity. This is followed by a discussion and analysis of multiple parallel shrinking algorithms in Section 3.4.

Figure: (a) Shrunk ids when the optimality condition is not satisfied; refer to the shrinking criterion in Section 2.2. (b) Memory space conservation with CSR. (c) $\breve{x}$: an extended sample prototype in CSR representation. Refer to Table ? for the meaning of the symbols.

3.1 Preliminaries

Algorithm ? shows the key steps of our sequential SVM algorithm. This is used as a basis for designing the parallel SVM algorithms, with (Algorithm ? and its variant) and without (Algorithm ?) shrinking. Using Table ? as reference, at each iteration the quantity $\eta$ is calculated for the selected pair based on the kernel values:

$$\eta = K(x_1, x_1) + K(x_2, x_2) - 2K(x_1, x_2).$$

In most cases the objective function is positive definite ($\eta > 0$), which is used as the basis for the update:

$$\alpha_2^{\text{new}} = \alpha_2 + \frac{y_2\,(f_1 - f_2)}{\eta}, \qquad \alpha_1^{\text{new}} = \alpha_1 + y_1 y_2\,(\alpha_2 - \alpha_2^{\text{new}}),$$

where $\alpha_2^{\text{new}}$ is clipped to the feasible interval imposed by $0 \le \alpha_i \le C$ and the equality constraint. An approach proposed by Platt et al. [20] can be used for the update equations when $\eta \le 0$. Once the optimality condition $b_{low} \le b_{up} + 2\tau$ is satisfied, the threshold $b$ in the decision function is calculated as:

$$b = \frac{b_{up} + b_{low}}{2}.$$
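The clipping of $\alpha_2^{\text{new}}$ can be sketched as follows. This is a minimal Python illustration with hypothetical argument names, assuming $\eta > 0$:

```python
def optimize_pair(alpha1, alpha2, y1, y2, f1, f2, k11, k22, k12, C):
    """Analytic two-multiplier update with clipping to the feasible box [0, C]."""
    eta = k11 + k22 - 2.0 * k12
    a2_new = alpha2 + y2 * (f1 - f2) / eta

    # Feasible interval [L, H] for alpha2 from 0 <= alpha <= C and the equality constraint.
    if y1 == y2:
        L, H = max(0.0, alpha1 + alpha2 - C), min(C, alpha1 + alpha2)
    else:
        L, H = max(0.0, alpha2 - alpha1), min(C, C + alpha2 - alpha1)
    a2_new = min(max(a2_new, L), H)

    a1_new = alpha1 + y1 * y2 * (alpha2 - a2_new)
    return a1_new, a2_new
```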

Distributed Data Structures

There are several data structures required by Algorithms ? and ? which need to be distributed across different compute nodes. These data structures include the CSR arrays for the input dataset, $y$ for the sample labels, $\alpha$ for the Lagrange multipliers, and $f$ for the gradient values. As presented earlier, the computation of $f$ can be re-formulated as a series of kernel calculations. The individual kernel calculations may be stored in a kernel cache, which itself can be distributed among different processes.

However, there are several reasons for avoiding a kernel cache on large-scale systems. The space complexity of a complete kernel cache is $O(n^2)$, which is prohibitive for large inputs, a primary target of this paper. At the same time, the temporal/spatial reuse of individual rows of the kernel cache is low, as the selected indices $i_{up}$ and $i_{low}$ typically do not exhibit a temporal/spatial pattern. As architectural trends show, the available memory per compute unit (such as a core in a multi-core processor) is decreasing rapidly, and it is expected that simple compute units such as the Intel Xeon Phi will be commonplace [2], while graphics processors are already ubiquitous. At the same time, these compute units provide hardware support for wide vector instructions, such as fused multiply-add. As a result, the cost of recomputation is expected to be much lower than either caching the complete kernel matrix or conducting off-chip/off-node data movement to fetch individual rows of a kernel matrix distributed across multiple nodes. Hence, the proposed approaches in this paper avoid the kernel cache altogether.

Data Structure Organization

The organization of the distributed data structures plays a critical role in reducing the time and space complexity of Algorithm ?. Most datasets are sparse in nature, with several datasets having less than 20% density. Figure ? shows the percentage of memory space conserved when using a compressed sparse row (CSR) [11] representation. As shown in Figure ?, co-locating the algorithm-related data structures and making the column indices part of the representation changes the density very little, and the reduction in space complexity outweighs the additional bookkeeping for the boundaries of the various samples.

The core steps of the computation require several kernel calculations and frequent access to data structures such as the samples $\breve{x}$, $y$, $\alpha$, and $f$. Among these, the sample data is read-only, while the other data structures are read-write. Co-locating these data structures with the samples has significant potential for improving the cache-hit rate of the system by leveraging spatial locality. Although a few of these data structures are read-write, the write-back nature of the caches on modern systems makes co-location a better design choice than individual data structures distributed across the processes in the job. An additional advantage of co-locating these data structures with the samples is that load balancing among processes becomes feasible, since it requires contiguous movement of samples instead of several individual data structures.

For the proposed approaches, CSR is implemented using the Global Arrays [19] programming model. Global Arrays provides semantics for the collective creation of the compressed row structures, facilitating productive use of PGAS models for Algorithms ? and ?. GA provides Remote Memory Access semantics for traditional Ethernet-based interconnects and Remote Direct Memory Access (RDMA) for high-performance networks, making it effective for distributed systems such as cloud-based clusters and tightly-connected supercomputers.

Algorithm ? shows the pseudocode for the inner product calculation, the most frequently executed portion of our implementations. While the CSR representation is not conducive to hardware-based vector instructions, we leave the use of such vector units as future work. The primary objective in this paper is to minimize the space complexity by using a CSR representation, which is not used by other approaches such as [5]. The inner product is used as a constituent value in the kernel calculation (line ? in Algorithm ?) by using a simple linear algebra identity.
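A sketch of the CSR inner product and its use in the Gaussian kernel via the identity $\|x_i - x_j\|^2 = x_i \cdot x_i + x_j \cdot x_j - 2\, x_i \cdot x_j$ is shown below. This is a minimal illustration; the array names are hypothetical:

```python
import math

def csr_dot(idx_a, val_a, idx_b, val_b):
    """Inner product of two CSR rows given sorted column-index and value arrays."""
    i = j = 0
    total = 0.0
    while i < len(idx_a) and j < len(idx_b):
        if idx_a[i] == idx_b[j]:
            total += val_a[i] * val_b[j]
            i += 1
            j += 1
        elif idx_a[i] < idx_b[j]:
            i += 1
        else:
            j += 1
    return total

def rbf_from_csr(idx_a, val_a, idx_b, val_b, gamma):
    """K(a, b) = exp(-gamma * (a.a + b.b - 2*a.b)), reusing the sparse inner product."""
    sq_dist = csr_dot(idx_a, val_a, idx_a, val_a) \
            + csr_dot(idx_b, val_b, idx_b, val_b) \
            - 2.0 * csr_dot(idx_a, val_a, idx_b, val_b)
    return math.exp(-gamma * sq_dist)
```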

3.2 Parallel SVM Training Algorithms

This section lays out the parallel algorithms, with shrinking (Algorithm ?) and without shrinking (Algorithm ?). It also presents the reasoning behind shrinking and the conditions which must hold at the point of shrinking.

Parallel algorithm

Algorithm ? is a parallel no-shrinking variant of the sequential Algorithm ?, and is also referred to as Original in the rest of this paper. There are several steps in this algorithm. Each process receives the current working set pair from a designated root process using the MPI broadcast primitive, which is a scalable logarithmic operation in the number of processes. Each process then independently calculates the new $\alpha$ values corresponding to $i_{up}$ and $i_{low}$. This results in a time complexity of $O(\log p)$ for network communication and three kernel calculations (ignoring other integer-based computation).

The for-loop over all samples assigned to a process is the dominant cost of the calculation. Each iteration requires the calculation of $f_i$, which involves several kernel calculations, and the calculation and update of the global values of $\alpha_{i_{up}}$ or $\alpha_{i_{low}}$, if they are locally owned by the process. The GAput operation (line ?) updates only the indices which were updated during the calculation, reducing the overall communication cost. The computation cost of this step is proportional to the number of locally owned samples. For a sufficiently large $n$, the calculation and the communication cost to update the global copies of these values can be ignored. The last step of the algorithm is to obtain the global minimum and maximum of $f$, i.e., $b_{up}$ and $b_{low}$, respectively. This is designed using the MPI Allreduce operation, which has a time complexity of $O(\log p)$ (the bandwidth term can be ignored, since this step involves the communication of only two scalars).
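The global reduction step can be sketched with mpi4py as follows. This is a minimal illustration, assuming each rank has already computed local candidates `local_b_up` and `local_b_low` from its partition:

```python
from mpi4py import MPI

def global_violators(local_b_up, local_b_low, comm=MPI.COMM_WORLD):
    """O(log p) reduction of the per-process worst-violator candidates."""
    b_up = comm.allreduce(local_b_up, op=MPI.MIN)    # global minimum of f over I_up
    b_low = comm.allreduce(local_b_low, op=MPI.MAX)  # global maximum of f over I_low
    return b_up, b_low
```

The full algorithm also needs the indices attaining these extrema; MPI additionally provides MINLOC/MAXLOC reductions that return the location along with the value, which can be used to identify the owning process of $i_{up}$ and $i_{low}$.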

3.3 Shrinking Algorithms

Joachims [17] and Chang and Lin [6] have previously demonstrated the impact of adaptive elimination of samples, i.e., shrinking. This technique is a heuristic, since sufficient conditions to identify the samples to be eliminated are unknown [17]. For the eliminated samples, the Lagrange multipliers are kept fixed, and the samples are not considered during working set selection or the check for optimality. This results in a reduction of time complexity, since the gradient for eliminated samples is not computed. The primary intuition behind shrinking is that only a small subset of samples contributes towards the hyperplane definition ($|SV| \ll n$), and it is expected that even at an early stage of the optimization some of the bound samples have already stabilized [17]. The working set variables are chosen from the set of current KKT violators, and one or more samples outside this set can be eliminated without changing the current solution. Specifically, the decision rule in Section 2.2 presents a variant of the condition proposed previously by Lin et al. for shrinking. The overhead of determining which samples to shrink is small, since the computation only involves a few scalar conditions per active sample.

However, there are several problems with this assumption. It is possible that samples which were previously eliminated at a bound eventually stabilize to a value between $0$ and $C$. A premature elimination of these samples may result in an incorrect definition of the hyperplane. A conservative approach to deciding when to apply this condition may not be beneficial either, since much of the calculation would likely have completed by then. In essence, it is very difficult to predict the point at which to apply the shrinking condition. Lin et al. have proposed a fixed number of iterations as the point at which to perform shrinking; however, there is no intuitive reasoning behind the value at which shrinking should begin or be executed. A discussion of a spectrum of heuristics for shrinking is presented in the next section.
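The shrinking triggers evaluated in Section 4.3 can be sketched as a simple schedule check. The parameter names below are hypothetical; `interval` corresponds to the "random: k" entries and `fraction` to the "numsamples: x%" entries of the heuristics table:

```python
def should_shrink(iteration, n_samples, interval=None, fraction=None):
    """Decide whether to run the shrinking step at this iteration.

    interval: shrink every 'interval' iterations (aggressive for small values).
    fraction: shrink once the iteration count reaches fraction * n_samples (conservative).
    """
    if interval is not None:
        return iteration > 0 and iteration % interval == 0
    if fraction is not None:
        return iteration >= int(fraction * n_samples)
    return False
```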

3.4 Gradient Reconstruction

Gradient reconstruction is an important step in ensuring that the previously eliminated samples are not false positives and that they are on the correct side of the hyperplane in the final solution. Algorithm ? shows the key steps involved in updating the $f$ values during the gradient reconstruction step of Algorithm ?. Algorithm ? corresponds to shrinking with a single gradient reconstruction; an algorithm corresponding to multiple gradient reconstructions (refer to Table ?) can be derived from it, but due to lack of space it is not presented explicitly.

Algorithm ? finds the $f$ values of all the samples eliminated since the previous gradient reconstruction. To achieve this, it needs the current support vectors and their $\alpha$ values, which results in the communication of samples owned by each process; this communication cost may be non-negligible for distributed systems, hence it is necessary to consider heuristics which limit the execution of the gradient synchronization step. Also evident from the loop structure is the fact that the outer loop considers all eliminated samples of a given process and updates their gradient values. This is a computationally expensive operation, since line ? involves kernel calculations ($K(x_i, x_j)$ from Section 2.2), so this algorithm is called only when the global violators are within a specific threshold (e.g., lines ? and ? in Algorithm ?). Since $f$ plays an important role in both the updates and the working set selection (Section ?), we maintain it for all the active samples throughout the program execution.

Considering a less-noisy dataset, the number of support vectors is on average a small fraction of the training set. The expected computational cost of Algorithm ? for a single process is then proportional to its number of eliminated samples multiplied by the number of support vectors. This tradeoff makes reconstruction a potential bottleneck in achieving the overall speedup in convergence. As a result, we have considered single and multiple heuristics for $f$-reconstruction, as shown in Figure ?.
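A sketch of the reconstruction step for one process follows, under the assumption that the process holds a (hypothetical) list `shrunk_local` of its eliminated sample indices and a list `sv_global` of all samples with nonzero multipliers gathered from the other processes:

```python
def reconstruct_gradients(f, X, y, alpha, shrunk_local, sv_global, kernel):
    """Recompute f_i = sum_j alpha_j * y_j * K(x_j, x_i) - y_i for shrunk samples."""
    for i in shrunk_local:
        acc = 0.0
        for j in sv_global:          # only samples with alpha_j > 0 contribute
            acc += alpha[j] * y[j] * kernel(X[j], X[i])
        f[i] = acc - y[i]
    return f
```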

4 Empirical Evaluation

This section provides an empirical evaluation of the proposed approaches in the previous section. The empirical evaluation is conducted across multiple dimensions: datasets, number of processes, shrinking/no-shrinking, heuristics for selection of shrinking steps. The performance evaluation uses up to 512 processes (32 compute nodes), and several datasets use between 1 and 32 compute nodes. As a result, the proposed approaches can be used on multi-core machines such as a desktop, supercomputers or cloud computing systems. For each dataset, we compare our results with LIBSVM [6], version 3.17, with shrinking enabled.

The following sections provide a brief description of the datasets and the experimental testbed, followed by the empirical results. Due to accessibility limitations, the performance evaluation is conducted on a tightly-connected supercomputer, although the generality of our proposed solution makes it effective for cloud computing systems as well.

Dataset characteristics and hyperparameter settings
Name | Training Set Size | Testing Set Size | C | $\gamma$
MNIST | 60000 | 10000 | 10 | 25
Adult-7 (a7a) | 16100 | 16461 | 32 | 64
Adult-9 (a9a) | 32561 | 16281 | 32 | 64
USPS | 7291 | 2007 | 8 | 16
Mushrooms | 8124 | N/A | 8 | 64
Web (w7a) | 24692 | 25057 | 32 | 64
IJCNN | 49990 | 91701 | 0.5 | 1

4.1 Datasets

Table ? provides a description of the datasets used for the performance evaluation in this paper. The MNIST dataset (see footnote 1) represents images of handwritten digits. The dimensions are formed by flattening the 28x28 pixel box into a one-dimensional array of floating point values between 0 and 1, with 0 representing black and 1 representing white. The 10-class dataset is converted into a two-class one by representing even digits as class -1 and odd digits as class +1. The sparse binary Adult dataset represents collected census data for income prediction. The Web dataset is used to categorize web pages based on their text [20]. USPS is a collection of handwritten digits collected by the United States Postal Service. The Mushrooms dataset includes descriptions of hypothetical samples corresponding to 23 species of gilled mushrooms in the Agaricus and Lepiota families. The IJCNN dataset represents the first problem of the International Joint Conference on Neural Networks 2001 challenge. Hyperparameter settings for the datasets were selected after multi-fold cross-validation [6] and are shown in Table ?. The hyperparameter $C$ is described in Section 2.1, and $\gamma$ is the kernel width in the Gaussian kernel $K(x_i, x_j) = \exp(-\gamma \|x_i - x_j\|^2)$. It is straightforward to use other kernels in this work.

4.2 Experimental Testbed

All our experiments were run on the PNNL Institutional Computing (PIC) cluster (see footnote 2). The PIC cluster consists of 692 dual-socket nodes with 16-core-per-socket AMD Interlagos processors running at 2.1 GHz, and 64 GB of 1600 MHz memory per node (2 GB/core). The nodes are connected using an InfiniBand QDR interconnection network. The empirical evaluation consists of a mix of results on a single node (multi-core) and multiple nodes (distributed system). While the performance evaluation is on a tightly-connected system, most modern cloud providers support programming models such as MPI and Global Arrays, hence this solution is deployable on them as well.

4.3 Heuristics: An Overview

Table ? provides a list of the heuristics used for evaluation on the datasets presented in the previous section. Specific values for the aggressive, conservative, and average shrinking schedules are provided. To emulate the heuristics evaluated by Lin et al., we also compare several values of the random-interval sample elimination, as suggested in the table. A line such as "random: 2, Single" is to be interpreted as shrinking every 2 iterations (aggressive), with a single call to gradient reconstruction; optimization proceeds without shrinking after this call. Similarly, a line such as "numsamples: 50%, Multi" can be read as shrinking whenever the number of iterations reaches half the number of samples (conservative), with multiple calls to reconstruction as deemed fit, and optimization proceeding with shrinking throughout until convergence.

Heuristics: description and classification into aggressive, conservative, and average shrinking classes
Shrinking Type | $f$-Recon. | Name
None | N/A | Original
random: 2 | Single | Single2
random: 500 | Single | Single500
random: 1000 | Single | Single1000
numsamples: 5% | Single | Single5pc
numsamples: 10% | Single | Single10pc
numsamples: 50% | Single | Single50pc
random: 2 | Multi | Multi2
random: 500 | Multi | Multi500
random: 1000 | Multi | Multi1000
numsamples: 5% | Multi | Multi5pc
numsamples: 10% | Multi | Multi10pc
numsamples: 50% | Multi | Multi50pc
Default | Default | LIBSVM

Figure 1: Adult (a7a) Dataset Performance
Figure 2: Adult (a9a) Dataset Performance
Figure 3: USPS Dataset Performance
Figure 4: USPS Dataset Performance
Figure 5: Mushroom Dataset Performance
Figure 6: Mushroom Dataset Performance
Figure 7: Summary of Results (Testing Accuracy and Relative Speedups between our best performing heuristic, the Original Implementation, and LIBSVM)

4.4 Results and Analysis

Figures 1 and 2 show the results for the Adult-7 and Adult-9 datasets, respectively. A speedup of 2x is observed on the Adult-7 dataset using the Multi500 and Multi5pc heuristics in comparison to the Original algorithm, and 3-3.5x in comparison to LIBSVM. Among all the approaches, Multi2 has the highest time in reconstruction (referred to as Recon-Time in the figures), largely because it eliminates samples prematurely, while the other heuristics allow the $\alpha$ values to stabilize before elimination. It is worth noting that each of the Multi* heuristics is better than Single shrinking for these datasets. For each of the Adult datasets, a large fraction of the samples remain at their bounds, which is a suitable condition for shrinking. Since the proposed heuristics are precise, the accuracy and classification time for each of these datasets are similar, and only representative information is shown in Figure 7. It is also worth noting that the implementation of the Original algorithm is near optimal, as it scales well with an increasing number of processes.

The results for the USPS dataset, shown in Figures 3 and 4, demonstrate the efficacy of the highly aggressive Multi2 heuristic, with Multi5pc being the second best. These results validate our premise that $|SV|$ is typically small, and that a multiple-reconstruction 5% heuristic such as Multi5pc can eliminate a significant amount of computation, resulting in faster convergence. As discussed previously, the first reconstruction is executed when the global violators come within the specified threshold, while subsequent ones follow the chosen heuristic. However, with the Multi* heuristics, the number of times the gradient is reconstructed at the terminating condition can be predicted a priori. As shown in the USPS results, each of the Multi* shrinking heuristics, although spending significantly more time in gradient reconstruction, still reduces the overall execution time. For the USPS dataset, an overall speedup of 1.7x is observed in comparison to the Original implementation, and 5x in comparison to LIBSVM.

Figure ? shows the performance of the various approaches on the MNIST dataset using 256 and 512 processes, the equivalent of 16 and 32 compute nodes. There are several takeaway messages: the Original implementation scales well, providing about a 1.2x speedup, or 90% efficiency, and several Multi* heuristics perform very well, with little difference in execution time among them. For 256 processes, a speedup of more than 1.3x is achieved using the Multi1000 approach in comparison to the Original approach, and 26x over LIBSVM. The additional time spent in reconstruction is outweighed by the overall reduction in training time. Similar trends are observed with the w7a dataset shown in Figure ?, where a 2.3x speedup is observed on 32 processes and a 1.6x speedup on 64 processes, with up to 3.3x speedup in comparison to LIBSVM.

Figure 6 shows the performance on the Mushrooms dataset. In comparison to the other datasets, the Mushrooms dataset requires significantly more relative time for training due to the higher values of $C$ and $\gamma$; as a result, the reconstruction time is relatively small. Fewer than 5% of the training samples are support vectors. Here as well, Multi5pc provides near-optimal performance, resulting in a 3x speedup, while Multi2 is slightly better. Again, it is fair to conclude that Multi5pc is a good heuristic for extracting the benefits of shrinking. Up to 8.2x improvement is observed in comparison to LIBSVM.

Figure ? shows the performance on the IJCNN dataset. We use this as an example to indicate that shrinking is not beneficial for all datasets and hyperparameter settings. For several datasets, we have observed that higher hyperparameter values result in faster elimination of samples, potentially providing the benefits of shrinking. This opens up a new avenue for research in which shrinking is integrated into the cross-validation step to obtain parameters suitable for both shrinking and good generalization. As shown in the figure, the Original implementation is the best, while each of the other approaches results in significant degradation due to shrinking. However, in comparison to LIBSVM, a speedup of up to 9x is observed with the Original implementation.

5 Related Work

We discuss SVM training algorithms in the literature under two major branches of study: sample selection methods and parallel algorithms.

5.1 Sample Selection

Multiple researchers have proposed algorithms for the selection of samples, which can be used for faster convergence. Active set methods solve the dual optimization problem by considering a part of the dataset in a given iteration until global convergence [20]. The primary approach is to decompose large Quadratic Programming tasks into small ones. Other approaches include reformulations of the optimization problem which do not require decomposition [31]. The seminal SMO [20] and SVMlight [17] are sequential active set methods; SVM-GPU [5] and PSMO [4] are examples of parallel decomposition methods, whereas Woodsend et al. [29] provide an example of a parallel non-decomposition solution. A primary problem with working set methods is their difficulty in addressing noisy, non-separable datasets [30]. However, their simplicity, ease of implementation, and strong convergence properties make them an attractive choice for solving large-scale classification problems. Other researchers have considered different sizes of the working set [8].

5.2 Parallel Algorithms

With the advent of multi-core systems and cluster computing, several parallel and distributed algorithms have been proposed in the literature. This section provides a brief overview of these algorithms.

Architecture-specific solutions such as GPUs [5] have been proposed, and other approaches require a special cluster setup [14]. Graf et al. have proposed Cascade SVM [14], which provides a parallel solution to the dual optimization problem. The primary approach is to divide the original problem into completely independent sub-problems, and recursively combine the independent solutions to obtain the final set of support vectors. However, this approach suffers from load imbalance, since many processes may finish their individual sub-problems before others. As a result, this approach does not scale well to very large process counts, a primary target of our approach.

The advent of SIMD architectures such as GPUs has resulted in research on Support Vector Machines for GPUs [5]. Under this approach, a thread is created for each data point in the training set and the MapReduce paradigm is used for the compute-intensive steps. The approach proposed in this paper, in contrast, is suitable for large-scale systems and is not restricted to GPUs.

Several researchers have proposed alternative mechanisms for solving the QP problem. An example of a variable projection method is proposed by Zanghirati and Zanni [32]; they use an iterative solver for QP problems leveraging the decomposition strategy of SVMlight [17]. Chang et al. [7] have also considered active set sizes larger than two and solve the problem using Incomplete Cholesky Factorization and an Interior Point Method (IPM). Woodsend et al. [29] have proposed the parallelization of linear SVMs using IPM and a combination of MPI and OpenMP; however, their approach is not an active set method, as it does not decompose a large problem into smaller ones. There are approaches such as [22] that solve the primal problem for linear SVMs on very large problems, but the primary objective of this paper is to scale the most popular two-element working set methods due to their ubiquity.

As is evident from the literature above, none of the previously proposed approaches uses adaptive elimination of samples on large-scale systems, which has significant potential for reducing the execution time on several datasets.

6 Conclusions and Future Work

This paper has addressed the limitations of previously proposed approaches and provided a novel parallel Support Vector Machine training algorithm with adaptive shrinking. It explored various design aspects of the algorithm and the associated implementation, such as space complexity reduction using sparse data structures, intuitive heuristics for adaptive shrinking of samples, and adaptive reconstruction of the key data structures. We have used state-of-the-art programming models such as the Message Passing Interface (MPI) [15] and Global Arrays [26] for the design of communication and data storage in the implementations. The empirical evaluation has demonstrated the efficacy of the proposed algorithm and heuristics.

Future work involves shrinking with second-order heuristics for working set selection, deeper evaluation of the proposed heuristics, and consideration of other algorithms and working set sizes for faster elimination of samples. It will also be interesting to study shrinking on different architectures such as GPUs. Though the proposed approach eliminates the kernel cache completely, it is possible to use the deep memory hierarchy to keep active portions of the kernel cache. Future work will also involve optimizations for upcoming architectures such as the Intel MIC architecture and the AMD Fusion APU architecture.

Footnotes

  1. deeplearning.net/data
  2. pic.pnnl.gov/resources.stm

References

  1. Machine learning based load-balancing for the cesm climate modeling package.
    P. Balaprakash, Y. Alexeev, S. A. Mickelson, S. Leyffer, R. L. Jacob, and A. P. Craig. 2013.
  2. Exascale computing study: Technology challenges in achieving exascale systems.
    P. Kogge (editor and study lead), K. Bergman, S. Borkar, D. Campbell, W. Carlson, W. Dally, M. Denneau, P. Franzon, W. Harrod, J. Hiller, S. Karp, S. Keckler, D. Klein, R. Lucas, M. Richards, A. Scarpelli, S. Scott, A. Snavely, T. Sterling, R. S. Williams, and K. Yelick. 2008.
  3. A tutorial on support vector machines for pattern recognition.
    C. J. C. Burges. Data Min. Knowl. Discov., 2:121–167, June 1998.
  4. Parallel sequential minimal optimization for the training of support vector machines.
    L. J. Cao, S. S. Keerthi, C.-J. Ong, J. Q. Zhang, U. Periyathamby, X. J. Fu, and H. P. Lee. IEEE Transactions on Neural Networks, 17(4):1039–1049, July 2006.
  5. Fast support vector machine training and classification on graphics processors.
    B. Catanzaro, N. Sundaram, and K. Keutzer. In Proceedings of the 25th international conference on Machine Learning, ICML ’08, pages 104–111. ACM, 2008.
  6. LIBSVM: A library for support vector machines.
    C.-C. Chang and C.-J. Lin. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27, 2011.
  7. Psvm: Parallelizing support vector machines on distributed computers.
    E. Y. Chang, K. Zhu, H. Wang, H. Bai, J. Li, Z. Qiu, and H. Cui. In NIPS, 2007.
  8. A GPU-tailored approach for training kernelized SVMs.
    A. Cotter, N. Srebro, and J. Keshet. In Proceedings of the 17th ACM SIGKDD international conference on knowledge discovery and data mining, KDD ’11, pages 805–813, 2011.
  9. An introduction to support vector machines: and other kernel-based learning methods.
    N. Cristianini and J. Shawe-Taylor. Cambridge University Press, 2000.
  10. Synergistic Challenges in Data-Intensive Science and Exascale Computing, 2013.
    DOE ASCAC Subcommittee.
  11. Sparse matrix storage formats.
    J. Dongarra. In Z. Bai, J. Demmel, J. Dongarra, A. Ruhe, and H. van der Vorst, editors, Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide. SIAM, Philadelphia, 2000.
  12. Practical methods of optimization; (2nd ed.).
    R. Fletcher. Wiley-Interscience, New York, NY, USA, 1987.
  13. MPI-2: Extending the message-passing interface.
    A. Geist, W. Gropp, S. Huss-Lederman, A. Lumsdaine, E. L. Lusk, W. Saphir, T. Skjellum, and M. Snir. In Euro-Par, Vol. I, pages 128–135, 1996.
  14. Parallel support vector machines: The cascade svm.
    H. P. Graf, E. Cosatto, L. Bottou, I. Durdanovic, and V. Vapnik. In Advances in Neural Information Processing Systems, pages 521–528. MIT Press, 2005.
  15. A High-Performance, Portable Implementation of the MPI Message Passing Interface Standard.
    W. Gropp, E. Lusk, N. Doss, and A. Skjellum. Parallel Computing, 22(6):789–828, 1996.
  16. A dual coordinate descent method for large-scale linear svm.
    C.-J. Hsieh, K.-W. Chang, C.-J. Lin, S. S. Keerthi, and S. Sundararajan. In Proceedings of the 25th International Conference on Machine Learning, ICML ’08, pages 408–415, New York, NY, USA, 2008. ACM.
  17. Making large-scale support vector machine learning practical.
    T. Joachims. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in kernel methods, pages 169–184. MIT Press, 1999.
  18. Improvements to platt’s smo algorithm for svm classifier design.
    S. S. Keerthi, S. K. Shevade, C. Bhattacharyya, and K.R.K.Murthy. Neural Computation, 13(3):637–649, 2001.
  19. Global Arrays: A Nonuniform Memory Access Programming Model for High-Performance Computers.
    J. Nieplocha, R. J. Harrison, and R. J. Littlefield. Journal of Supercomputing, 10(2):169–189, 1996.
  20. Fast training of support vector machines using sequential minimal optimization.
    J. C. Platt. In Advances in kernel methods: support vector learning, pages 185–208, Cambridge, MA, USA, 1999. MIT Press.
  21. Scientific Discovery at the Exascale, 2011.
    Report from the DOE ASCR 2011 Workshop on Exascale Data Management, Analysis, and Visualization.
  22. Pegasos: Primal estimated sub-gradient solver for svm.
    S. Shalev-Shwartz, Y. Singer, and N. Srebro. In Proceedings of the 24th International Conference on Machine Learning, ICML ’07, pages 807–814, New York, NY, USA, 2007. ACM.
  23. STOMP.
    Subsurface Transport over Multiple Phases. http://stomp.pnl.gov/.
  24. Machine learning and its applications to biology.
    A. L. Tarca, V. J. Carey, X.-w. Chen, R. Romero, and S. Drăghici. PLoS Comput Biol, 3(6):e116, 06 2007.
  25. Nwchem: A comprehensive and scalable open-source solution for large scale molecular simulations.
    M. Valiev, E. Bylaska, N. Govind, K. Kowalski, T. Straatsma, H. V. Dam, D. Wang, J. Nieplocha, E. Apra, T. Windus, and W. de Jong. Computer Physics Communications, 181(9):1477 – 1489, 2010.
  26. Scalable PGAS Communication Subsystem on Cray Gemini Interconnect.
    A. Vishnu, J. Daily, and B. Palmer. Pune, India, 2012. HiPC.
  27. Designing scalable pgas communication subsystems on blue gene/q.
    A. Vishnu, D. J. Kerbyson, K. Barker, and H. J. J. V. Dam. Boston, 2013. 3rd Workshop on Communication Architecture for Scalable Systems.
  28. Support vector machines in high-energy physics.
    A. Vossen. 2008.
  29. Hybrid mpi/openmp parallel linear support vector machine training.
    K. Woodsend and J. Gondzio. J. Mach. Learn. Res., 10:1937–1953, Dec. 2009.
  30. A parallel training algorithm for large scale support vector machines.
    E. Yom-tov. Neural Information Processing Systems Workshop on Large Scale Kernel Machines, 2004.
  31. RSVM: Reduced support vector machines.
    L. Yuh-jye and O. L. Mangasarian. Technical Report 00–07, Data Mining Institute, Computer Sciences Department, University of Wisconsin, 2001.
  32. A parallel solver for large quadratic programs in training support vector machines.
    G. Zanghirati and L. Zanni. Parallel Computing, 29(4):535–551, 2003.