On Linear Learning with Manycore Processors

Eliza Wszola, ETH Zurich, Zurich, Switzerland
Celestine Mendler-Dünner, IBM Research, Zurich, Switzerland
Martin Jaggi, EPFL, Lausanne, Switzerland
Markus Püschel, ETH Zurich, Zurich, Switzerland

A new generation of manycore processors is on the rise, offering dozens of cores or more on a chip and, in a sense, fusing host processor and accelerator. In this paper we target the efficient training of generalized linear models on these machines. We propose a novel approach for achieving parallelism which we call Heterogeneous Tasks on Homogeneous Cores (HTHC). It divides the problem into multiple fundamentally different tasks, which themselves are parallelized. For evaluation, we design a detailed, architecture-cognizant implementation of our scheme on a recent 72-core Knights Landing processor that is adaptive to the cache, memory, and core structure. Our library efficiently supports dense and sparse datasets as well as 4-bit quantized data for further possible gains in performance. We show benchmarks for Lasso and SVM with different data sets against straightforward parallel implementations and prior software. In particular, for Lasso on dense data, we improve the state-of-the-art by an order of magnitude.

Manycore, performance, machine learning, coordinate descent, GLM, SVM, Lasso

I Introduction

The evolution of mainstream computing systems has moved from the multicore to the manycore era: a few dozen to even hundreds of cores are provided on a single chip, packaged with up to hundreds of gigabytes of memory at high bandwidth. Examples include Intel Xeon Phi (up to 72 cores), ARM ThunderX2 (64 cores), Qualcomm Centriq 2400 (48 cores), and of course GPUs (hundreds of cores). One declared target of the recent generation of manycores is machine learning. While much work has been devoted to efficient learning and inference of neural nets on GPUs, e.g., [1, 2], other domains of machine learning on manycores have received less attention.

One exciting trend in manycore computing is the move from accelerators (like GPUs) to standalone manycore processors. These remove the burden of writing two types of code and enable easier integration with applications and legacy code. However, efficiently mapping the required mathematics to manycores is a difficult task: compilers have inherent limitations in doing so given straightforward C (or worse, Java, Python, etc.) code, a problem already known from the earlier, simpler multicore and single-core systems [3]. Challenges include vector instruction sets, deep cache hierarchies, non-uniform memory architectures, and efficient parallelization.

The challenge we address in this paper is how to map machine learning workloads to manycore processors. We focus on recent standalone manycores and the important task of training generalized linear models used for regression, classification, and feature selection. Our core contribution is to show that, in contrast to prior approaches which assign the same kind of subtask to each core, we can often achieve significantly better overall performance and adaptivity to the system resources by distinguishing between two fundamentally different tasks. One subset of the cores is assigned a task that only reads the model parameters, while the other subset performs a task that updates them. So in the manycore setting, while the cores are homogeneous, assigning them heterogeneous tasks results in improved performance and better use of compute, memory, and cache resources. The adaptivity of our approach is particularly crucial: the number and assignment of threads can be adapted to the computing platform and the problem at hand.

We make the following contributions:

  1. We describe a novel scheme, consisting of two heterogeneous tasks, to train generalized linear models on homogeneous manycore systems. We call it Heterogeneous Tasks on Homogeneous Cores (HTHC).

  2. We provide a complete, performance-optimized implementation of HTHC on a 72-core Intel Xeon Phi processor. Our library supports both sparse and dense data as well as data quantized to 4 bits for further possible gains in performance. Our code will be made publicly available.

  3. We present a model for choosing the best distribution of threads for each task with respect to the machine’s memory system. We demonstrate that with explicit control over parallelism our approach provides an order of magnitude speedup over a straightforward OpenMP implementation.

  4. We show benchmarks for Lasso and SVM with different data sets against straightforward parallel implementations and prior software. In particular, for Lasso on dense data, we improve the state-of-the-art by an order of magnitude.

II Problem Statement & Background

This section details the considered problem class, provides necessary background on coordinate selection and introduces our target platform: the Intel Knights Landing (KNL) manycore processor.

II-A Problem specification

We focus on the training of generalized linear models (GLMs), which can be expressed as the following optimization task:

\min_{\alpha \in \mathbb{R}^n} \; f(A\alpha) + \sum_{i=1}^{n} g_i(\alpha_i)   (1)

where $f$ and the $g_i$ are convex functions, and $\alpha \in \mathbb{R}^n$ is the model to be learned from the training data matrix $A \in \mathbb{R}^{d \times n}$ with columns $a_1, \dots, a_n$. In addition, $f$ is assumed to be smooth. This general setup covers many widely applied machine learning models including logistic regression, support vector machines (SVM), and sparse models such as Lasso and elastic-net.

The models of this form have two important characteristics. First, since the term $\sum_i g_i(\alpha_i)$ is separable, it is possible to perform updates on different columns of the data matrix independently. In particular, we can use stochastic coordinate descent (SCD) to process the data set coordinate-wise. This approach can be extended to batches. Second, importance measures for individual coordinates are available. Such measures depend on the dataset, the current model parameters, or both. They can be used to select important coordinates during coordinate descent, speeding up overall convergence, either by deriving sampling probabilities (importance sampling) or by simply picking the parameters with the highest importance score (greedy approach).
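As an illustration of these two characteristics, the following single-threaded sketch (our own, not the paper's library code) runs one epoch of greedy coordinate descent for least squares, scoring every coordinate independently by the magnitude of its partial derivative and updating the highest-scoring one:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative greedy SCD epoch for 1/2||A*alpha - b||^2, where r = b - A*alpha
// is the residual. A is stored column-major: column i is A[i*d .. i*d+d-1].
// The per-coordinate importance score is |a_i^T r|; scoring each coordinate is
// independent of the others and therefore parallelizable.
double greedy_cd_epoch(const std::vector<double>& A, std::vector<double>& r,
                       std::vector<double>& alpha, std::size_t d) {
    std::size_t n = alpha.size();
    for (std::size_t it = 0; it < n; ++it) {
        // Score every coordinate independently.
        std::size_t best = 0;
        double best_score = -1.0;
        for (std::size_t i = 0; i < n; ++i) {
            double g = 0.0;
            for (std::size_t k = 0; k < d; ++k) g += A[i * d + k] * r[k];
            if (std::fabs(g) > best_score) { best_score = std::fabs(g); best = i; }
        }
        // Exact coordinate update: delta = a_i^T r / ||a_i||^2.
        double g = 0.0, nrm = 0.0;
        for (std::size_t k = 0; k < d; ++k) {
            g += A[best * d + k] * r[k];
            nrm += A[best * d + k] * A[best * d + k];
        }
        double delta = g / nrm;
        alpha[best] += delta;
        for (std::size_t k = 0; k < d; ++k) r[k] -= delta * A[best * d + k];
    }
    double sq = 0.0;
    for (double x : r) sq += x * x;
    return 0.5 * sq;  // remaining objective value
}
```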

II-B Duality-gap based coordinate selection

A particular measure of coordinate-wise importance that we adopt in our method is the coordinate-wise duality gap certificate proposed by Dünner et al. [4]. The authors have shown that choosing the model parameters to update based on their contribution to the duality gap provides faster convergence than random selection and classical importance sampling [5].

Let $g_i^*$ denote the convex conjugate of $g_i$. Then, the duality gap (see [6]) of our objective (1) can be determined as

G(\alpha) = \sum_{i=1}^{n} G_i(\alpha_i), \qquad G_i(\alpha_i) = g_i(\alpha_i) + g_i^*(-a_i^\top w) + \alpha_i \, a_i^\top w   (2)

where the model vectors $\alpha$ and $w$ are related through the primal-dual mapping $w = \nabla f(A\alpha)$. Importantly, knowing the parameters $\alpha$ and $w$, it is possible to calculate the duality gap values (2) for every coordinate $i$ independently and thus in parallel. In our implementation, we introduce the shared vector $v = A\alpha$, from which $w$ can be computed using a simple linear transformation for many problems of interest.
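For concreteness, here is a minimal sketch of this per-coordinate gap computation, instantiated for ridge regression, i.e., $f(v) = \frac{1}{2}\|v-b\|^2$ and $g_i(\alpha) = \frac{\lambda}{2}\alpha^2$, so that $g_i^*(u) = u^2/(2\lambda)$ and $w = v - b$. The function name is ours; the point is that each gap needs only one inner product and can be computed independently:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Per-coordinate duality gaps (2) for ridge regression. A is column-major
// (column i is A[i*d .. i*d+d-1]); v = A*alpha is the shared vector, and
// w = grad f(v) = v - b is a simple linear transformation of v.
std::vector<double> coordinate_gaps(const std::vector<double>& A,
                                    const std::vector<double>& v,
                                    const std::vector<double>& b,
                                    const std::vector<double>& alpha,
                                    std::size_t d, double lambda) {
    std::vector<double> w(d);
    for (std::size_t k = 0; k < d; ++k) w[k] = v[k] - b[k];
    std::vector<double> gaps(alpha.size());
    for (std::size_t i = 0; i < alpha.size(); ++i) {  // embarrassingly parallel
        double aw = 0.0;
        for (std::size_t k = 0; k < d; ++k) aw += A[i * d + k] * w[k];
        // G_i = g_i(alpha_i) + g_i^*(-a_i^T w) + alpha_i * a_i^T w
        gaps[i] = 0.5 * lambda * alpha[i] * alpha[i]
                + aw * aw / (2.0 * lambda)
                + alpha[i] * aw;
    }
    return gaps;
}
```

Each $G_i$ is nonnegative and vanishes only at the coordinate-wise optimality condition, which is what makes it usable as an importance score.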

II-C The Knights Landing architecture

Intel Knights Landing (KNL) is a manycore processor architecture used in the second generation of Intel Xeon Phi devices, the first host processors, i.e., not external accelerators, offered in this line. It provides both high performance (with machine learning as one declared target) and x86 backwards compatibility. A KNL processor consists of 64–72 cores with low base frequency (1.3–1.5 GHz). KNL offers AVX-512, a vector instruction set for 512-bit data words, which allows parallel computation on 16 single or 8 double precision floats. It also supports vector FMA (fused multiply-add) instructions (e.g., $d = a \cdot b + c$) for further fine-grained parallelism. Each core can issue two such instructions per cycle, which yields a theoretical single precision peak performance of 64 floating point operations (flops) per cycle. Additionally, AVX-512 introduces gather-scatter intrinsics facilitating computations on sparse data formats. The KNL cores are joined in pairs called tiles, located on a 2D mesh. Each core has its own 32 KB L1 cache and each tile has a 1 MB L2 cache. The latter supports two reads and one write every two cycles; this bandwidth is shared between the two cores. Each core can host up to four hardware threads. KNL comes with two types of memory: up to 384 GB of DRAM (6 channels with an aggregate bandwidth of 80 GB/s as measured with the STREAM benchmark [7]) and 16 GB of high-bandwidth MCDRAM (8 channels and up to 440 GB/s, respectively). The MCDRAM is configurable to work in one of three modes: 1) cache mode, in which it is used as L3 cache; 2) flat mode, in which it serves as a scratchpad, i.e., a software-controlled memory (in this mode, there is no L3 cache); 3) hybrid mode, in which part works in cache mode and part in flat mode. In this paper, we use a KNL with 72 cores, 1.5 GHz base frequency, and 192 GB of DRAM, in flat mode. The flat mode allows us to clearly separate the memory needed by the subtasks characterized in the next section.

III Method Description

Our scheme uses block asynchronous coordinate descent with duality-gap based selection as described in Section II-B. The workflow, as illustrated in Figure 1, can be described as two tasks, A and B, running in parallel. Task A is responsible for computing duality gap values based on the current model $\alpha$ and the shared vector $v$. These values are stored in a vector $z$ which we call the gap memory. In parallel to task A, task B performs updates on a subset of the coordinates, which are selected based on their importance measure. For computing the updates we opt to use parallel asynchronous SCD. Since task B operates only on batches of the data, it is typically faster than task A. Therefore, it is very likely that A is not able to update all gaps during a single execution of task B, and some entries of the gap memory become stale as the algorithm proceeds. In practice, the algorithm works in epochs. In each epoch, B updates the batch of selected coordinates, where each coordinate is processed exactly once. At the same time, A randomly samples coordinates, computes their gaps with the most recent (i.e., obtained in the previous epoch) parameters, and updates the respective entries of the gap memory. As soon as the work of B is completed, it returns the updated $\alpha$ and $v$. Task A then pauses its execution to select a new subset of coordinates to send to B, based on the current state of the gap memory $z$. The robustness to staleness in the duality gap based coordinate selection scheme has been empirically shown in [4].
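The epoch-boundary handoff from the gap-computation task to the update task, selecting the batch with the largest (possibly stale) gap-memory entries, can be sketched as follows (illustrative; std::partial_sort stands in for whatever selection the actual library uses):

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

// Select the m coordinates with the largest entries in the gap memory z.
// This is the greedy handoff performed at an epoch boundary; staleness of
// some z entries is tolerated by the scheme.
std::vector<std::size_t> select_batch(const std::vector<double>& z, std::size_t m) {
    std::vector<std::size_t> idx(z.size());
    std::iota(idx.begin(), idx.end(), 0);
    std::partial_sort(idx.begin(), idx.begin() + m, idx.end(),
                      [&z](std::size_t a, std::size_t b) { return z[a] > z[b]; });
    idx.resize(m);
    return idx;
}
```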

Fig. 1: Visualization of the HTHC approach.

III-A Implementation challenges

We identify the most computationally expensive parts of the proposed scheme. Task A computes the coordinate-wise duality gaps (2), which requires an inner product between $w$, computed from $v$, and the respective data column $a_i$:

G_i(\alpha_i) = h_i(a_i^\top w)   (3)

where $h_i$ is a scalar function defined by the learning model with negligible evaluation cost.

Task B performs coordinate descent on the selected subset of the data. Thus, in each iteration of the algorithm, a coordinate descent update on one entry of $\alpha$ is performed, i.e., $\alpha_i \leftarrow \alpha_i + \delta$. Again this update takes the form

\delta = \tilde{h}_i(a_i^\top w)   (4)

where $\tilde{h}_i$ is a scalar function. The optimal coordinate update has a closed-form solution [8, 9] for many applications of interest, and otherwise allows a simple gradient step restricted to coordinate $i$. With every update on $\alpha_i$ we also update $v$ accordingly, $v \leftarrow v + \delta a_i$, to keep these vectors consistent. The asynchronous implementation of SCD introduces two additional challenges: First, staleness of the model information used to compute updates might slow down convergence or even lead to divergence for a large number of parallel updates. Second, writing to shared data requires synchronization and generates write-contention, which needs to be handled by appropriate locking.
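For Lasso, for example, the closed-form coordinate update is soft-thresholding. A minimal single-threaded sketch of one update, including the maintenance of $v$ (our own illustration, assuming unit-norm columns for brevity):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Soft-threshold operator S(x, t) used by the Lasso coordinate update.
double soft_threshold(double x, double t) {
    if (x > t) return x - t;
    if (x < -t) return x + t;
    return 0.0;
}

// One closed-form Lasso coordinate update for f(v) = 1/2||v - b||^2 and
// g_i = lambda*|alpha_i|, with ||a_i|| = 1:
//   alpha_i <- S(alpha_i - a_i^T(v - b), lambda).
// The shared vector is then incremented by delta * a_i to keep v = A*alpha.
double lasso_update(const std::vector<double>& a_i, std::vector<double>& v,
                    const std::vector<double>& b, double& alpha_i, double lambda) {
    double grad = 0.0;
    for (std::size_t k = 0; k < v.size(); ++k) grad += a_i[k] * (v[k] - b[k]);
    double new_alpha = soft_threshold(alpha_i - grad, lambda);
    double delta = new_alpha - alpha_i;
    alpha_i = new_alpha;
    for (std::size_t k = 0; k < v.size(); ++k) v[k] += delta * a_i[k];
    return delta;
}
```

In the asynchronous setting, the increment of $v$ in the last loop is exactly the write that requires the locking discussed below.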

IV Implementation on Manycore Processors

The main contribution of this paper is to show that a scheme for learning GLMs based on multiple heterogeneous tasks is an efficient solution for implementation on a standalone, state-of-the-art manycore system such as KNL. As we will see, our approach is typically over an order of magnitude faster than simple C++ code with basic OpenMP directives. Due to the need for careful synchronization, locking, and separation of resources, a straightforward implementation is not efficient in the manycore setting: a detailed calibration to the hardware resources and an associated implementation with detailed thread control is key. In the following we detail the challenges and key features for achieving an efficient implementation.

IV-A Parallelization of the workload

Our implementation uses four levels of parallelism: 1) Tasks A and B are executed in parallel. 2) A performs its gap updates in parallel and B performs parallel coordinate updates. 3) B additionally uses multiple threads for each vector operation. 4) The main computations are vectorized using AVX-512.

IV-A1 Allocation of resources to the tasks

To map HTHC onto the KNL we divide compute and memory resources between the two tasks A and B. We divide the compute resources by assigning separate sets of cores (in fact tiles, for better data sharing) to each task. The respective number of cores is a parameter that makes our implementation adaptive to the problem and target platform. We use the threading library pthreads for fine-grained control over thread affinity, synchronization, and locking over critical regions of the code. For more details we refer to Section IV-F. To split the memory resources between the two tasks we use the KNL in flat mode, where the MCDRAM memory serves as a scratchpad. This setting is particularly suited for our scheme because we can allocate the data for A to DRAM and the data for B to MCDRAM. As a consequence, saturating the memory bandwidth by one task will not stall the other.
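The placement itself relies on standard OS facilities. For instance, pinning a thread to a fixed core, so that the two thread pools occupy disjoint sets of tiles, can be done as below (a Linux-specific sketch with a helper name of our choosing; the MCDRAM placement would additionally use the memkind allocator, not shown):

```cpp
#include <pthread.h>
#include <sched.h>

// Pin the calling thread to a single core; returns false on failure.
// The two task pools would call this with disjoint core sets, so that
// saturating one task's cores does not steal cycles from the other.
bool pin_to_core(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(cpu_set_t), &set) == 0;
}
```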

IV-A2 Parallelization of the individual tasks

For task A, we use only one thread for every update of a single gap entry, due to the high risk of deadlocks when the computations on B are finished and A receives a signal to stop. The number of threads used for A is a parameter used for adaptation.

In contrast to A, task B both performs multiple coordinate updates in parallel and parallelizes the inner product computation of each update across threads. Thus, the total number of threads used by B is the product of the number of parallel updates and the number of threads per update; both are parameters in our implementation. When multiple threads are used per update, $v$ and the corresponding columns $a_i$ are split into equal chunks, one per thread.

A simple model can be used to determine a good choice for the number of threads per update, as explained next. The performance of both the inner product and the update is limited by the memory bandwidth. For this reason, it is desirable that $v$, which is reused, stays in cache. To achieve this, the cache has to hold $v$ and two columns $a_i$. Since $v$ and the $a_i$ have the same length, this means the chunk size should be about a third of the cache size, i.e., about 87,000 single precision numbers for the 1 MB L2 caches in KNL. Optimizing with the same reasoning for the 32 KB L1 cache would yield a length below 4096 elements; such short vectors would not benefit from parallelism due to issues discussed later. Thus, we do not consider this setup applicable to the L1 caches. The best choice for the number of parallel updates is influenced by several factors, as will be discussed in Section IV-F.
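The arithmetic behind the figure of about 87,000 elements, spelled out:

```cpp
#include <cstddef>

// One 1 MB L2 cache must hold three equally long single-precision buffers:
// the chunk of v plus the chunks of two data columns a_i.
constexpr std::size_t kL2Bytes = 1024 * 1024;
constexpr std::size_t kL2ChunkElems = kL2Bytes / 3 / sizeof(float);    // 87381

// The same reasoning for the 32 KB L1 gives only ~2730 elements per buffer,
// well below 4096 -- too short to parallelize profitably.
constexpr std::size_t kL1ChunkElems = 32 * 1024 / 3 / sizeof(float);   // 2730
```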

IV-A3 Vectorization with AVX-512

We implemented both the scalar product (executed by both tasks) and the incrementation of $v$ (performed by task B) using AVX-512 FMA intrinsics with multiple accumulators for better instruction-level parallelism. The peak single core performance of KNL is 64 flops/cycle, but in the scalar product, each FMA requires two loads from L2 cache, reducing the peak to 16. In practice, our entire coordinate update achieves about 7.2 flops/cycle, about three times faster than without AVX.
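The AVX-512 kernels themselves are not reproduced here, but the multiple-accumulator structure they rely on can be shown in scalar C++ (a sketch; the real code uses 512-bit FMA intrinsics in place of the scalar multiply-adds):

```cpp
#include <cstddef>

// Scalar product with four independent accumulators. Each accumulator forms
// its own dependency chain, so several (vector) FMAs can be in flight at
// once, hiding FMA latency -- the same trick the AVX-512 kernels use.
double dot_multi_acc(const double* x, const double* y, std::size_t n) {
    double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    std::size_t k = 0;
    for (; k + 4 <= n; k += 4) {
        s0 += x[k] * y[k];
        s1 += x[k + 1] * y[k + 1];
        s2 += x[k + 2] * y[k + 2];
        s3 += x[k + 3] * y[k + 3];
    }
    for (; k < n; ++k) s0 += x[k] * y[k];  // scalar tail
    return (s0 + s1) + (s2 + s3);
}
```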

IV-B Synchronization

Task A does not write to shared variables and thus requires no synchronization between its threads. In contrast, the updates on task B are performed with multiple threads per vector as explained above. For the updates in (4), three barriers are required to separate the resetting of the shared result, the scalar product, and the computation of $\delta$ based on the new shared result.

For the implementation we use pthreads which provides synchronization mechanisms with mutexes and thread barriers. Since barriers are relatively expensive, we replace them with a mechanism based on integer counters protected by mutexes similar to [10].
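One possible shape of such a counter-based barrier (an illustrative sketch using a mutex-protected counter with a generation flag and a condition variable, not the library's exact mechanism), together with a small demo harness:

```cpp
#include <pthread.h>
#include <atomic>
#include <thread>
#include <vector>

// Reusable barrier built from an integer counter protected by a mutex.
// The last arriving thread flips the generation and wakes the others.
struct CounterBarrier {
    pthread_mutex_t m;
    pthread_cond_t cv;
    int count, generation, nthreads;
    explicit CounterBarrier(int n) : count(0), generation(0), nthreads(n) {
        pthread_mutex_init(&m, nullptr);
        pthread_cond_init(&cv, nullptr);
    }
    ~CounterBarrier() {
        pthread_mutex_destroy(&m);
        pthread_cond_destroy(&cv);
    }
    void wait() {
        pthread_mutex_lock(&m);
        int gen = generation;
        if (++count == nthreads) {
            count = 0;
            ++generation;
            pthread_cond_broadcast(&cv);
        } else {
            while (gen == generation) pthread_cond_wait(&cv, &m);
        }
        pthread_mutex_unlock(&m);
    }
};

// Demo: every thread increments a counter before the barrier; after the
// barrier, all threads must observe the full count. Returns how many did.
int barrier_demo(int nthreads) {
    CounterBarrier bar(nthreads);
    std::atomic<int> arrived{0}, saw_all{0};
    std::vector<std::thread> ts;
    for (int i = 0; i < nthreads; ++i)
        ts.emplace_back([&] {
            arrived.fetch_add(1);
            bar.wait();  // no thread passes before all have arrived
            if (arrived.load() == nthreads) saw_all.fetch_add(1);
        });
    for (auto& t : ts) t.join();
    return saw_all.load();
}
```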

In addition to synchronization per thread, we need to coordinate running and stopping the tasks at the beginning and the end of each epoch (see Fig. 1). To avoid the overhead of creating and destroying threads, we use a thread pool with a constant number of threads for A and B. To synchronize, we use another counter-based barrier scheme similar to the one described above.

IV-C Atomic operations

We enforce atomic updates to the shared vector $v$ to preserve the primal-dual relationship between $\alpha$ and $v$ and thus maintain the convergence guarantees of asynchronous SCD derived by Hsieh et al. [11]. The pthreads library does not provide atomic operations, but mutexes can be used to lock chosen variables. To avoid overhead, we use medium-grained locks, each guarding a chunk of 1024 vector elements.
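A sketch of this medium-grained locking (our own simplified version, using C++ std::mutex in place of pthread mutexes): one lock per 1024-element stripe of the shared vector, taken only for the stripes a given update actually touches:

```cpp
#include <algorithm>
#include <cstddef>
#include <deque>
#include <mutex>
#include <vector>

// Shared vector guarded by one mutex per 1024-element stripe: finer than a
// single global lock (less contention), coarser than per-element (less
// overhead). std::deque is used because std::mutex is not movable.
struct StripedVector {
    static constexpr std::size_t kStripe = 1024;
    std::vector<double> v;
    std::deque<std::mutex> locks;
    explicit StripedVector(std::size_t n)
        : v(n, 0.0), locks((n + kStripe - 1) / kStripe) {}

    // Atomically apply v[k] += delta * a[k - lo] for k in [lo, hi),
    // holding one lock at a time per stripe touched.
    void add_range(std::size_t lo, std::size_t hi, const double* a, double delta) {
        for (std::size_t s = lo / kStripe; s * kStripe < hi; ++s) {
            std::lock_guard<std::mutex> g(locks[s]);
            std::size_t b = std::max(lo, s * kStripe);
            std::size_t e = std::min(hi, (s + 1) * kStripe);
            for (std::size_t k = b; k < e; ++k) v[k] += delta * a[k - lo];
        }
    }
};
```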

IV-D Sparse representation

To efficiently support sparse datasets as well, we use a special data structure akin to the CSC (compressed sparse-column) format for the data of both tasks, while $v$ and the model vectors remain in dense format. The data of task A is represented as an array of structures containing pointers, one for each column. Each column contains only the nonzero elements, encoded as (index, value) pairs. Task B stores its own data columns in a similar way, with the columns additionally split into chunks of a fixed length, implemented as linked lists. This way, efficient movement of columns between A and B into preallocated space is possible, accommodating possibly large differences in column length. The minimal chunk size of 32 enables the use of multiple AVX-512 accumulators, but the optimal size depends on the density of the data. We use locking as described in the previous section. Since the locks are fixed to equal intervals of the dense vector $v$, the number of operations performed under a given lock depends on the density of the corresponding interval of the data; 1024 may then no longer be an efficient lock size, and the best choice varies per dataset. Initially, B allocates a number of empty chunks determined by the densest columns of the data, and places the chunks on a stack. When A copies data to B, the pointers to chunks are obtained from the stack and rearranged into linked lists long enough to accommodate the new set of columns processed by B. Next, the data is copied into the corresponding lists. At the same time, the pointers to the chunks linked by the lists corresponding to the columns which are swapped out of B are returned to the stack. With this representation, we observe the fastest runtime when one thread is used per vector: in most cases, the sparse vectors are shorter than 130,000 elements.
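A simplified sketch of this chunked column store (types and names are ours; only the chunk capacity of 32 matches the minimal chunk size in the text). Columns are linked lists of fixed-size chunks of (index, value) pairs drawn from a preallocated free stack, so columns of very different density can be swapped in and out without reallocation:

```cpp
#include <cstddef>
#include <vector>

struct Entry { int idx; float val; };      // one nonzero: (index, value)

struct Chunk {
    static constexpr int kCap = 32;        // minimal chunk size from the text
    Entry e[kCap];
    int used = 0;
    Chunk* next = nullptr;                 // columns are linked chunk lists
};

struct ChunkPool {
    std::vector<Chunk> storage;            // preallocated once
    std::vector<Chunk*> free_stack;
    explicit ChunkPool(std::size_t n) : storage(n) {
        for (auto& c : storage) free_stack.push_back(&c);
    }
    // Build a column list from nonzeros; chunks are popped off the stack.
    Chunk* acquire_column(const std::vector<Entry>& col) {
        Chunk* head = nullptr;
        Chunk** tail = &head;
        for (std::size_t i = 0; i < col.size(); i += Chunk::kCap) {
            Chunk* c = free_stack.back();
            free_stack.pop_back();
            c->used = 0;
            c->next = nullptr;
            for (std::size_t j = i; j < col.size() && j < i + Chunk::kCap; ++j)
                c->e[c->used++] = col[j];
            *tail = c;
            tail = &c->next;
        }
        return head;
    }
    // Return a swapped-out column's chunks to the stack for reuse.
    void release_column(Chunk* head) {
        while (head) { Chunk* n = head->next; free_stack.push_back(head); head = n; }
    }
};

int count_entries(const Chunk* c) {
    int n = 0;
    for (; c; c = c->next) n += c->used;
    return n;
}
```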

IV-E Quantized representation

Stochastic quantization to reduced bit width reduces data size while still maintaining performance for many iterative optimization and machine learning algorithms, e.g., [12]. To investigate the potential benefits, we extend HTHC with support for 4-bit quantization using an adaptation of the Clover library [13], which provides efficient quantized low-level routines, including the scalar product. We find that 4-bit precision is enough to represent the data matrix without significantly sacrificing model accuracy. For $\alpha$ and $v$, low precision results in excessive error accumulation; thus we leave those at 32-bit floating point. The overall benefit is reduced data movement, at the overhead of packing and unpacking 4-bit data for computation. We show runtime results in Section V.
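A minimal sketch of the idea (nearest rounding on a signed 4-bit grid with a per-vector scale; the actual Clover-based code additionally uses stochastic rounding and packs two 4-bit values per byte):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Quantize to the signed 4-bit grid {-7, ..., 7} with one scale per vector;
// the maximum quantization error per element is scale/2.
std::vector<int8_t> quantize4(const std::vector<float>& x, float& scale) {
    float amax = 0.0f;
    for (float v : x) amax = std::max(amax, std::fabs(v));
    scale = (amax > 0.0f) ? amax / 7.0f : 1.0f;
    std::vector<int8_t> q(x.size());
    for (std::size_t i = 0; i < x.size(); ++i)
        q[i] = static_cast<int8_t>(std::lround(x[i] / scale));
    return q;
}

float dequant4(int8_t q, float scale) { return q * scale; }
```

The per-vector scale is the piece that keeps the 4-bit data usable for training: the grid adapts to the magnitude of each column.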

IV-F Balancing compute resources

A major challenge posed by the implementation of HTHC is how to balance the computing resources across the different levels of parallelism discussed in Section IV-A. The configuration of HTHC is parameterized by the number of threads for task A, the number of parallel updates in task B, and the number of threads per update, and can be adjusted to the hardware and problem at hand. We identified two important factors that impact the optimal setting:

Balanced execution speed

If task B works significantly faster than task A, the latter refreshes only a few gap entries per epoch. As a consequence most coordinate importance values become stale, and convergence suffers. This effect has been empirically investigated in [4], which showed that satisfactory convergence requires about 15% or more of the gap memory being updated in each epoch. We will discuss this further in Section V. On the other hand, if task B is too slow, the runtime suffers. Hence, the efficiency of the implementation is a crucial factor that impacts the best configuration.

Cache coherence

The parallelization of the gap memory updates of task A across a large number of threads can lead to DRAM bandwidth saturation. Additionally, more threads mean higher traffic on the mesh, which can impact the execution speed of task B. For fast convergence, the threads must be assigned so that A performs a sufficient fraction of gap updates in each epoch. Our results will confirm a safe choice of this fraction.

Performance model

Let us consider dense data. Recall that we operate on the data matrix $A$, where each of the $n$ coordinates corresponds to a column of length $d$, and that task B processes a fixed number of coordinates per epoch. Let $t(T, d)$ denote the time of a single coordinate update on task B with $T$ threads and vector length $d$. This function is not trivial to derive, due to the relatively poor scalability of the operations used and the dependence on memory and synchronization speed. Thus, we precompute its values for different thread setups during installation and store them in a table. Using this table, we then choose thread counts for which the modeled execution times of the two tasks are balanced.

Fig. 2: Performance (in flops/cycle) of synthetic operations of task A. Different labels represent different thread counts.
Fig. 3: Performance (in flops/cycle) of operations of task B for different numbers of parallel updates. Different curves represent different numbers of threads per update.

V Experimental Results

We perform two sets of experiments. In the first set, we profile HTHC on dense synthetic data with the aim to understand and illustrate how the different implementation parameters impact its performance. In the second set, we benchmark HTHC on KNL on real-world data from small to very large. We compare against a number of more straightforward variants of implementing the same algorithm including standard C++ code with OpenMP directives, and against prior software where available.

All experiments are run on a KNL in flat mode as described in Section II-C. We compile our code with the Intel Compiler Collection and flags -std=c++11 -pthread -lmemkind -lnuma -O2 -xCOMMON-AVX512 -qopenmp. In all experiments, we use at most one thread per core and single precision.

V-A Algorithm profiling

To simulate the values of $t$ for different vector sizes (Section IV-F), we imitate the expensive operations of the two tasks on dense synthetic data. The code measures the overall time and performance for different vector sizes and thread numbers. The operations involve the data matrix and the shared vector $v$, with threading and synchronization implemented as described in Section IV. In the following we illustrate results for varying data size, with vector lengths ranging from 10,000 to 5,000,000.

To analyze the impact of the thread count on the performance of task A, we allocate both data structures to DRAM and measure performance for thread counts ranging from 1 to 72. The results are presented in Fig. 2. We observe that above 20 parallel updates, the performance does not increase significantly, and above 24 it even begins to decrease and fluctuate due to the saturation of DRAM bandwidth. For this reason, we use at most 24 threads for task A.

To analyze the impact of the number of parallel updates and the number of threads per update on the performance of task B, we allocate its data and $v$ to MCDRAM. Fig. 3 illustrates the impact of these parameters. The single noisy data points in plots drawn for larger numbers of threads are caused by background processes which stall the execution of the program on particular cores. We note that below a certain vector length it is best to use one thread per vector, independent of the number of parallel updates. For larger vectors, the best strategy is to use as many threads per vector as possible. We observe that for the vector lengths considered, higher performance is obtained with more parallel updates rather than more threads per vector.

Fig. 4 shows the speedup of isolated runs of task B with different numbers of parallel updates over a run with a single update. For each number of parallel updates, we plot results for the runs with the best corresponding number of threads per update. We observe that the algorithm used by B does not scale well. This is due to the many synchronization points during updates. Profiling with Intel VTune shows that while the bandwidth of the L2 caches is a bottleneck on each tile, the saturation of MCDRAM bandwidth remains low. For this reason, we benefit from the flat mode, since it keeps MCDRAM as a separate allocation space. The raw update speed of B, contrary to the convergence of the complete scheme, is not affected by too many parallel updates. In practice, the optimal number of parallel updates is rarely the maximum, as we will see in the following experiments.

Dataset Samples Features Representation Approx. Size
Epsilon [14] 400,000 2,000 Dense 3.2 GB
DvsC [15] 40,002 200,704 Dense 32.1 GB
News20 [16] 19,996 1,355,191 Sparse 0.07 GB
Criteo [17] 45,840,617 1,000,000 Sparse 14.4 GB
TABLE I: Data sets used in the experiments

V-B Performance evaluation

The second series of experiments compares the performance of HTHC to several reference schemes for the two selected linear models across data sets of different sizes. We consider Lasso and SVM on the two dense and two sparse data sets in Table I. Dogs vs. Cats (abbreviated as DvsC in our tables) features were extracted as in [15] and the number of samples was doubled. The same pre-processing was used in [4]. The regularization parameter was chosen to provide a support size of 12% for Lasso on Epsilon and Dogs vs. Cats, and using cross validation in all other cases.

V-B1 Comparison to our baselines

In the following, HTHC denotes our full scheme running the two tasks A and B. As detailed in Section IV-A1, HTHC allocates the data for A to DRAM and the data for B to MCDRAM. For each experiment, except those on the Criteo dataset, we used exhaustive search to find the best parameter settings, i.e., the percentage of data updated by B per epoch and the thread settings described in Section IV. The obtained parameters, presented in Tables II and III, roughly correspond to the analysis in Section IV-F. We compare HTHC against four reference implementations:

Data set | λ | % updated | HTHC: T_A / updates / threads per update / total | ST: updates / threads per update / total
Epsilon | 3e-4 | 8% | 12 / 8 / 6 / 60 | 8 / 9 / 72
DvsC | 2.5e-3 | 2% | 16 / 14 / 1 / 30 | 20 / 1 / 20
News20 | 1e-4 | 2% | 24 / 12 / 1 / 36 | 56 / 1 / 56
Criteo | 1e-6 | 0.1% | 8 / 64 / 1 / 72 | 72 / 1 / 72
TABLE II: Best parameters found for Lasso.
Data set | λ | % updated | HTHC: T_A / updates / threads per update / total | ST: updates / threads per update / total
Epsilon | 1e-4 | 4% | 16 / 2 / 1 / 18 | 2 / 1 / 2
DvsC | 1e-4 | 7% | 8 / 6 / 10 / 68 | 36 / 2 / 72
News20 | 1e-5 | 49% | 12 / 56 / 1 / 72 | 72 / 1 / 72
Criteo | 1e-6 | 1% | 4 / 68 / 1 / 72 | 72 / 1 / 72
TABLE III: Best parameters found for SVM.
  1. ST (single task): We consider a parallel, but homogeneous, single-task implementation, which allocates the data matrix to DRAM and the remaining data to MCDRAM. It performs randomized asynchronous SCD. We used the same low-level optimizations in ST as in task B of HTHC, but without duality-gap-based coordinate selection. Instead, in each epoch we update the model (allocated to MCDRAM) for all coordinates. Again, we run a search for the best parameters; these are shown in Tables II and III.

  2. ST (B): Like ST, but run with the best settings found for task B of HTHC.

  3. OMP: A straightforward implementation using standard looped C code with the OpenMP directives simd reduction and parallel for for parallelization, with the same thread counts. To synchronize the updates of $v$, we use the atomic directive.

  4. OMP WILD: Like OMP, but without the atomic directive.

We perform OMP runs only for the dense representations. For the large Criteo dataset, we consider only HTHC and ST due to the long time needed to run all experiments.

Fig. 4: Speedup of runs with different numbers of parallel updates over runs with a single update on task B.
Fig. 5: Convergence for Epsilon, Dogs vs. Cats, News20 and Criteo.

Fig. 5 shows the results for Lasso and SVM for each data set. Each plot shows the precision of the algorithm versus the running time. For Lasso, we additionally measure suboptimality; for both Lasso and SVM we show the duality gap. (To compute the duality gap for Lasso we use the Lipschitzing trick from [18].) Each algorithm is run until the duality gap reaches a parametrized threshold value or until timeout.

First, we discuss HTHC, ST, and ST (B). For all Lasso runs, we observe a speedup of HTHC varying from about 5x for Epsilon to about 9x for News20 compared to the best ST run, depending on the desired precision. As expected, ST (B) is never better than ST, since the latter uses the best parameters found during the search. The results for suboptimality are consistent with those for the duality gaps.

For the SVM runs, we achieve a 3.5x speedup for Dogs vs. Cats and competitive performance for Epsilon and News20.

For Criteo we observe that the ST implementation beats HTHC. This is mostly due to ST skipping the update when the computed delta is zero. This amounts to a selection of relevant data based on the result of the update computation, and avoids expensive locking at the same time: thus, in some cases, ST drops enough operations to beat the execution time and the overhead of HTHC.

Next we discuss the OpenMP runs. For OMP, as expected, the atomic operations severely impact performance, and thus OMP WILD is much faster than OMP. While OMP WILD is also faster than the standard HTHC implementations, it does not guarantee the primal-dual relationship between $\alpha$ and $v$ and thus does not converge to the exact minimizer; hence the plateau in the figures presenting suboptimality. The duality gap computation is based on $v$, and the computed gaps thus do not correspond to the true values: therefore, the gap of OMP WILD eventually becomes smaller than the suboptimality. The OMP run fails to converge on the Dogs vs. Cats dataset with the used parameters.

Data set | Accuracy | HTHC | ST | PASSCoDe-atomic | PASSCoDe-wild
Epsilon | 85% | 0.35 s | 1.11 s | 0.70 s | 0.64 s
DvsC | 95% | 0.51 s | 0.69 s | 2.69 s | 1.66 s
News20 | 99% | 0.14 s | 0.06 s | 0.02 s | 0.01 s
TABLE IV: Comparison of HTHC and ST against PASSCoDe (no support for Lasso) for SVM.

Data set | Squared error | HTHC | ST | Vowpal Wabbit
Epsilon | 0.47 | 0.56 s | 0.62 s | 12.19 s
DvsC | 0.15 | 5.91 s | 23.37 s | 47.29 s
News20 | 0.32 | 0.94 s | 0.76 s | 0.02 s
TABLE V: Comparison of HTHC and ST against Vowpal Wabbit for Lasso.

V-C Comparison against other parallel CD implementations

The work [4] implements a similar scheme for parallelizing SCD on a heterogeneous platform: an 8-core Intel Xeon E5 x86 CPU with an NVIDIA Quadro M4000 GPU accelerator (we note that this is a relatively old GPU generation; newer accelerators would give better results). It provides results for Dogs vs. Cats with the fraction of updates set to 25% (the largest size that fits into GPU RAM): the target suboptimality is reached in 40 seconds for Lasso and the target duality gap in about 100 seconds for SVM. With the same percentage of updates, HTHC needs 29 and 84 seconds, respectively. With our best setting (Fig. 5) this is reduced to 20 and 41 seconds, respectively. In summary, on this data, our solution on the standalone KNL is competitive with a state-of-the-art solution using a GPU accelerator with many more cores. We also show that its performance can be greatly improved with a proper number of updates on task B.

Additionally, we compare the SVM runs of HTHC and our parallel baseline (ST) against PASSCoDe [11], a state-of-the-art parallel CD algorithm which, however, does not support Lasso. We compare against the variant with an atomic lock on the model (PASSCoDe-atomic) and a lock-free implementation (PASSCoDe-wild), which is faster but does not maintain the relationship between the model parameters, as discussed in Section IV-C. The results are presented in Table IV. On Epsilon, the time to reach 85% accuracy with 2 threads (the same as for ST) is 8.6 s for PASSCoDe-atomic and 3.21 s for PASSCoDe-wild, but these times decrease to 0.70 s with 24 threads and 0.64 s with 12 threads, respectively. For Dogs vs. Cats, greatly increasing or decreasing the thread count compared to ST did not improve the result. For Dogs vs. Cats, we are 2.4–5x faster, depending on the versions we compare. For Epsilon, we are roughly 2x faster, but exploiting the HTHC design is required to prevent slowdown. On the other hand, PASSCoDe works about 7x faster on the News20 dataset. We attribute this to our locking scheme for the update of $v$, which is beneficial for dense data but can be wasteful for sparse representations. Disabling the locks brings the ST execution time down to 0.02 s.

We also compare the Lasso runs against Vowpal Wabbit (VW) [19], which is considered state-of-the-art software. Since this library does not implement coordinate descent, we opt for stochastic gradient descent. We run the computation on previously cached data. We find that too many nodes cause divergence for dense datasets and opt for 10 nodes as a safe value. For News20, we use a single node. We compare the average squared error of HTHC against the progressive validation error of VW. The results are presented in Table V. Again we observe that while we perform well on dense data, training on sparse data is slow. Also, our code and Vowpal Wabbit's SGD produce two different scores for News20.

Fig. 6: Parameter combinations providing fast convergence (within 110% of the best time found): (a) Lasso on Epsilon, (b) Lasso on Dogs vs. Cats, (c) SVM on Dogs vs. Cats.
Fig. 7: Sensitivity to the number of updates per epoch for (left) Lasso on Epsilon and (right) SVM on Dogs vs. Cats.

V-D Experiments on sensitivity

During the search, four parameters were considered. Our goal was not only to find the best parameters, but also to identify parameters giving a close-to-best solution. Fig. 6 presents parameter combinations whose overall convergence time was within 110% of that of the best solution found. The overall convergence time depends on the number of epochs, which varies from run to run for the same parameters; therefore, we consider all the presented parameter combinations capable of attaining the minimum runtime. The plots present four dimensions: the axes correspond to two of the parameters, while different markers correspond to the third. The labels next to the markers correspond to the fourth parameter; the color of each label matches its marker, and multiple values per label are possible. To save time during the search, we use a step of 8 on Dogs vs. Cats and a step of 4 on Epsilon for one parameter, and a step of 2 for another on both datasets. We also note that while Lasso on Epsilon converges fast for values greater than 8, the rate of diverging runs is too high to consider them for practical application.
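As a concrete illustration, the 110% criterion above amounts to a simple filter over measured runs. The function name and the parameter tuples below are hypothetical, not part of our implementation:

```python
def near_best(results, slack=1.10):
    """Keep parameter combinations whose measured convergence time
    is within `slack` (here 110%) of the fastest run observed."""
    best = min(results.values())
    return {params: t for params, t in results.items() if t <= slack * best}

# Hypothetical measured convergence times (seconds) per parameter tuple.
runs = {(8, 2): 10.0, (16, 2): 10.5, (16, 4): 12.0}
print(sorted(near_best(runs)))  # → [(8, 2), (16, 2)]
```

Because the epoch count varies between runs with identical parameters, every combination surviving this filter is treated as capable of attaining the minimum runtime.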

To examine how the number of updates per epoch affects convergence, we run tests in which a fixed number of updates is performed per epoch. We use the best parameters found for the different datasets and models. We present example results in Fig. 7. We observe that relatively few updates are needed for the best execution time: we observe it at 10% on both datasets. While these runs need more epochs to converge, the epochs execute fast enough to provide the best overall convergence speed.

Dataset | Model | 32-bit | 32/4-bit
--------|-------|--------|---------
Epsilon | Lasso | 1.6 s  | 1.0 s
Epsilon | SVM   | 5.5 s  | 5.8 s
DvsC    | Lasso | 55.5 s | 32.4 s
DvsC    | SVM   | 38.2 s | 51.6 s
TABLE VI: Comparison of 32-bit to mixed 32/4-bit arithmetic (time to reach the target gap).

V-E Evaluation of quantized representation

We run experiments on the dense datasets using the quantized 4-bit representation of the data matrix with the modified Clover library. Table VI compares the fastest runs using mixed 32/4-bit arithmetic with the fastest 32-bit runs. We observe that while the data size is reduced, the computation times do not deviate significantly from those obtained with the 32-bit representation.
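As a rough sketch of the idea behind the 4-bit representation (not Clover's actual bit layout, rounding scheme, or API), each column can be mapped to signed 4-bit integers with one floating-point scale per column:

```python
import numpy as np

def quantize4(col):
    """Map a float column to signed 4-bit integers in [-7, 7] with a
    per-column scale; an illustrative sketch, not Clover's scheme."""
    scale = float(np.max(np.abs(col))) / 7.0
    if scale == 0.0:
        return np.zeros(col.shape, dtype=np.int8), 0.0
    q = np.clip(np.rint(col / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize4(q, scale):
    """Recover an approximation of the original column."""
    return q.astype(np.float32) * scale

col = np.array([0.9, -0.5, 0.1, 0.7], dtype=np.float32)
q, s = quantize4(col)
# Round-to-nearest bounds the per-entry error by half the scale.
assert np.max(np.abs(dequantize4(q, s) - col)) <= s / 2 + 1e-6
```

Since the column maximum maps exactly to ±7, the reconstruction error per entry is at most half the scale, which is why accuracy degrades only mildly while the matrix shrinks by 8× relative to 32-bit floats.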

VI Related Work

Variants of stochastic coordinate descent [20] have become the state-of-the-art methods for training GLMs on parallel and distributed machine learning systems. Parallel coordinate descent (CD) has a long history, see e.g. [21]. Recent research has contributed asynchronous variants, such as AsySCD [22], the first asynchronous SCD algorithm, and the more practical PASSCoDe [11], which was the first to keep the shared vector in memory.

There are only a few works that have studied CD on non-uniform memory systems (e.g. memory and disk). The approach most related to ours is [23], where the authors proposed a strategy to keep informative samples in memory. However, [23] is specific to the SVM problem and unable to generalize to the broader class of GLMs. In [24] a more general setting was considered, but the proposed random (block) coordinate selection scheme is unable to benefit from non-uniformity in the training data. In a single-machine setting, various schemes for selecting the relevant coordinates for CD have been studied, including adaptive probabilities, e.g. [25], or fixed importance sampling [5]. The selection of relevant coordinates can be based on the steepest gradient, e.g. [26], the Lipschitz constant of the gradient [27], nearest neighbors [28], or duality-gap-based measures [4]. In this work, we build on the latter, but any adaptive selection scheme could be adopted.
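To make the selection idea concrete, the following sketch runs coordinate descent for Lasso (min ½‖Ax − b‖² + λ‖x‖₁) and, each epoch, updates only the k coordinates with the largest prospective move. This move-based score is a simple stand-in for the duality-gap measure of [4], and the code is an illustration, not our actual implementation; it assumes no all-zero columns in A:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*|.|, applied elementwise."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd_greedy(A, b, lam, k, epochs):
    """Coordinate descent for min 0.5*||Ax-b||^2 + lam*||x||_1,
    updating the k most promising coordinates per epoch."""
    n = A.shape[1]
    x = np.zeros(n)
    r = b - A @ x                     # residual, kept up to date
    sq = np.sum(A * A, axis=0)        # per-column squared norms
    for _ in range(epochs):
        # Closed-form coordinate minimizers if each were updated alone.
        prop = soft_threshold(x + (A.T @ r) / sq, lam / sq)
        # Select the k coordinates with the largest prospective move.
        idx = np.argpartition(np.abs(prop - x), -k)[-k:]
        for j in idx:
            xn = soft_threshold(x[j] + A[:, j] @ r / sq[j], lam / sq[j])
            r += A[:, j] * (x[j] - xn)
            x[j] = xn
    return x
```

On a matrix with orthogonal columns, one pass with k = n already yields the exact solution, since the coordinate subproblems decouple; on general data, concentrating updates on high-score coordinates is what lets a small update budget per epoch converge quickly.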

Manycore machines, including KNL, are widely used for deep learning, as standalone devices or within clusters, e.g. [29, 30]. SVM training on multicore and manycore architectures was proposed by You et al. [31]. The authors provide an evaluation for Knights Corner (KNC) and Ivy Bridge, showing them to be competitive with GPUs. The LIBSVM library [32] is implemented for both GPUs [33] and KNC [34]. All SVM implementations use the sequential minimal optimization algorithm [35]. The library and its implementations are better suited to kernel SVM than to the linear version. For training large-scale linear models, a multi-core extension of LIBLINEAR [36] was proposed by Chiang et al. [37]. This library is tailored mainly to sparse data formats used e.g. in text classification. While [37, 11] do not perform coordinate selection, they use techniques like shrinking, which benefit from the increasing sparsity of the output models. Rendle et al. [38] introduced coordinate descent for sparse data on distributed systems, achieving almost linear scalability; their approach can be applied to multi- and manycores. The existing open-source libraries support mainly sparse data and rarely implement CD models other than SVM or logistic regression.

VII Conclusions

We introduced HTHC for training generalized linear models on standalone manycores, including a complete, architecture-cognizant implementation. We support dense, sparse, and quantized 4-bit data representations. We demonstrated that HTHC provides a significant reduction of training time compared to a straightforward parallel implementation of coordinate descent. In our experiments, the speedup varies from 5× to more than 10× depending on the dataset and the stopping criterion. We also showed that our implementation for dense datasets is competitive with state-of-the-art libraries and a CPU-GPU code. An advantage of HTHC over CPU-GPU heterogeneous learning schemes is the ability to balance the distribution of machine resources such as memory and CPU cores between the different tasks, which is inherently impossible on heterogeneous platforms. To the best of our knowledge, this is the first scheme with major heterogeneous tasks running in parallel proposed in the field of manycore machine learning. The inherent adaptivity of HTHC should enable porting it to other existing and future standalone manycore platforms.


  • [1] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard et al., “TensorFlow: A system for large-scale machine learning.” in OSDI, vol. 16, 2016, pp. 265–283.
  • [2] S. Chetlur, C. Woolley, P. Vandermersch, J. Cohen, J. Tran, B. Catanzaro, and E. Shelhamer, “cuDNN: Efficient primitives for deep learning,” CoRR, vol. abs/1410.0759, 2014.
  • [3] J. M. Moura, M. Püschel, D. Padua, and J. Dongarra, “Special issue on program generation, optimization, and platform adaptation,” Proceedings of the IEEE, vol. 93, no. 2, pp. 211–215, 2005.
  • [4] C. Dünner, T. Parnell, and M. Jaggi, “Efficient use of limited-memory accelerators for linear learning on heterogeneous systems,” in Advances in Neural Information Processing Systems, 2017, pp. 4258–4267.
  • [5] P. Zhao and T. Zhang, “Stochastic optimization with importance sampling for regularized loss minimization,” in Proceedings of the 32nd International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, vol. 37, 2015, pp. 1–9.
  • [6] S. P. Boyd and L. Vandenberghe, Convex optimization.   Cambridge University Press, 2004.
  • [7] J. McCalpin, “Memory bandwidth and machine balance in current high performance computers,” IEEE Computer Society Technical Committee on Computer Architecture (TCCA) Newsletter, pp. 19–25, Dec. 1995.
  • [8] S. Shalev-Shwartz and T. Zhang, “Stochastic dual coordinate ascent methods for regularized loss minimization,” Journal of Machine Learning Research, vol. 14, no. Feb, pp. 567–599, 2013.
  • [9] S. J. Wright, “Coordinate descent algorithms,” Mathematical Programming, vol. 151, no. 1, pp. 3–34, Mar. 2015.
  • [10] F. Franchetti. (2005) Fast barrier for x86 platforms. [Online]. Available: www.spiral.net/software/barrier.html
  • [11] C.-J. Hsieh, H.-F. Yu, and I. Dhillon, “PASSCoDe: Parallel ASynchronous Stochastic dual Co-ordinate Descent,” in International Conference on Machine Learning, 2015, pp. 2370–2379.
  • [12] H. Zhang, J. Li, K. Kara, D. Alistarh, J. Liu, and C. Zhang, “ZipML: Training linear models with end-to-end low precision, and a little bit of deep learning,” in Proceedings of the 34th International Conference on Machine Learning - Volume 70.   JMLR.org, 2017, pp. 4035–4043.
  • [13] A. Stojanov, T. M. Smith, D. Alistarh, and M. Püschel, “Fast quantized arithmetic on x86: Trading compute for data movement,” in 2018 IEEE International Workshop on Signal Processing Systems (SiPS).   IEEE, 2018, pp. 349–354.
  • [14] Knowledge 4 All Foundation Ltd. (2008) Large scale learning challenge. [Online]. Available: www.k4all.org/project/large-scale-learning-challenge/
  • [15] C. Heinze, B. McWilliams, and N. Meinshausen, “Dual-loco: Distributing statistical estimation using random projections,” in Artificial Intelligence and Statistics, 2016, pp. 875–883.
  • [16] R.-E. Fan. (2018) LIBSVM data: Classification (binary class). [Online]. Available: www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html
  • [17] Criteo Labs. (2014) Kaggle display advertising challenge. [Online]. Available: www.labs.criteo.com/2014/02/kaggle-display-advertising-challenge-dataset/
  • [18] C. Dünner, S. Forte, M. Takáč, and M. Jaggi, “Primal-dual rates and certificates,” in Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ser. ICML’16, 2016, pp. 783–792.
  • [19] J. Langford, L. Li, and A. Strehl, “Vowpal Wabbit,” 2011. [Online]. Available: www.github.com/VowpalWabbit/vowpal_wabbit
  • [20] S. J. Wright, “Coordinate descent algorithms,” Mathematical Programming, vol. 151, no. 1, pp. 3–34, 2015.
  • [21] P. Richtárik and M. Takáč, “Parallel coordinate descent methods for big data optimization,” Mathematical Programming, vol. 156, no. 1-2, pp. 433–484, 2016.
  • [22] J. Liu, S. J. Wright, C. Ré, V. Bittorf, and S. Sridhar, “An asynchronous parallel stochastic coordinate descent algorithm,” J. Mach. Learn. Res., vol. 16, no. 1, pp. 285–322, Jan. 2015.
  • [23] K.-W. Chang and D. Roth, “Selective block minimization for faster convergence of limited memory large-scale linear models,” in Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2011, pp. 699–707.
  • [24] S. Matsushima, S. Vishwanathan, and A. J. Smola, “Linear support vector machines via dual cached loops,” in Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2012, pp. 177–185.
  • [25] D. Perekrestenko, V. Cevher, and M. Jaggi, “Faster coordinate descent via adaptive importance sampling,” Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, vol. 54, 2017.
  • [26] Y. You, X. R. Lian, J. Liu, H.-F. Yu, I. S. Dhillon, J. Demmel, and C.-J. Hsieh, “Asynchronous parallel greedy coordinate descent,” in Proceedings of the 30th International Conference on Neural Information Processing Systems, 2016, pp. 4689–4697.
  • [27] A. Zhang and Q. Gu, “Accelerated stochastic block coordinate descent with optimal sampling,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 2035–2044.
  • [28] I. E.-H. Yen, C.-F. Chang, T.-W. Lin, S.-W. Lin, and S.-D. Lin, “Indexed block coordinate descent for large-scale linear classification with limited memory,” in Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2013, pp. 248–256.
  • [29] N. Gawande, J. B. Landwehr, J. A. Daily, N. R. Tallent, A. Vishnu, and D. J. Kerbyson, “Scaling deep learning workloads: NVIDIA DGX-1/Pascal and Intel Knights Landing,” 2017 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), pp. 399–408, 2017.
  • [30] Y. You, Z. Zhang, C.-J. Hsieh, and J. Demmel, “100-epoch ImageNet training with AlexNet in 24 minutes,” CoRR, vol. abs/1709.05011, 2017.
  • [31] Y. You, S. L. Song, H. Fu, A. Marquez, M. M. Dehnavi, K. Barker, K. W. Cameron, A. P. Randles, and G. Yang, “MIC-SVM: Designing a highly efficient support vector machine for advanced modern multi-core and many-core architectures,” in 2014 IEEE 28th International Parallel and Distributed Processing Symposium, 2014, pp. 809–818.
  • [32] C.-C. Chang and C.-J. Lin, “LIBSVM: A library for support vector machines,” ACM Trans. Intell. Syst. Technol., vol. 2, no. 3, pp. 27:1–27:27, May 2011.
  • [33] A. Athanasopoulos, A. Dimou, V. Mezaris, and I. Kompatsiaris, “GPU acceleration for support vector machines,” in Procs. 12th Inter. Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS 2011), Delft, Netherlands, 2011.
  • [34] R. Massobrio, S. Nesmachnow, and B. Dorronsoro, “Support vector machine acceleration for Intel Xeon Phi manycore processors,” in Latin American High Performance Computing Conference.   Springer, 2017, pp. 277–290.
  • [35] J. Platt, “Sequential minimal optimization: A fast algorithm for training support vector machines,” Tech. Rep., April 1998.
  • [36] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin, “LIBLINEAR: A library for large linear classification,” J. Mach. Learn. Res., vol. 9, pp. 1871–1874, Jun. 2008.
  • [37] W.-L. Chiang, M.-C. Lee, and C.-J. Lin, “Parallel dual coordinate descent method for large-scale linear classification in multi-core environments,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1485–1494.
  • [38] S. Rendle, D. Fetterly, E. J. Shekita, and B.-y. Su, “Robust large-scale machine learning in the cloud,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1125–1134.