Julia Language in Machine Learning: Algorithms, Applications, and Open Issues

Abstract

Machine learning is driving development across many fields in science and engineering. A simple and efficient programming language could accelerate applications of machine learning in various fields. Currently, the programming languages most commonly used to develop machine learning algorithms include Python, MATLAB, and C/C++. However, none of these languages strikes a good balance between efficiency and simplicity. The Julia language is a fast, easy-to-use, and open-source programming language that was originally designed for high-performance computing and that balances efficiency and simplicity well. This paper summarizes the related research work and developments in the application of the Julia language in machine learning. It first surveys the popular machine learning algorithms that have been developed in the Julia language. It then investigates applications of the machine learning algorithms implemented with the Julia language. Finally, it discusses the open issues and potential future directions that arise in the use of the Julia language in machine learning.

keywords:
Julia language, Machine learning, Supervised learning, Unsupervised learning, Deep learning, Artificial neural networks

1 Introduction

Machine learning is currently one of the most rapidly growing technical fields, lying at the intersection of computer science and statistics and at the core of artificial intelligence and data science Jordan and Mitchell (2015); Deo (2015); Domingos (2012); Riley (2019). Machine-learning technology powers many aspects of modern society, from web searches to content filtering on social networks to recommendations on e-commerce websites. Recent advances in machine learning methods promise powerful new tools for practicing scientists, and this survey highlights some useful characteristics of modern machine learning methods and their relevance to scientific applications LeCun et al. (2015); Mjolsness and DeCoste (2001); see Figure 1.

Figure 1: Main applications of machine learning.

Python, MATLAB, and C/C++ are widely used programming languages in machine learning. Python has proven to be a very effective programming language and is used in many scientific computing applications Serrano et al. (2017). MATLAB combines the functions of numerical analysis, matrix computation, and scientific data visualization in an easy-to-use, window-based environment. Both Python and MATLAB are "plug-and-play" programming languages: their algorithms are prepackaged and mostly do not require learning processes, but they execute tasks slowly and place heavy demands on memory and computing power Voulgaris (July 30, 2016). In addition, MATLAB is commercial software.

C/C++ is one of the main programming languages in machine learning. It offers high efficiency and strong portability. However, developing and implementing machine learning algorithms in C/C++ is not easy because the language is difficult to learn and use. Moreover, the availability of large data sets in machine learning is increasing, and so is the demand for general large-scale parallel analysis tools Dinari et al. (2019). Therefore, it is necessary to choose a programming language that combines simplicity with good performance.

Julia is a simple, fast, and open-source language Bezanson et al. (2017). The efficiency of Julia is almost comparable to that of static programming languages such as C/C++ and Fortran Perkel (2019). Julia is rapidly becoming a highly competitive language in data science and general scientific computing. Julia is as easy to use as R, Python, and MATLAB.

Julia was originally designed for high-performance scientific computing and data analysis. It can call many mature, high-performance basic codes, such as libraries for linear algebra and fast Fourier transforms. Similarly, Julia can call C and Fortran functions directly, without wrappers or special application programming interfaces (APIs). In addition, Julia provides special designs for parallel computing and distributed computing. In high-dimensional computing, Julia has more advantages than C++ Dinari et al. (2019). Many third-party libraries have been developed for Julia, including several for machine learning.

In this paper, we systematically review and summarize the development of the Julia programming language in the field of machine learning by focusing on the following three aspects:

(1) Machine learning algorithms developed in the Julia language.

(2) Applications of the machine learning algorithms implemented with the Julia language.

(3) Open issues that arise in the use of the Julia language in machine learning.

The rest of the paper is organized as follows. Section 2 gives a brief introduction to the Julia language. Section 3 summarizes the machine learning algorithms developed in the Julia language. Section 4 introduces applications of the machine learning algorithms implemented with the Julia language. Section 5 presents open issues occurring in the use of the Julia language in machine learning. Finally, Section 6 concludes this survey.

2 A Brief Introduction to the Julia Language

Julia is a modern, expressive, and high-performance programming language for scientific computing and data processing. Its development started in 2009, and the current stable release as of September 2019 is v1.2.0. Although this low version number indicates that the language is still developing rapidly, it is stable enough for developing research code. Julia's grammar is as readable as that of MATLAB or Python, and its performance can approach that of C/C++ thanks to just-in-time compilation. In addition, Julia is a free, open-source language that runs on all popular operating systems.

Julia was originally developed by a team led by MIT computer scientist and mathematician Alan Edelman. It combines three features that are key for high-intensity computing tasks: it is fast, easy to learn and use, and open source. Among Julia's competitors, C/C++ and Fortran are very fast, and excellent open-source compilers are available for them, but they are difficult to learn, especially for beginners with no programming experience. Python and R are open-source languages that are easy to learn and use, but their performance in numerical computation can be disappointing. MATLAB is relatively fast (although slower than Julia) and easy to learn and use, but it is commercial.

With its low-level virtual machine (LLVM)-based just-in-time (JIT) compiler, Julia provides fast numerical computation Lattner and Adve (2004); Huo et al. (2020); see Figure 2. Julia also incorporated some important features from the beginning of its design, such as excellent support for parallelism Besard et al. (2018) and a practical functional programming orientation, which were not fully implemented in the scientific computing languages developed decades ago. Julia can also be embedded in other programming languages. These advantages have made Julia a universal language capable of handling tasks beyond scientific computing and data processing.

Figure 2: Julia benchmarks (the benchmark data shown above were computed with Julia v1.0.0, Go 1.9, JavaScript V8 6.2.414.54, MATLAB R2018a, Anaconda Python 3.6.3, and R 3.5.0. C and Fortran were compiled with gcc 7.3.1, taking the best timing from all optimization levels. C performance = 1.0; smaller is better 6.)
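As a small illustration of the parallelism support mentioned above, the following hedged sketch (our own toy function, not library code) distributes a loop across threads; in the Julia versions contemporary with this survey, the thread count is set via the JULIA_NUM_THREADS environment variable before starting Julia.

```julia
# A minimal sketch of Julia's built-in multithreaded parallelism.
using Base.Threads

function parallel_square!(ys::Vector{Float64}, xs::Vector{Float64})
    @threads for i in eachindex(xs)   # iterations are split across threads
        ys[i] = xs[i]^2               # each iteration writes a distinct slot
    end
    return ys
end

xs = rand(10^6)
ys = similar(xs)
parallel_square!(ys, xs)
println(ys[1] == xs[1]^2)   # true
```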

3 Julia in Machine Learning: Algorithms

3.1 Overview

This section describes machine-learning algorithm packages and toolkits written either in or for Julia. Most applications of machine learning algorithms in Julia can be divided into supervised learning and unsupervised learning algorithms. However, more complex algorithms, such as deep learning, artificial neural networks, and extreme learning machines, include both supervised learning and unsupervised learning, and these require separate classification; see Figure 3.

Figure 3: Main machine learning algorithms

Supervised learning learns from training samples with class labels and then predicts the classes of data outside the training set. Because all labels in supervised learning are known, the training samples have low ambiguity. Unsupervised learning learns from training samples without class labels to discover structural knowledge in the training set. Because all categories in unsupervised learning are unknown, the training samples are highly ambiguous.

3.2 Supervised Learning Algorithms Developed in Julia

Supervised learning infers a model from labeled training data. Supervised learning algorithms developed in Julia mainly include classification and regression algorithms; see Figure 4.

Figure 4: Main supervised learning algorithms developed in Julia

Bayesian Model

The naive Bayes formulation of the Bayesian model has two key ingredients: the assumption of independence between features and Bayes' theorem. The Bayesian model is mainly used for image recognition and classification.
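To make these two ingredients concrete, the following minimal Julia sketch computes a naive Bayes posterior for a toy two-class problem; the priors and likelihood tables are invented purely for illustration.

```julia
# Naive Bayes: P(c | x) ∝ P(c) * prod_i P(x_i | c), assuming the features
# x_i are conditionally independent given the class c.
priors = Dict(:spam => 0.4, :ham => 0.6)
likelihood = Dict(                       # made-up P(feature | class) tables
    :spam => Dict("offer" => 0.7, "meeting" => 0.1),
    :ham  => Dict("offer" => 0.2, "meeting" => 0.6),
)

function naive_bayes_posterior(features)
    scores = Dict(c => priors[c] * prod(likelihood[c][f] for f in features)
                  for c in keys(priors))
    z = sum(values(scores))              # normalizing constant (Bayes' theorem)
    return Dict(c => s / z for (c, s) in scores)
end

println(naive_bayes_posterior(["offer"]))   # posterior over {spam, ham}
```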

Several Bayesian model packages and algorithms have been developed in mature languages. Strickland et al. Strickland et al. (2014) developed Pyssm, a Python package for time series analysis using a linear Gaussian state-space model. Mertens et al. Mertens et al. (2018) developed Abrox, a user-friendly Python package for approximate Bayesian computation with a focus on model comparison. Other Python packages include BAMSE Toosi et al. (2019), BayesPy Luttinen (2016), and PyMC Patil et al. (2010). Moreover, Vanhatalo et al. Jarno et al. (2012) developed the MATLAB toolbox GPstuff for Bayesian modeling with Gaussian processes, and Zhang et al. Zhang et al. (2012) developed the MATLAB toolbox BSmac, which implements a Bayesian spatial model for brain activation and connectivity.

The Julia language has also been used to develop packages for the Bayesian model. For example, Cusumano and Mansinghka Cusumano-Towner and Mansinghka (2018) proposed a design for Gen, a probabilistic programming language embedded in Julia that aims to be sufficiently expressive and performant for general-purpose use. The language provides a structure for the optimization of the automatic generation of custom reasoning strategies for static analysis based on an objective probability model. The authors described Gen's language design informally and used an example Bayesian statistical model for robust regression to show that Gen is more expressive than Stan, a widely used language for hierarchical Bayesian modeling. Cox et al. Cox et al. (2019) explored a specific probabilistic programming paradigm, namely, message passing in Forney-style factor graphs (FFGs), in the context of the automated design of efficient Bayesian signal processing algorithms. To this end, they developed ForneyLab.jl as a Julia toolbox for message passing-based inference in FFGs.

Due to the increasing availability of large data sets, the need for a general-purpose massively parallel analysis tool is becoming ever greater. Bayesian nonparametric mixture models, exemplified by the Dirichlet process mixture model (DPMM), provide a principled Bayesian approach to adapt model complexity to the data. However, despite their potential, DPMMs have yet to become a popular tool. Dinari et al. Dinari et al. (2019) used Julia to implement efficient and easily modifiable distributed inference in DPMMs.

k-Nearest Neighbors (kNN)

The kNN algorithm has been widely used in data mining and machine learning due to its simple implementation and distinguished performance. Given a training data set with known class labels, the k instances closest to a new data point are found in the feature space of the training set; if most of these instances belong to one category, the new data point is assigned to that category.

At present, many packages implementing the kNN algorithm have been developed in the Python language. Among these, scikit-learn and Pypl are the most commonly used. It should be noted that scikit-learn and Pypl are not specially developed for the kNN algorithm; they also contain many other machine learning algorithms. In addition, Bergstra et al. Bergstra et al. (2015) developed Hyperopt, which defines a search space that encompasses many standard components and common patterns for composing them.

Julia has also been used to develop packages for the kNN algorithm. NearestNeighbors.jl 7 is a package written in Julia for performing high-performance nearest neighbor searches in arbitrarily high dimensions. The package supports both kNN searches and range searches.
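A minimal, hedged sketch of a kNN query with NearestNeighbors.jl follows (random data; columns of the matrix are treated as observations).

```julia
# Build a k-d tree over random 3-dimensional points and query it.
using NearestNeighbors

data = rand(3, 1000)              # 1000 points in 3-dimensional space
kdtree = KDTree(data)             # spatial index for fast neighbor searches
query = rand(3)
idxs, dists = knn(kdtree, query, 5, true)   # 5 nearest neighbors, sorted
println(idxs)                     # indices of the neighbors in `data`
```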

Decision Tree, Regression Tree, and Random Forest

Mathematically, a decision tree is a graph that evaluates a limited number of probabilities to determine a reliable classification for each data point. A regression tree is the counterpart of a decision tree for regression problems: instead of predicting labels, it predicts a continuously varying value. A random forest is a set of decision trees or regression trees that work together Breiman (2001). The set of trees is constructed by bootstrapping the data sets and averaging the predictions or taking the modal prediction from the trees (a procedure called "bagging"). Subsampling of features is used to reduce generalization error Ho (Conference Proceedings). An ancillary result of the bootstrapping procedure is that the data not sampled in each bootstrap (the "out-of-bag" data) can be used to estimate the generalization error as an alternative to cross-validation Zhou and Gallins (2019).

Many packages have been developed for decision trees, regression trees, and random forests. For example, all three algorithms are implemented in Spark2 ML and scikit-learn using Python. In addition, Upadhyay et al. Upadhyay et al. (2016) proposed land-use and land-cover classification techniques based on decision trees and k-nearest neighbors, implemented with the scikit-learn data mining package for Python. Keck Keck (2016) proposed FastBDT, a speed-optimized and cache-friendly implementation of multivariate classification that provides interfaces to C/C++, Python, and TMVA. Yang et al. Yang et al. (2018) used the ODPS (open data processing service) and Python to implement the gradient-boosting decision tree (GBDT) model.

DecisionTree.jl Seiferling et al. (2017), written in the Julia language, is a powerful package that implements decision tree, regression tree, and random forest algorithms very well. A small set of functions for building and applying models suffices to realize all three algorithms.
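The following hedged sketch shows DecisionTree.jl's tree- and forest-building functions on toy random data.

```julia
# Train a decision tree and a random forest on the same labeled data.
using DecisionTree

features = rand(100, 4)                         # 100 samples, 4 features
labels = rand(["a", "b"], 100)                  # random class labels

tree = build_tree(labels, features)             # single decision tree
forest = build_forest(labels, features, 2, 10)  # 2 features per split, 10 trees

println(apply_tree(tree, rand(4)))              # predict one new sample
println(apply_forest(forest, rand(4)))
```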

Support Vector Machine

A support vector machine (SVM) seeks a hyperplane in a high-dimensional space that maximizes the margin between the two classes of training data points (the support vectors), or maximizes a related objective when the classes cannot be separated. So-called kernel similarity functions are used to design nonlinear support vector machines Vapnik (2013).

Currently, there are textbook-style implementations of two popular linear SVM algorithms: Pegasos Shalev-Shwartz et al. (2011) and dual coordinate descent. LIBSVM, developed at National Taiwan University, is the most widely used SVM tool Chang and Lin (2011). LIBSVM includes the standard SVM algorithm, probability outputs, support vector regression, multi-class SVM, and other functions. Its source code was originally written in C, and it has invocation interfaces for Java, Python, R, MATLAB, and other languages.

SVM.jl Kebria et al. (2020), MLJ.jl Parmar et al. (2019), and LIBSVM.jl 8 are Julia implementations of the SVM algorithm, with LIBSVM.jl being more comprehensive than SVM.jl. LIBSVM.jl supports all LIBSVM models, namely classification (C-SVC, nu-SVC), regression (epsilon-SVR, nu-SVR), and distribution estimation (one-class SVM), and it exposes a ScikitLearn.jl Gwak et al. (2019) API. In addition, the model object is represented by a native Julia type, so model features can be accessed easily and models can be saved as JLD files.
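A minimal, hedged classification sketch with LIBSVM.jl follows (toy data; LIBSVM expects observations as columns of the feature matrix).

```julia
# Train a C-SVC (the LIBSVM default, with an RBF kernel) and predict.
using LIBSVM

X = rand(4, 100)                      # 4 features, 100 training samples
y = rand(["pos", "neg"], 100)         # random class labels

model = svmtrain(X, y)
ŷ, decision_values = svmpredict(model, rand(4, 10))
println(ŷ)                            # predicted labels for 10 new samples
```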

Regression Analysis

Regression analysis is an important supervised learning algorithm in machine learning. It is a predictive modeling technique that estimates unknown data by computing an optimal fit from samples and weights. Regression analysis is widely used in fields such as stock market prediction and medical data analysis.

Python has been widely used to develop a variety of third-party packages for regression analysis, including scikit-learn and Orange. The scikit-learn package is a powerful Python module that supports mainstream machine learning algorithms such as regression, clustering, classification, and neural networks Abraham et al. (2014); Jovic et al. (2014); Pedregosa et al. (2011). The Orange package is component-based data mining software that can be used as a module of the Python programming language and is especially suitable for classification, clustering, regression, and related tasks Demsar et al. (2013, 2004). MATLAB also supports regression algorithms: regression can be performed conveniently via commands such as regress and stepwise in MATLAB's statistics toolbox.

The Julia language has also been used to develop packages for regression analysis, such as Regression.jl Shan et al. (2019). The Regression.jl package seeks to minimize empirical risk, builds on EmpiricalRisk.jl Arnold et al. (2019), and provides a set of algorithms for performing regression analysis. It supports multiple linear regression, nonlinear regression, and other regression algorithms. In addition, the Regression.jl package provides a variety of solvers, such as the analytical solution (for linear and ridge regression) and gradient descent.
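As a concrete illustration of the analytical solution mentioned above, the following sketch performs ordinary least squares in base Julia rather than through the Regression.jl API.

```julia
# Ordinary least squares: recover known coefficients from noisy data.
n = 200
X = [ones(n) rand(n, 2)]            # design matrix with an intercept column
β_true = [1.0, 2.0, -0.5]
y = X * β_true + 0.1 * randn(n)     # noisy observations

β̂ = X \ y                           # analytical least-squares estimate
println(β̂)                          # close to β_true
```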

3.3 Unsupervised Learning Algorithms Developed in Julia

Unsupervised learning is a type of self-organized learning that can help find previously unknown patterns in a dataset without the need for pre-existing labels. Two of the main methods used in unsupervised learning are dimensionality reduction and cluster analysis; see Figure 5.

Figure 5: Main unsupervised learning algorithms developed in Julia

Gaussian Mixture Models

Gaussian mixture models can be viewed as a form of generalized radial basis function networks. Component functions are combined to provide a multimodal density, which can be employed to model the colors of an object to perform tasks such as real-time color-based tracking and segmentation Raja et al. (1998). These tasks may be made more robust by generating a mixture model corresponding to background colors in addition to a foreground model. Mixture models are a semiparametric alternative to nonparametric histograms Bishop (1995) (which can also be used as densities) and provide greater flexibility and precision in modeling the underlying statistics of sample data.

At present, there are many libraries that can implement Gaussian mixture models; these include packages developed with Python, such as PyBGMM and numpy-ml, and packages developed with C++, such as Armadillo. There are also some Gaussian mixture model packages for specialized fields. Bruneau et al. Bruneau et al. (2017) proposed a new Python package for nucleotide sequence clustering, which implements a Gaussian mixture model for DNA clustering. Holoien et al. Holoien et al. (2017) developed a new open-source tool, EmpiriciSN, written in Python, for performing extreme deconvolution Gaussian mixture modeling (XDGMM).

To the best of the authors' knowledge, there is no mature Julia package dedicated to the Gaussian mixture model. GMM.jl and GmmFlow.jl can realize Gaussian mixture models, but they are inconvenient to use. However, ScikitLearn.jl implements the popular scikit-learn interface and algorithms in Julia and can access approximately 150 Julia and Python models, including the Gaussian mixture model. Moreover, Srajer et al. Srajer et al. (2018) used AD tools in a Gaussian mixture model fitting algorithm.
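A hedged sketch of fitting a Gaussian mixture through ScikitLearn.jl's bridge to scikit-learn follows (it requires the underlying Python scikit-learn library; the data are synthetic).

```julia
# Fit a 2-component Gaussian mixture to two well-separated blobs.
using ScikitLearn
@sk_import mixture: GaussianMixture

X = vcat(randn(100, 2), randn(100, 2) .+ 5.0)   # rows are observations
gmm = GaussianMixture(n_components=2)
fit!(gmm, X)
println(predict(gmm, X)[1:5])                    # component assignments
```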

k-means

The k-means clustering algorithm is an iterative clustering algorithm. It first randomly selects k objects as the initial cluster centers, then calculates the distance between each object and each seed cluster center, and assigns each object to the nearest center. Each cluster center and the objects assigned to it represent one cluster. As an unsupervised clustering algorithm, k-means is widely used because of its simplicity and effectiveness.

The k-means algorithm is a classic clustering method, and many programming languages have packages implementing it. The third-party package scikit-learn in Python implements the k-means algorithm Pedregosa et al. (2011); Douzas et al. (2018). The kmeans function in MATLAB can also perform k-means clustering Yu et al. (2012). In addition, many researchers have implemented the k-means algorithm in the C/C++ programming language.

Julia has also been used to develop a dedicated clustering package, Clustering.jl Zhang et al. (2019). Clustering.jl provides many functions for data clustering and for evaluating clustering quality. Because its functions are comprehensive and powerful, this package is a good choice for k-means.
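A minimal k-means example with Clustering.jl follows (random data; columns are observations).

```julia
# Partition 1000 random points into 3 clusters.
using Clustering

X = rand(5, 1000)                 # 1000 points in 5 dimensions
result = kmeans(X, 3; maxiter=200)

println(size(result.centers))     # (5, 3): one center per cluster
println(counts(result))           # cluster sizes
println(assignments(result)[1:10])# cluster index of the first 10 points
```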

Hierarchical Clustering

Hierarchical clustering is a clustering algorithm that forms clusters by calculating the similarity between data points of different categories Corpet (1988); Johnson (1967); Karypis et al. (1999). The strategy of agglomerative hierarchical clustering is to first treat each object as its own cluster and then merge these clusters into larger and larger clusters until all objects are in one cluster or some termination condition is satisfied.

The Python packages commonly used for hierarchical clustering are scikit-learn and SciPy. Hierarchical clustering in the scikit-learn package is implemented in the sklearn.cluster module, with three important parameters: the number of clusters, the linkage method, and the linkage metric Pedregosa et al. (2011). SciPy implements hierarchical clustering in the scipy.cluster module Jaeger et al. (2014). In addition, programming languages such as MATLAB and C/C++ can also perform hierarchical clustering Muellner (2013).

The package QuickShiftClustering.jl Kabzan et al. (2019), written in Julia, realizes hierarchical clustering algorithms and is very simple to use. It provides three functions: one for clustering matrix data, one for obtaining cluster labels, and one for creating hierarchical links to achieve hierarchical clustering Datta (2010).
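Hierarchical clustering is also available in Clustering.jl; the following hedged sketch uses its hclust and cutree functions (rather than the QuickShiftClustering.jl API) on a pairwise distance matrix computed with Distances.jl.

```julia
# Agglomerative clustering of 50 random points, cut into 4 clusters.
using Clustering, Distances

X = rand(3, 50)                            # 50 points in 3 dimensions
D = pairwise(Euclidean(), X, dims=2)       # 50×50 distance matrix

tree = hclust(D, linkage=:average)         # build the dendrogram
labels = cutree(tree, k=4)                 # cut it into 4 clusters
println(labels)
```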

Bi-Clustering

The bi-clustering algorithm builds on traditional clustering. Its basic idea is to cluster both the rows and the columns of a matrix with traditional clustering and then merge the results. Bi-clustering solves a bottleneck of traditional clustering in high-dimensional data: traditional clustering can only search for global information, not local information. To better search for local information in the data matrix, researchers put forward the concept of bi-clustering.

The scikit-learn package implements bi-clustering in the sklearn.cluster.bicluster module. At present, bi-clustering is mainly applied to high-throughput detection technologies such as gene chips and DNA microarrays.

The Julia language has also been used to develop packages implementing bi-clustering. For example, Kpax3 is a Bayesian method for multi-cluster multi-sequence alignment. Bezanson et al. Voulgaris (July 30, 2016) used a Bayesian bi-clustering model that extended and improved the model originally introduced by Pessia et al. Upadhyay et al. (2016). They wrote the Kpax3.jl package in Julia; its output consists of multiple text files containing the clusters of the rows and the columns of the input dataset.

Principal Component Analysis (PCA)

PCA is a statistical analysis method that simplifies a data set. It uses an orthogonal transformation to linearly transform observations of a series of possibly correlated variables and project them onto a series of linearly uncorrelated variables, which are called principal components. PCA is often used to reduce the dimensionality of a data set while retaining the features that contribute the most variance.

Python is the language most frequently used to develop PCA algorithms. The scikit-learn package provides the class sklearn.decomposition.PCA 9 to implement PCA algorithms in the sklearn.decomposition module. Generally, the PCA class requires little parameter tuning beyond specifying the target dimension or the variance retained by the principal components after dimensionality reduction. In addition, many researchers have developed related application packages in the C++ programming language, including the ALGLIB 10 package and the cv::PCA class 11 in OpenCV.

To the best of the authors' knowledge, there is no mature Julia package dedicated to PCA. However, MultivariateStats.jl 12 is a Julia package for multivariate statistics and data analysis. This package defines a PCA type to represent a PCA model and provides a set of methods to access its properties.
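A hedged PCA sketch with MultivariateStats.jl follows (synthetic data; observations are stored as columns).

```julia
# Project 10-dimensional observations onto at most 3 principal components.
using MultivariateStats

X = randn(10, 500)                          # 500 observations, 10 variables
M = fit(PCA, X; maxoutdim=3)                # fit the PCA model

Y = MultivariateStats.transform(M, X)       # project onto the components
println(size(Y))                            # (3, 500) if 3 components kept
println(principalratio(M))                  # fraction of variance retained
```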

Independent Component Analysis (ICA)

ICA is a new signal processing technology developed in recent years. The ICA method is based on mutual statistical independence between sources. Compared with the traditional filtering method and the accumulative averaging method, ICA does almost no damage to the details of other signals while eliminating noise, and the denoising performance is often much better than the traditional filtering method. Moreover, in contrast to traditional signal separation methods based on feature analysis, such as singular value decomposition (SVD) and PCA, ICA is an analysis method based on higher-order statistical characteristics. In many applications, the analysis of higher-order statistical characteristics is more practical.

Python is the language most frequently used to develop ICA algorithms. The scikit-learn package provides the class FastICA 13 to implement ICA algorithms in the sklearn.decomposition module. In addition, Brian Moore 14 developed a PCA and ICA package in the MATLAB programming language; the PCA and ICA algorithms are implemented as functions in this package, and it includes multiple examples demonstrating their usage.

To the best of the authors' knowledge, there is no mature ICA package developed in the Julia language. However, MultivariateStats.jl 12, the Julia package for multivariate statistics and data analysis mentioned above, defines an ICA type representing the ICA model and provides a set of methods to access its attributes.
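A hedged ICA sketch with MultivariateStats.jl follows, unmixing two synthetic sources from their linear mixtures.

```julia
# Recover 2 independent components from 2 observed mixtures.
using MultivariateStats

S = vcat(sin.(0.1 .* (1:1000))',            # smooth source signal
         sign.(randn(1, 1000)))             # spiky source signal
A = [0.6 0.4; 0.3 0.7]                      # mixing matrix
X = A * S                                   # observed mixtures (2×1000)

M = fit(ICA, X, 2; maxiter=200)             # estimate 2 components
Z = MultivariateStats.transform(M, X)
println(size(Z))                            # (2, 1000) recovered components
```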

3.4 Other Main Algorithms

In addition to supervised and unsupervised learning algorithms, machine learning includes a class of algorithms that are more complex and cannot be assigned to a single category. For example, artificial neural networks can implement supervised learning, unsupervised learning, reinforcement learning, and self-learning. Deep learning algorithms are based on artificial neural networks and can perform supervised, unsupervised, and semisupervised learning. Extreme learning machines were proposed as supervised learning algorithms but were later extended to unsupervised learning.

Deep Learning

Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction LeCun et al. (2015). Several deep learning frameworks, such as the depth neural network, the convolutional neural network, the depth confidence network and the recursive neural network, have been applied to computer vision, speech recognition, natural language processing, image recognition, and bioinformatics and have achieved excellent results.

Deep learning algorithms have developed rapidly in the years since their introduction, and many researchers have improved and extended them. Python is the language most frequently used to develop deep learning algorithms. For example, PyTorch Matthias and Jan Eric (2019); Jonathan et al. (2019) and ALiPy Ying-Peng et al. (2019) are Python packages with many deep learning algorithms. Moreover, Tang et al. developed GCNv2 Jiexiong et al. (2019) using C++ and Python, Huang et al. wrote Mask Scoring R-CNN Zhaojin et al. (2019) using Python, Hanson and Frazier-Logue compared the dropout Noah and Stephen José (2018) algorithm with the SDR Stephen and Hanson (1990) algorithm, and Luo et al. Liangchen et al. (2019) proposed AdaBound, a new adaptive optimization algorithm written in Python.

Julia has also been used to develop various deep learning algorithms. For example, algorithmic differentiation (AD) allows the exact computation of derivatives given only an implementation of an objective function, and Srajer et al. Srajer et al. (2018) wrote an AD tool and used it in a hand-tracking algorithm.

Augmentor is a software package available in both Python and Julia that provides a high-level API for the expansion of image data using a stochastic, pipeline-based approach that effectively allows images to be sampled from a distribution of augmented images at runtime Marcus D. et al. (2017). To demonstrate the API and to highlight the effectiveness of augmentation on a well-known dataset, a short experiment was performed. In the experiment, the package is used on a CNN Krizhevsky et al. (2017).

MXNet.jl Yuren et al. (2018), Knet.jl 15, Flux.jl 16, and TensorFlow.jl 17 are deep learning frameworks that offer both efficiency and flexibility. At its core, MXNet.jl contains a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly. MXNet.jl is portable and lightweight, scaling effectively to multiple GPUs and multiple machines.
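As an illustration, the following hedged sketch builds and trains a small classifier with Flux.jl, using the API of the Flux versions contemporary with this survey (toy data, not a real task).

```julia
# A tiny feedforward classifier trained for one epoch on random data.
using Flux

model = Chain(Dense(4, 16, relu), Dense(16, 3), softmax)

X = rand(Float32, 4, 100)                      # 100 samples, 4 features
Y = Flux.onehotbatch(rand(1:3, 100), 1:3)      # one-hot class targets

loss(x, y) = Flux.crossentropy(model(x), y)
opt = ADAM(0.01)
Flux.train!(loss, Flux.params(model), [(X, Y)], opt)   # one training step
println(loss(X, Y))
```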

Artificial Neural Networks

A neural network is a feedforward network consisting of nodes ("neurons"), each edge of which carries a weight. These weights allow the network to form a mapping between the input and output Ditzler et al. (2015). Each neuron that receives input from preceding neurons is characterized by an activation, a threshold, an activation function that computes the new activation, and an output function that produces the output from the activation.

At present, neural network frameworks are usually developed in C++ or Python. DLL is a machine learning framework written in C++ Wicht et al. (2018). It supports a variety of neural network layers and standard backpropagation algorithms. It can train artificial neural networks and CNNs and supports basic learning options such as momentum and weight decay. scikit-learn, a machine learning library based on Python, also supports neural network models Pedregosa et al. (2011).

In the Julia language, DiffEqFlux.jl Chris et al. (2019) is a package that integrates neural networks and differential equations. Rackauckas et al. describe differential equations from the perspective of data science and discuss the complementarity between machine learning models and differential equations, demonstrating the ability to plug differential equations defined with DifferentialEquations.jl 18 into neural networks defined with Flux. BackpropNeuralNet.jl Srajer et al. (2018) is an easy-to-use neural network package: train(network, input, output) trains the network, and net_eval(network, input) evaluates an input.
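The following hedged sketch of the BackpropNeuralNet.jl workflow uses the two functions named above; the constructor name init_network (taking a vector of layer sizes) is an assumption about the package's API and should be verified against its documentation.

```julia
# Train a tiny 2-3-1 network on a single input/target pair.
using BackpropNeuralNet

net = init_network([2, 3, 1])        # assumed constructor: layer sizes

for _ in 1:1000                      # repeated backpropagation updates
    train(net, [0.0, 1.0], [1.0])
end
println(net_eval(net, [0.0, 1.0]))   # output should approach the target 1.0
```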

Extreme Learning Machine

The Extreme Learning Machine (ELM) Huang et al. (2004) is a variant of Single Hidden Layer Feedforward Networks (SLFNs). Because its hidden-layer weights are assigned randomly and are not adjusted iteratively, training reduces to solving for the output weights, which greatly improves the speed of the neural network.

The basic ELM algorithm, as well as multi-layer Kasun et al. (2013) and hierarchical Tang et al. (2015) ELM, has been implemented in HP-ELM, with C/C++, MATLAB, Python, and Java versions provided. HP-ELM includes GPU acceleration and memory optimization, making it suitable for large data processing. It supports LOO (leave-one-out) and k-fold cross-validation for dynamically selecting the number of hidden-layer nodes. The available feature maps include the linear function, the sigmoid function, the hyperbolic sine function, and three radial basis functions.

In ELM, the parameters of the hidden nodes or neurons are independent of the training data and of each other. Standard feedforward neural networks with such hidden nodes have universal approximation and separation capabilities, and these hidden nodes and their associated mappings are termed ELM random nodes, ELM random neurons, or ELM random features. Unlike traditional learning methods, which must see the training data before generating hidden node or neuron parameters, ELM can generate the hidden node or neuron parameters randomly before seeing the training data. Elm.jl Ouyang et al. (2020) is an easy-to-use extreme learning machine package.
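The following from-scratch sketch (our own code, not the Elm.jl API) illustrates the training scheme just described: hidden parameters are drawn randomly and fixed, and only the output weights are fitted, by a linear least-squares solve.

```julia
# A minimal ELM for regression: random hidden layer, solved output weights.
sigmoid(x) = 1 / (1 + exp(-x))

function elm_train(X, y, n_hidden)
    W = randn(n_hidden, size(X, 1))   # random input weights, never updated
    b = randn(n_hidden)               # random biases, never updated
    H = sigmoid.(W * X .+ b)          # random-feature map of the inputs
    β = H' \ y                        # output weights via least squares
    return W, b, β
end

elm_predict(W, b, β, X) = (sigmoid.(W * X .+ b))' * β

X = rand(3, 200)
y = vec(sum(X; dims=1))               # toy regression target
W, b, β = elm_train(X, y, 50)
println(maximum(abs.(elm_predict(W, b, β, X) .- y)))   # small residual
```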

3.5 List of Commonly Used Julia Packages

We summarize the commonly used Julia language packages and the machine learning algorithms that these packages primarily support; see the investigation in Table 1.

4 Julia in Machine Learning: Applications

4.1 Overview

Machine learning is one of the fastest-growing technical fields today, lying at the intersection of statistics and computer science Jordan and Mitchell (2015); Deo (2015); Domingos (2012). Machine learning studies how computers can simulate or implement human learning behaviors: by acquiring new knowledge and skills, the existing knowledge structure is reorganized to improve performance.

Having risen alongside machine learning, the Julia programming language now has corresponding algorithm packages for most machine learning applications. In the following, we summarize studies of Julia's application in machine learning. As shown in Figure 6, the current applications of the Julia programming language in machine learning mainly focus on the Internet of Things (IoT), computer vision, autonomous driving, pattern recognition, and related areas.

Figure 6: Major applications of machine learning using Julia language

4.2 Analysis of IoT Data

The IoT, also called the Internet of Everything or the Industrial Internet, is a new technology paradigm envisioned as a global network of machines and devices capable of interacting with each other Lee and Lee (2015). The application of the IoT in industry, agriculture, the environment, transportation, logistics, security, and other infrastructure fields effectively promotes the intelligent development of these areas and more rationally uses and allocates limited resources, thus improving the efficiency of these fields Gubbi et al. (2013); Atzori et al. (2010); Mei et al. (2019). Machine learning has brought enormous development opportunities for the IoT and has a significant impact on existing industries Mohammadi et al. (2018); Mahdavinejad et al. (2018).

Invenia Technical Computing used the Julia language to expand its energy intelligence system 19. The company optimizes the entire North American grid, using the energy intelligence system (EIS) and various signals to directly improve the day-ahead planning process, drawing on the latest research in machine learning, complex systems, risk analysis, and energy systems. In addition, Julia provided Invenia Technical Computing with much-needed versatility in terms of programming style, parallelism, and language interoperability 19.

Fugro Roames engineers used the Julia language to implement machine learning algorithms that identify network faults and potential faults, achieving a 100-fold increase in speed. Protecting the grid means ensuring that all power lines, poles, and wires are in good repair, which used to be a laborious manual task requiring thousands of hours of traveling along the power lines. Fugro Roames engineers developed a more effective way to identify threats to wires, poles, and conductors: using a combination of LiDAR and high-resolution aerial photography, they created a detailed three-dimensional map of the physical condition of the grid and possible intrusions, and they then used machine learning algorithms to identify points on the network that have failed or are at risk of failure 20.

4.3 Computer Vision

Computer vision is a simulation of biological vision using computers and related equipment. Its main task is to obtain three-dimensional information about a scene by processing collected pictures or videos. Computer vision includes image processing and pattern recognition, as well as geometric modeling and recognition processes; the realization of image understanding is its ultimate goal. As machine learning develops, computer vision research has gradually shifted from traditional models to deep learning models represented by convolutional neural networks (CNNs) and deep Boltzmann machines.

At present, computer vision technology is applied in the fields of biological and medical image analysis Grys et al. (2017), urban streetscapes Seiferling et al. (2017); Naik et al. (2017), rock-type identification Patel and Chatterjee (2016), automated pavement distress detection and classification Gopalakrishnan et al. (2017), structural damage detection in buildings Cha et al. (2017), and other fields. The development language used in current research is usually Python or another mature language. However, in the face of large-scale data, the Julia language has inherent advantages in high-performance processing. Therefore, many scholars and engineers use Julia to develop packages for the realization of computer vision functions. The Metalhead.jl 1 package provides computer vision models that run on top of the Flux machine learning library. The package ImageProjectiveGeometry.jl 2 is intended as a starting point for the development of a library of projective geometry functions for computer vision in Julia. Currently, the package consists of a number of components that could ultimately be separated into individual packages or added to other existing packages.

4.4 Natural Language Processing (NLP)

NLP employs computational techniques for the purpose of learning, understanding, and producing human language content Hirschberg and Manning (2015). It is an important research direction in the field of computer science and artificial intelligence. Modern NLP algorithms are based on machine learning algorithms, especially statistical machine learning algorithms. Many different machine learning algorithms have been applied to NLP tasks, the most representative of which are deep learning algorithms exemplified by CNN Poria et al. (2016); Young et al. (2018); Gimenez et al. (2020); Liu et al. (2017).

At present, the main research task of NLP is to investigate the characteristics of human language and establish the cognitive mechanism of understanding and generating language. In addition, new practical applications for processing human language through computer intelligence have been developed. Many researchers and engineers have developed practical application tools or software packages using the Julia language. LightNLP.jl 3 is a lightweight NLP toolkit for the Julia language. However, there are currently no stable library packages for NLP developed in the Julia language.

4.5 Autonomous Driving

Machine learning is widely used in autonomous driving, mainly for the environmental perception and behavioral decision-making of autonomous vehicles. The application of machine learning to environmental perception belongs to supervised learning: performing object recognition on images of a vehicle's surroundings requires a large number of images with labeled objects as training data, after which deep learning methods can identify objects in new images Kebria et al. (2020); Parmar et al. (2019); Ouyang et al. (2020); Liu et al. (2019). The application of machine learning to behavioral decision-making generally involves reinforcement learning. Autonomous vehicles must interact with the environment, and reinforcement learning learns the mapping between the environment and the behaviors that interact with it from a large amount of sample data, so that whenever an autonomous vehicle perceives the environment, it can act intelligently Cuenca et al. (2019); Desjardins and Chaib-draa (2011).

To the best of the authors' knowledge, there are no packages or solutions specially developed in Julia for self-driving cars. However, the machine learning algorithms used in self-driving cars can be implemented in the Julia language. The amount of data obtained by self-driving cars is huge and its processing is complex, yet self-driving cars have strict requirements on processing time. High-level languages such as Python and MATLAB are not as computationally efficient as Julia, which was developed specifically for high-performance computing. Therefore, we believe that Julia is strongly competitive as a programming language for autonomous vehicle platforms.

4.6 Graph Analytics

Graph analytics is a rapidly developing research field that combines graph theory, statistics, and database technology to model, store, retrieve, and analyze graph-structured data. Samsi et al. Samsi et al. (2017) used subgraph isomorphism to address earlier scalability difficulties in machine learning, high-performance computing, and visual analytics; serial implementations in C++, Python, Pandas, and MATLAB were produced, and their single-thread performance was measured.

In Julia, LightGraphs.jl is currently the most comprehensive library for graph analysis 4. LightGraphs.jl provides a set of simple, concrete graph implementations (including undirected and directed graphs) and APIs for developing more complex graph implementations under the AbstractGraph type.
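A minimal LightGraphs.jl example follows, building a small undirected graph and querying basic properties and shortest-path distances.

```julia
# Build a 5-vertex path-like graph and inspect it.
using LightGraphs

g = SimpleGraph(5)          # 5 vertices, no edges yet
add_edge!(g, 1, 2)
add_edge!(g, 2, 3)
add_edge!(g, 3, 4)

println(nv(g), " vertices, ", ne(g), " edges")
println(neighbors(g, 2))                        # [1, 3]
println(dijkstra_shortest_paths(g, 1).dists)    # distances from vertex 1
```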

4.7 Signal Processing

Signal processing in communications is a cornerstone of electrical engineering research and related fields Uengtrakul et al. (2014); Gideon et al. (2017). Python has natural advantages for analyzing complex signal data thanks to its numerous packages. In addition, collected signals must be processed before they can be analyzed; MATLAB provides many signal processing toolboxes, such as a spectrum analysis toolbox, a waveform viewer, and a filter design toolbox, making it a practical tool for signal data processing as well.

Current and emerging means of communication increasingly rely on the ability to extract patterns from large data sets to support reasoning and decision-making using machine learning algorithms, which calls for the use of the Julia language. For example, Srivastava et al. Srivastava et al. (2018) designed PROMISE, an end-to-end programmable mixed-signal accelerator for machine learning algorithms. PROMISE accepts machine learning algorithms described in Julia and generates PROMISE code, and it can combine multiple signals and accelerate machine learning algorithms.

4.8 Pattern Recognition

Pattern recognition is the automatic processing and interpretation of patterns by means of a computer using mathematical techniques Bishop (2006). With the development of computer technology, it has become possible to study the complex process of information processing, an important form of which is the recognition of the environment and objects by living organisms. The main research directions of pattern recognition are image processing and computer vision, speech information processing, medical diagnosis, and biometric authentication technology Milewski and Govindaraju (2008), and research in this area covers the mechanisms of human pattern recognition as well as effective computational methods.

Diabetes is a serious health problem that leads to many long-term complications, including renal, cardiovascular, and neurological ones. Machine learning algorithms have been applied in ICU settings but had not previously been applied to the diabetic population in the ICU. In one such study, all model fitting was performed with packages in the Julia programming language: a binomial logistic regression model was fitted with the GLM.jl package, with 70% of the sample data used for model creation and the remaining 30% used for model validation. Using binomial logistic regression, all variables were subjected to binary analysis for correlation with the mortality outcome, and the significant variables with p values less than 0.05 were combined for multivariate analysis Anand et al. (2018).
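The following hedged sketch mirrors that modeling step with GLM.jl on synthetic data; the variable names (age, apache, mortality) are illustrative, not those of the study.

```julia
# Fit a binomial logistic regression and inspect coefficient p values.
using GLM, DataFrames

n = 500
df = DataFrame(age = rand(40:90, n), apache = rand(0:50, n))
# synthetic binary outcome loosely tied to the covariates
df.mortality = [rand() < 1 / (1 + exp(-(0.03a + 0.05s - 4))) ? 1 : 0
                for (a, s) in zip(df.age, df.apache)]

m = glm(@formula(mortality ~ age + apache), df, Binomial(), LogitLink())
println(coeftable(m))   # coefficients with p values for variable selection
```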

5 Julia in Machine Learning: Open Issues

5.1 Overview

Since its release, Julia has addressed the "pain points" of many current programming languages and has won recognition from various disciplines. However, with the promotion of the Julia language and the steady increase in the number of users, it also faces many open issues; see Figure 7.

Figure 7: Open issues of Julia language

5.2 A Developing Language

Julia is a young and developing language. Although it has developed rapidly, its influence is still far less than that of other popular programming languages. After several version updates, Julia has become relatively stable, but many problems remain to be solved. Julia's grammar has changed considerably; although these changes improve performance or ease of expression, the differences also make it difficult for programs written for different versions to work together. Julia has built-in support for parallel computing, mainly at the level of processes, while multithreaded parallelism was still experimental in version 1.0 and earlier versions. One of Julia's great strengths is speed, but writing efficient code requires rethinking one's programming habits rather than simply copying code into Julia. For people who have just come into contact with Julia, its ease of use can cause them to overlook this problem, ultimately leading to unsatisfactory code efficiency.

5.3 Lack of Stable Development Tools

Currently, the commonly used editors and IDEs for the Julia language include 1) Juno (an Atom plugin), 2) Visual Studio Code (a VS Code extension), 3) Jupyter (a Jupyter kernel), and 4) JetBrains (an IntelliJ IDEA plugin). According to 5, Juno is currently the most popular editor. These editors and IDEs are extensions of third-party platforms, which made it possible to build development environments for Julia quickly in its early stages, but in the long run this is not a wise approach: users must configure Julia initially, and the final experience is not satisfactory. Programming languages such as MATLAB, Python, and C/C++ each have their own IDEs, which integrate code writing, analysis, compilation, and debugging. Although the existing editors and IDEs provide many excellent functions, a complete, Julia-specific IDE is very important.

5.4 Interfacing with Other Languages

Although most code in a Julia project can be written in Julia, many high-quality, mature numerical computing libraries are written in C and Fortran. To facilitate the use of existing code, Julia should also make it easy and efficient to call C/C++ and Fortran functions. In the field of machine learning, a large quantity of excellent code has been written in Python; if one wishes to port such code to Julia for maintenance within a short time, simple calling mechanisms are necessary and can greatly reduce the transition cost.

Currently, PyCall and ccall are used in Julia to invoke these existing libraries, but Julia still needs a more concise and general invocation method. More important is ensuring that such calls maintain the original execution efficiency, or the efficiency of native Julia code. At the same time, embedding Julia code in other languages is also very important: it would not only popularize Julia more quickly but also let researchers combine Julia's strengths to accomplish their tasks faster.
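Both mechanisms are illustrated by the following hedged sketch, which calls a C standard library function via ccall and a Python module via PyCall (the latter requires a Python installation).

```julia
# Calling the C math library's cos directly, with no wrapper:
c_cos = ccall((:cos, "libm"), Float64, (Float64,), 1.0)

# Calling Python's math module through PyCall:
using PyCall
math = pyimport("math")
py_cos = math.cos(1.0)

println(c_cos ≈ py_cos)   # true: both call into existing native code
```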

5.5 Limited Number of Third-party Packages

For a good programming language, the quantity and quality of third-party libraries are very important. For Python, 194,934 projects are registered in PyPI, while the number of Julia third-party libraries registered in Julia Observer is only approximately 2,600 Perkel (2019). The number of third-party libraries in Julia is increasing, but it remains small compared with other programming languages, and suitable libraries may not be available in some less popular areas.

Because Julia is still in its early stage of development, version updates are frequent, and program interfaces and syntax have changed considerably with each upgrade. Since the release of Julia 1.0, the language has become more mature and stable than in the past. However, many excellent third-party machine learning libraries were written before the release of Julia 1.0 and have not been updated to the new version in time; users need to carefully check whether a third-party library has been updated for the latest version of Julia to ensure its normal use. In addition, although Julia is designed for parallel programming, there are not many third-party libraries for parallel programming. Currently, the more commonly used such packages are CUDAnative.jl, CuArrays.jl, and JuliaDB.jl, but many functions in these packages are still in the testing stage.

Although Julia's libraries are not as rich as Python's, the prospects for development are promising, and the Julia organization publishes statistics on trends in the number of registered packages. Many scholars and technicians are committed to improving the Julia ecosystem. Rong et al. used Julia, Intel MKL, and the SPMP library to implement Sparso Rong et al. (2016), a sparse linear algebra context-driven optimization tool that can accelerate machine learning algorithms. Plumb et al. developed a Julia package for fast Fourier analysis, making it easier to employ in statistical machine learning algorithms Plumb et al. (2015).

6 Conclusions

This paper has systematically investigated the development status of the Julia language in the field of machine learning, including machine learning algorithms written in Julia, the application of the Julia language in machine learning and challenges faced by Julia. We find that: (1) Machine learning algorithms written in Julia are mainly supervised learning algorithms, and there are fewer algorithms for unsupervised learning. (2) The Julia language is widely used in seven popular machine learning research topics: pattern recognition, NLP, IoT data analysis, computer vision, autonomous driving, graph analytics, and signal processing. (3) There are far fewer available application packages than there are for other high-level languages, such as Python, which is Julia’s greatest challenge. The work of this paper provides a reference for Julia’s further development in the field of machine learning. We believe that with the gradual maturing of the Julia language and the development of related third-party packages, the Julia language will be a highly competitive programming language for machine learning.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This research was jointly supported by the National Natural Science Foundation of China (Grant Numbers: 11602235), and the Fundamental Research Funds for China Central Universities (Grant Numbers: 2652018091). The authors would like to thank the editor and the reviewers for their valuable comments.

Julia Package | Ref. | Primarily Supported Algorithm(s)
ForneyLab.jl | Cox et al. (2019) | Bayesian model
NearestNeighbors.jl | 7 | kNN
DecisionTree.jl | Seiferling et al. (2017) | Decision tree, regression tree, random forest
SVM.jl | Kebria et al. (2020) | SVM
MLJ.jl | Parmar et al. (2019) | SVM
LIBSVM.jl | 8 | SVM
ScikitLearn.jl | Gwak et al. (2019) | SVM; GMM (via the scikit-learn interface)
Regression.jl | Shan et al. (2019) | Regression analysis
EmpiricalRisk.jl | Arnold et al. (2019) | Regression analysis
Clustering.jl | Zhang et al. (2019) | k-means; hierarchical clustering
QuickShiftClustering.jl | Kabzan et al. (2019) | Hierarchical clustering
Kpax3.jl | Voulgaris (July 30, 2016) | Bi-clustering
MultivariateStats.jl | 12 | PCA; ICA
MXNet.jl | Yuren et al. (2018) | Deep learning
Knet.jl | 15 | Deep learning
Flux.jl | 16 | Deep learning
TensorFlow.jl | 17 | Deep learning
DiffEqFlux.jl | Chris et al. (2019) | ANN (neural differential equations)
DifferentialEquations.jl | 18 | ANN (with Flux)
BackpropNeuralNet.jl | Srajer et al. (2018) | ANN
Elm.jl | Ouyang et al. (2020) | ELM
Table 1: Commonly used Julia language packages

References

  1. Web Page. External Links: Link Cited by: §4.3.
  2. Web Page. External Links: Link Cited by: §4.3.
  3. Web Page. External Links: Link Cited by: §4.4.
  4. Web Page. External Links: Link Cited by: §4.6.
  5. Web Page. External Links: Link Cited by: §5.3.
  6. Web Page. External Links: Link Cited by: Figure 2.
  7. Web Page. External Links: Link Cited by: §3.2, Table 1.
  8. Web Page. External Links: Link Cited by: §3.2, Table 1.
  9. Web Page. External Links: Link Cited by: §3.3.
  10. Web Page. External Links: Link Cited by: §3.3.
  11. Web Page. External Links: Link Cited by: §3.3.
  12. Web Page. External Links: Link Cited by: §3.3, §3.3, Table 1.
  13. Web Page. External Links: Link Cited by: §3.3.
  14. Web Page. External Links: Link Cited by: §3.3.
  15. Web Page. External Links: Link Cited by: §3.4, Table 1.
  16. Web Page. External Links: Link Cited by: §3.4, Table 1.
  17. Web Page. External Links: Link Cited by: §3.4, Table 1.
  18. Web Page. External Links: Link Cited by: §3.4, Table 1.
  19. Web Page. External Links: Link Cited by: §4.2.
  20. Web Page. External Links: Link Cited by: §4.2.
  21. Machine learning for neuroirnaging with scikit-learn. Frontiers in Neuroinformatics 8. External Links: Document Cited by: §3.2.
  22. Predicting mortality in diabetic icu patients using machine learning and severity indices. AMIA Jt Summits Transl Sci Proc 2017, pp. 310–319. Cited by: §4.8.
  23. A survey on 3d object detection methods for autonomous driving applications. Ieee Transactions on Intelligent Transportation Systems 20 (10), pp. 3782–3795. External Links: Document Cited by: §3.2, Table 1.
  24. The internet of things: a survey. Computer Networks 54 (15), pp. 2787–2805. External Links: Document Cited by: §4.2.
  25. Hyperopt: a python library for model selection and hyperparameter optimization. 8 (1). Cited by: §3.2.
  26. Effective extensible programming: unleashing Julia on GPUs. IEEE Transactions on Parallel and Distributed Systems 30 (4), pp. 827–841. Cited by: §2.
  27. Julia: a fresh approach to numerical computing. SIAM Review 59 (1), pp. 65–98. Cited by: §1.
  28. Neural networks for pattern recognition. Book, Oxford University Press. Cited by: §3.3.
  29. Pattern recognition and machine learning. Book, Springer. Cited by: §4.8.
  30. Random forests. Machine Learning 45, pp. 5–32. Cited by: §3.2.
  31. A clustering package for nucleotide sequences using Laplacian eigenmaps and Gaussian mixture model. Computers in Biology and Medicine 93, pp. 66–74. Cited by: §3.3.
  32. Output-only computer vision based damage detection using phase-based optical flow and unscented Kalman filters. Engineering Structures 132, pp. 300–313. Cited by: §4.3.
  33. LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST) 2 (3), pp. 27. Cited by: §3.2.
  34. DiffEqFlux.jl - a Julia library for neural differential equations. arXiv preprint. Cited by: §3.4, Table 1.
  35. Multiple sequence alignment with hierarchical clustering. Nucleic Acids Research 16 (22), pp. 10881–10890. Cited by: §3.3.
  36. A factor graph approach to automated design of Bayesian signal processing algorithms. International Journal of Approximate Reasoning 104, pp. 185–204. Cited by: §3.2, Table 1.
  37. Autonomous driving in roundabout maneuvers using reinforcement learning with Q-learning. Electronics 8 (12), pp. 13. Cited by: §4.5.
  38. A design proposal for Gen: probabilistic programming with fast custom inference via code generation. pp. 57. Cited by: §3.2.
  39. Hierarchical stellar clusters in molecular clouds. Book Section In Star Clusters: Basic Galactic Building Blocks Throughout Time and Space, R. DeGrijs and J. R. D. Lepine (Eds.), IAU Symposium Proceedings Series, pp. 377–379. Cited by: §3.3.
  40. Orange: from experimental machine learning to interactive data mining. Book Section In Knowledge Discovery in Databases: PKDD 2004, Proceedings, J. F. Boulicaut, F. Esposito, F. Giannotti and D. Pedreschi (Eds.), Lecture Notes in Artificial Intelligence, Vol. 3202, pp. 537–539. Cited by: §3.2.
  41. Orange: data mining toolbox in Python. Journal of Machine Learning Research 14, pp. 2349–2353. Cited by: §3.2.
  42. Machine learning in medicine. Circulation 132 (20), pp. 1920–1930. Cited by: §1, §4.1.
  43. Cooperative adaptive cruise control: a reinforcement learning approach. IEEE Transactions on Intelligent Transportation Systems 12 (4), pp. 1248–1260. Cited by: §4.5.
  44. Distributed MCMC inference in Dirichlet process mixture models using Julia. pp. 525. Cited by: §1, §3.2.
  45. Fizzy: feature subset selection for metagenomics. BMC Bioinformatics 16 (1). Cited by: §3.4.
  46. A few useful things to know about machine learning. Communications of the ACM 55 (10), pp. 78–87. Cited by: §1, §4.1.
  47. Improving imbalanced learning through a heuristic oversampling method based on k-means and SMOTE. Information Sciences 465, pp. 1–20. Cited by: §3.3.
  48. Echo state network-based radio signal strength prediction for wireless communication in northern Namibia. IET Communications 11 (12), pp. 1920–1926. Cited by: §4.7.
  49. Semantic-based padding in convolutional neural networks for improving the performance in natural language processing. A case of study in sentiment analysis. Neurocomputing 378, pp. 315–323. Cited by: §4.4.
  50. Deep convolutional neural networks with transfer learning for computer vision-based data-driven pavement distress detection. Construction and Building Materials 157, pp. 322–330. Cited by: §4.3.
  51. Machine learning and computer vision approaches for phenotypic profiling. Journal of Cell Biology 216 (1), pp. 65–71. Cited by: §4.3.
  52. Internet of things (IoT): a vision, architectural elements, and future directions. Future Generation Computer Systems 29 (7), pp. 1645–1660. Cited by: §4.2.
  53. A review of intelligent self-driving vehicle software research. KSII Transactions on Internet and Information Systems 13 (11), pp. 5299–5320. Cited by: §3.2, Table 1.
  54. Advances in natural language processing. Science 349 (6245), pp. 261–266. Cited by: §4.4.
  55. Random decision forests. In Proceedings of the Third International Conference on Document Analysis and Recognition (Montreal, QC: IEEE), pp. 278–282. Cited by: §3.2.
  56. EmpiriciSN: re-sampling observed supernova/host galaxy populations using an XD Gaussian mixture model. Astronomical Journal 153 (6). Cited by: §3.3.
  57. Extreme learning machine: a new learning scheme of feedforward neural networks. Book Section In 2004 IEEE International Joint Conference on Neural Networks, Vols 1-4, Proceedings, IEEE International Joint Conference on Neural Networks (IJCNN), pp. 985–990. Cited by: §3.4.
  58. Designing an efficient parallel spectral clustering algorithm on multi-core processors in Julia. Journal of Parallel and Distributed Computing 138, pp. 211–221. Cited by: §2.
  59. PyGCluster, a novel hierarchical clustering approach. Bioinformatics 30 (6), pp. 896–898. Cited by: §3.3.
  60. Bayesian modeling with Gaussian processes using the MATLAB toolbox GPstuff (v3.3). arXiv preprint. Cited by: §3.2.
  61. GCNv2: efficient correspondence prediction for real-time SLAM. IEEE Robotics and Automation Letters 4, pp. 3505–3512. Cited by: §3.4.
  62. Hierarchical clustering schemes. Psychometrika 32 (3), pp. 241–254. Cited by: §3.3.
  63. Lingvo: a modular and scalable framework for sequence-to-sequence modeling. arXiv preprint. Cited by: §3.4.
  64. Machine learning: trends, perspectives, and prospects. Science 349 (6245), pp. 255–260. Cited by: §1, §4.1.
  65. An overview of free software tools for general data mining. Book, 2014 37th International Convention on Information and Communication Technology, Electronics and Microelectronics. Cited by: §3.2.
  66. Learning-based model predictive control for autonomous racing. IEEE Robotics and Automation Letters 4 (4), pp. 3363–3370. Cited by: §3.3, Table 1.
  67. Chameleon: hierarchical clustering using dynamic modeling. Computer 32 (8), pp. 68–75. Cited by: §3.3.
  68. Representational learning with ELMs for big data. IEEE Intelligent Systems 28 (6), pp. 31–34. Cited by: §3.4.
  69. Deep imitation learning for autonomous vehicles based on convolutional neural networks. IEEE/CAA Journal of Automatica Sinica 7 (1), pp. 82–95. Cited by: §3.2, §4.5, Table 1.
  70. FastBDT: a speed-optimized and cache-friendly implementation of stochastic gradient-boosted decision trees for multivariate classification. arXiv preprint arXiv:1609.06119. Cited by: §3.2.
  71. ImageNet classification with deep convolutional neural networks. Communications of the ACM 60 (6), pp. 84–90. Cited by: §3.4.
  72. LLVM: a compilation framework for lifelong program analysis and transformation. pp. 75–86. Cited by: §2.
  73. Deep learning. Nature 521 (7553), pp. 436–444. Cited by: §1, §3.4.
  74. The internet of things (IoT): applications, investments, and challenges for enterprises. Business Horizons 58 (4), pp. 431–440. Cited by: §4.2.
  75. Adaptive gradient methods with dynamic bound of learning rate. arXiv preprint. Cited by: §3.4.
  76. Deep representation learning for road detection using Siamese network. Multimedia Tools and Applications 78 (17), pp. 24269–24283. Cited by: §4.5.
  77. A survey of deep neural network architectures and their applications. Neurocomputing 234, pp. 11–26. Cited by: §4.4.
  78. BayesPy: variational Bayesian inference in Python. Journal of Machine Learning Research 17. Cited by: §3.2.
  79. Machine learning for internet of things data analysis: a survey. Digital Communications and Networks 4 (3), pp. 161–175. Cited by: §4.2.
  80. Augmentor: an image augmentation library for machine learning. arXiv preprint. Cited by: §3.4.
  81. Fast graph representation learning with PyTorch Geometric. arXiv preprint. Cited by: §3.4.
  82. A survey of internet of things (IoT) for geo-hazards prevention: applications, technologies, and challenges. IEEE Internet of Things Journal. Cited by: §4.2.
  83. ABrox - a user-friendly Python module for approximate Bayesian computation with a focus on model comparison. PLOS ONE 13 (3). Cited by: §3.2.
  84. Binarization and cleanup of handwritten text from carbon copy medical form images. Pattern Recognition 41 (4), pp. 1308–1315. Cited by: §4.8.
  85. Machine learning for science: state of the art and future prospects. Science 293 (5537), pp. 2051–2055. Cited by: §1.
  86. Deep learning for IoT big data and streaming analytics: a survey. IEEE Communications Surveys and Tutorials 20 (4), pp. 2923–2960. Cited by: §4.2.
  87. Fastcluster: fast hierarchical, agglomerative clustering routines for R and Python. Journal of Statistical Software 53 (9), pp. 1–18. Cited by: §3.3.
  88. Computer vision uncovers predictors of physical urban change. Proceedings of the National Academy of Sciences of the United States of America 114 (29), pp. 7571–7576. Cited by: §4.3.
  89. Dropout is a special case of the stochastic delta rule: faster and more accurate deep learning. arXiv preprint. Cited by: §3.4.
  90. Deep CNN-based real-time traffic light detector for self-driving vehicles. IEEE Transactions on Mobile Computing 19 (2), pp. 300–313. Cited by: §3.4, §4.5, Table 1.
  91. DeepRange: deep-learning-based object detection and ranging in autonomous driving. IET Intelligent Transport Systems 13 (8), pp. 1256–1264. Cited by: §3.2, §4.5, Table 1.
  92. Computer vision-based limestone rock-type classification using probabilistic neural network. Geoscience Frontiers 7 (1), pp. 53–60. Cited by: §4.3.
  93. PyMC: Bayesian stochastic modelling in Python. Journal of Statistical Software 35 (4), pp. 1–81. Cited by: §3.2.
  94. Scikit-learn: machine learning in Python. Journal of Machine Learning Research 12, pp. 2825–2830. Cited by: §3.2, §3.3, §3.3, §3.4.
  95. Julia: come for the syntax, stay for the speed. Nature 572 (7767), pp. 141–142. Cited by: §1, §5.5.
  96. SnFFT: a Julia toolkit for Fourier analysis of functions over permutations. Journal of Machine Learning Research 16, pp. 3469–3473. Cited by: §5.5.
  97. Aspect extraction for opinion mining with a deep convolutional neural network. Knowledge-Based Systems 108, pp. 42–49. Cited by: §4.4.
  98. Segmentation and tracking using colour mixture models. Cited by: §3.3.
  99. Three pitfalls to avoid in machine learning. Nature 572 (7767), pp. 27–29. Cited by: §1.
  100. Sparso: context-driven optimizations of sparse linear algebra. Book, 2016 International Conference on Parallel Architecture and Compilation Techniques. Cited by: §5.5.
  101. Static graph challenge: subgraph isomorphism. Book Section In 2017 IEEE High Performance Extreme Computing Conference, IEEE High Performance Extreme Computing Conference. Cited by: §4.6.
  102. Green streets - quantifying and mapping urban trees with street-level imagery and computer vision. Landscape and Urban Planning 165, pp. 93–101. Cited by: §3.2, §4.3, Table 1.
  103. Medical imaging processing on a big data platform using Python: experiences with heterogeneous and homogeneous architectures. Book Section In 2017 17th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, pp. 830–837. Cited by: §1.
  104. Pegasos: primal estimated sub-gradient solver for SVM. Mathematical Programming 127 (1), pp. 3–30. Cited by: §3.2.
  105. Pixel and feature level based domain adaptation for object detection in autonomous driving. Neurocomputing 367, pp. 31–38. Cited by: §3.2, Table 1.
  106. A benchmark of selected algorithmic differentiation tools on some problems in computer vision and machine learning. Optimization Methods and Software 33 (4-6), pp. 889–906. Cited by: §3.3, §3.4, §3.4, Table 1.
  107. PROMISE: an end-to-end design of a programmable mixed-signal accelerator for machine-learning algorithms. Book Section In 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture, pp. 43–56. Cited by: §4.7.
  108. A stochastic version of the delta rule. Physica D: Nonlinear Phenomena 42, pp. 265–272. Cited by: §3.4.
  109. PySSM: a Python module for Bayesian inference of linear Gaussian state space models. Journal of Statistical Software 57. Cited by: §3.2.
  110. Extreme learning machine for multilayer perceptron. IEEE Transactions on Neural Networks and Learning Systems 27 (4), pp. 809–821. Cited by: §3.4.
  111. BAMSE: Bayesian model selection for tumor phylogeny inference among multiple samples. BMC Bioinformatics 20. Cited by: §3.2.
  112. A cost efficient software defined radio receiver for demonstrating concepts in communication and signal processing using Python and RTL-SDR. Book Section In 2014 Fourth International Conference on Digital Information and Communication Technology and Its Applications, pp. 394–399. Cited by: §4.7.
  113. Land use and land cover classification of LISS-III satellite image using KNN and decision tree. Book, Proceedings of the 10th INDIACom - 2016 3rd International Conference on Computing for Sustainable Global Development. Cited by: §3.2, §3.3.
  114. The nature of statistical learning theory. Book, Springer Science and Business Media. Cited by: §3.2.
  115. Julia for data science. Book, Technics Publications, LLC; first edition. Cited by: §1, §3.3, Table 1.
  116. DLL: a fast deep neural network library. Book Section In Artificial Neural Networks in Pattern Recognition, ANNPR 2018, L. Pancioni, F. Schwenker and E. Trentin (Eds.), Lecture Notes in Artificial Intelligence, Vol. 11081, pp. 54–65. Cited by: §3.4.
  117. Commodity recommendation for users based on e-commerce data. Book, Proceedings of the 2018 2nd International Conference on Big Data Research. Cited by: §3.2.
  118. ALiPy: active learning in Python. arXiv preprint. Cited by: §3.4.
  119. Recent trends in deep learning based natural language processing. IEEE Computational Intelligence Magazine 13 (3), pp. 55–75. Cited by: §4.4.
  120. Optimized data fusion for kernel k-means clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence 34 (5), pp. 1031–1039. Cited by: §3.3.
  121. ZOOpt: toolbox for derivative-free optimization. arXiv preprint. Cited by: §3.4, Table 1.
  122. Deep learning empowered task offloading for mobile edge computing in urban informatics. IEEE Internet of Things Journal 6 (5), pp. 7635–7647. Cited by: §3.3, Table 1.
  123. BSMac: a MATLAB toolbox implementing a Bayesian spatial model for brain activation and connectivity. Journal of Neuroscience Methods 204 (1), pp. 133–143. Cited by: §3.2.
  124. Mask scoring R-CNN. In Proceedings of CVPR 2019. Cited by: §3.4.
  125. A review and tutorial of machine learning methods for microbiome host trait prediction. Frontiers in Genetics 10. Cited by: §3.2.