###### Abstract

We introduce a new method of performing high dimensional discriminant analysis, which we call multiDA. We achieve this by constructing a hybrid model that seamlessly integrates a multiclass diagonal discriminant analysis model and feature selection components. Our feature selection component naturally simplifies to weights which are simple functions of likelihood ratio statistics, allowing natural comparisons with traditional hypothesis testing methods. We provide heuristic arguments suggesting desirable asymptotic properties of our algorithm with regard to feature selection. We compare our method with several other approaches, showing marked improvements in prediction accuracy, interpretability of chosen features, and algorithm run time. We demonstrate these strengths of our model by showing strong classification performance on publicly available high dimensional datasets, as well as through multiple simulation studies. We make an R package available implementing our approach.

Diagonal Discriminant Analysis with Feature Selection for High Dimensional Data

Sarah E. Romanes, John T. Ormerod, and Jean Y.H. Yang

School of Mathematics and Statistics, University of Sydney, Sydney 2006, Australia

ARC Centre of Excellence for Mathematical & Statistical Frontiers,

The University of Melbourne, Parkville VIC 3010, Australia

The Judith and David Coffey Life Lab, Charles Perkins Centre,

University of Sydney, Sydney 2006, Australia

Keywords: Multiple hypothesis testing; classification; likelihood ratio tests; asymptotic properties of hypothesis tests; latent variables; feature selection.

## 1 Introduction

Classification problems involving high dimensional data are extensive in many fields such as finance, marketing, and bioinformatics. Unique challenges with high dimensional datasets are numerous and well known, with many classifiers built under traditional low dimensional frameworks simply unable to be applied to such high dimensional data. Discriminant Analysis (DA) is one such example (Fisher, 1936). DA classifiers work by assuming the distribution of the features is Gaussian at the class level, and assign a particular point to the class label which minimises the Mahalanobis distance (in the case of linear discriminant analysis (LDA)) between that point and the mean of the multivariate normal corresponding to that class. Although extraordinarily simple and easy to use in low dimensional settings, DA is well known to be unusable in high dimensions because the maximum likelihood estimate of the corresponding covariance matrix is singular when the number of features is greater than the number of observations.

One alternative to DA that permits use in high dimensions is known as diagonal DA (also known as naïve Bayes) (Friedman, 1989; Dudoit et al., 2002; Bickel and Levina, 2004), which makes the simplifying assumption that all features are independent. As such, the resultant covariance matrix is diagonal, circumventing the aforementioned issues associated with singular covariance matrix estimates. While this assumption is strong, Bickel and Levina (2004) show that diagonal DA can still classify well in high dimensional settings even when there is dependence between predictors. Another significant challenge in high dimensional frameworks is feature selection, which matters not only for improving the accuracy of classifiers, but also for facilitating interpretable applications of results. One obvious drawback of diagonal DA is that it depends on all available features, and as such is not informative as to which features are important.

As such, many variants of DA have been developed in order to integrate feature selection, many resulting in classifiers that involve a sparse linear combination of the features. Pang et al. (2009) improved diagonal DA through shrinkage and regularisation of variances, whilst others have achieved sparsity through regularising the log-likelihood for the normal model or the optimal scoring problem using an $\ell_1$ penalty (see Tibshirani et al., 2003; Leng, 2008; Clemmensen et al., 2011). Witten and Tibshirani (2011) approached the problem through Fisher's discrimination framework, reshaping Fisher's discriminant problem as a biconvex problem that can be optimised using a single iterative algorithm. Whilst classifiers such as these offer significant improvements to the original diagonal DA algorithm, their successful implementation in many cases is dependent upon tuning parameters. Tuning these parameters properly requires either knowledge of the true sparsity of the model parameters or, more commonly, cross validation, which increases the run time of these procedures. Furthermore, many of these methods assume equality of variances between the classes, and as such do not perform well when this assumption is not met.

An alternative (and possibly the simplest) approach to feature selection for diagonal DA classifiers is to recast the problem as one focused on multiple hypothesis testing, with such testing performed across all features and information about their significance used to drive classification. Not only does this approach have advantages in its conceptual simplicity, it also generates intuitive insights into the features that drive prediction. However, as the number of tests and inferences made across the features increases, so too does the probability of generating erroneous significance results by chance alone, especially in a high dimensional feature space. Many procedures have been developed to control Type I errors, with the False Discovery Rate (the expected proportion of Type I errors among the rejected hypotheses) and the Family Wise Error Rate (the probability of at least one Type I error) being the most widely used control methods (see Bonferroni, 1936; Holm, 1979; Benjamini and Hochberg, 1995). Alternatively, one may control the number of significant features within the multiple hypothesis testing framework by penalising the likelihood ratio test statistic between the hypotheses being compared. Not only does this control the number of selected features, it also allows the features to be weighted meaningfully for prediction, rather than weighting all significant features equally.

Many other classifiers exist altogether outside the realm of DA to tackle high dimensional problems. These include, but are not limited to: Random Forest (Breiman, 2001), Support Vector Machines (SVM) (Cortes and Vapnik, 1995), K-Nearest Neighbours (KNN) (Cover and Hart, 1967), and logistic regression using LASSO regularisation (Tibshirani, 1996). Although some of these classifiers have demonstrated considerable predictive performance, many lack insight into the features driving their predictive processes. For example, an ensemble learner built using Random Forest with even a moderate tree depth of 5, say, may already have hundreds of nodes, and whilst such a model may have strong predictive abilities, using it to explain driving features is almost impossible. The importance of interpretable classifiers should not be understated: in many fields, such as clinical practice, complex machine learning approaches often cannot be used because their predictions are difficult to interpret and hence are not actionable (Lundberg et al., 2017).

In this paper we propose a new high dimensional classifier, utilising the desirable generative properties of diagonal DA and improved through treating feature selection as a multiple hypothesis testing problem, resulting not only in a fast and effective classifier, but also providing intuitive and interpretable insights into the features driving prediction. We call this method multiDA. Furthermore, we are able to relax the assumption of equality of group variances. We achieve this by:

1. Setting up a probabilistic framework to consider an exhaustive set of class partitions for each variable;

2. Utilising latent variables in order to facilitate inference for determining discriminative features, with a penalised likelihood ratio test statistic estimated through maximum likelihood estimates providing the foundation of our multiple hypothesis testing approach to feature selection; and

3. Using estimated weights for each variable to provide predictions of class labels effectively.

Our multiple hypothesis testing paradigm is different to those methods which seek to control Type I errors, e.g., controlling the False Discovery Rate or Family Wise Error Rate. Instead we attempt to achieve Chernoff consistency, where we aim to have type I and type II errors that are asymptotically zero as the sample size grows (see Chapter 2 of Shao, 2003, for a formal definition). We will also provide heuristic asymptotic arguments for the Chernoff consistency of the multiple hypothesis framework utilised for feature selection (see Appendix A). Our latent variable approach is similar in spirit to the Expectation Maximization Variable Selection (EMVS) approach of Ročková and George (2014), who use latent variables in the context of variable selection in linear models.

Lastly, we develop an efficient R package, named multiDA, implementing our approach. It is available from the following web address:

http://www.maths.usyd.edu.au/u/jormerod/

The outline of this paper is as follows. In Section 2, we discuss the role of hypothesis testing in selecting discriminative features. In Section 3 we introduce our multiDA model, in Section 4 we discuss the model estimation procedure, and in Section 5 we discuss some theoretical considerations for our model. Further, in Section 6 we demonstrate the performance of our model under simulation and also with publicly available datasets.

## 2 Identifying discriminative features

Consider data which may be summarized by an $n\times p$ matrix $X$ with elements $x_{ij}$, where $x_{ij}$ denotes the value for sample $i$ of feature $j$. For $K$ classes, class labels are stored in the $n$-vector $y$ whose elements are integers ranging from $1$ to $K$, with $n_k$ denoting the number of observations belonging to class $k$. Further, let $Y$ be a matrix of dimension $n\times K$, with $Y_{ik}$ a binary variable indicating whether observation $i$ is in class $k$, and $\sum_{k=1}^K Y_{ik}=1$. We assume that class labels are assigned to all observations, and combined with the matrix $X$ form what is referred to in the machine learning literature as the training dataset.

Recall that for diagonal DA, we assume the features are conditionally independent given the class label. As such, the likelihood (assuming equality of variances between the groups being compared) is simply:

$$p(X,Y;\theta)=\prod_{i=1}^{n}\left[\prod_{j=1}^{p}\prod_{k=1}^{K}\phi(x_{ij};\mu_{jk},\sigma^2_{jk})^{Y_{ik}}\times\prod_{k=1}^{K}\pi_k^{Y_{ik}}\right]\tag{1}$$

where $\phi(\,\cdot\,;\mu,\sigma^2)$ represents a Gaussian density, with $\mu_{jk}$ being the mean of feature $j$ in observations of class $k$, and $\sigma^2_{jk}$ its variance. Further, $\pi_k$ represents the class prior, that is, the probability of an observation being in class $k$, with $0\le\pi_k\le 1$ and $\sum_{k=1}^K\pi_k=1$.
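To make the diagonal assumption concrete, the following is an illustrative Python sketch of fitting and predicting with a diagonal DA classifier of the form (1). The paper's software is an R package; the function names here are hypothetical and for exposition only.

```python
import numpy as np

def log_gaussian(x, mu, var):
    # Log-density of a univariate Gaussian, evaluated feature-by-feature
    # under the diagonal (independence) assumption.
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def diagonal_da_fit(X, y, K):
    # Per-class, per-feature means and variances plus class priors;
    # the diagonal assumption means no covariances are estimated.
    n, p = X.shape
    mu, var, pi = np.zeros((K, p)), np.zeros((K, p)), np.zeros(K)
    for k in range(K):
        Xk = X[y == k]
        mu[k] = Xk.mean(axis=0)
        var[k] = Xk.var(axis=0)   # MLE (divide by n_k)
        pi[k] = len(Xk) / n
    return mu, var, pi

def diagonal_da_predict(x, mu, var, pi):
    # Class score: sum of per-feature log-densities plus log prior.
    scores = log_gaussian(x, mu, var).sum(axis=1) + np.log(pi)
    return int(np.argmax(scores))
```

Because only $2Kp$ Gaussian parameters and $K$ priors are estimated, the fit is well defined even when $p\gg n$.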

We will later introduce a latent variable extension to (A) handle multiple hypothesis testing within each variable, (B) handle multiple hypothesis testing across variables, (C) provide theory showing how Chernoff consistency can be maintained, and (D) show how predictions can be made. However, before we do this we motivate our approach by describing how hypothesis testing can be used to identify discriminative features, first when there are two classes, then three classes, and then the general case.

### 2.1 Two classes

Consider a single feature $j$, and suppose that $K=2$. We will assume that the $x_{ij}$ are normally distributed, given class labels. For simplicity, we will assume that the variances are the same for each class (later we will relax this assumption). In its most basic form, one can set up the hypothesis testing framework as a simple two sample $t$-test, that is, to test the claim that the group means are equal for a particular feature $j$, versus the alternative that they are not. The corresponding hypotheses are

$$H_{j0}:\mu_{j1}=\mu_{j2}\quad\text{versus}\quad H_{j1}:\mu_{j1}\neq\mu_{j2},$$

where $\mu_{j1}$ and $\mu_{j2}$ are the class means for class 1 and class 2 respectively, as illustrated in Figure 1.

It is clear that when comparing two classes, there is only one way to partition the groupings to determine discriminative features: either the group means are equal (non discriminative), or they are not (discriminative). However, when the number of classes increases beyond the binary case, the number of groupings is more nuanced.
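A minimal sketch of this two-class screening idea follows, in illustrative Python (the helper names are hypothetical). A pooled-variance two-sample $t$ statistic is computed per feature, and a fixed threshold stands in for a formal multiple-testing correction:

```python
import numpy as np

def two_sample_t(x1, x2):
    # Pooled-variance two-sample t statistic for H_j0: mu_j1 = mu_j2.
    n1, n2 = len(x1), len(x2)
    sp2 = ((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1)) / (n1 + n2 - 2)
    return (x1.mean() - x2.mean()) / np.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))

def screen_features(X, y, threshold):
    # Flag feature j as discriminative when |t_j| exceeds the threshold;
    # in practice the threshold would come from a multiplicity correction.
    return [j for j in range(X.shape[1])
            if abs(two_sample_t(X[y == 0, j], X[y == 1, j])) > threshold]
```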

### 2.2 Three classes

Assuming equal variances and $K=3$, there are four different ways that a feature can be discriminative and one way a feature can be non-discriminative. The non-discriminative case corresponds to the case where the mean for each class is the same. The discriminative cases are

• One of the class means is different from the other two class means (there are 3 ways that this can happen); and

• Each of the class means is different from each other (there is one way that this can happen).

We will capture the ways that the classes can be grouped together via a $K\times M$ matrix $S$, where $M$ is the number of hypothesised models. For the above situation the corresponding matrix is of the form

$$S=\begin{bmatrix}1&1&1&2&1\\1&2&1&1&2\\1&1&2&1&3\end{bmatrix}.\tag{2}$$

Consider the notation for model parameters for the case $K=3$. For each of the hypotheses $m=1,\ldots,M$ for variable $j$ we have a single variance $\sigma^2_{jm}$ for each hypothesis. Define the vector $G=(G_1,\ldots,G_M)$ as the number of partitions in each column of the set partition matrix $S$. In the case of (2) we have $G=(1,2,2,2,3)$. Next we let $\mu_{jmg}$ denote the mean corresponding to variable $j$, for the $m$th hypothesis, in the $g$th group. In order to break down the parametrisation of the class means consider the cases:

• When $m=1$, the 1st column of $S$, we have one group (so that $G_1=1$) and one mean $\mu_{j11}$. This corresponds to the null case where there are no differences in the distribution between classes for variable $j$, i.e., variable $j$ is non-discriminative. The corresponding hypothesis is

$$H_{j1}: x_{ij}\sim\phi(\,\cdot\,;\mu_{j11},\sigma^2_{j1}).$$
• When $m=2$, the 2nd column of $S$, we have two groups (so that $G_2=2$) with two means $\mu_{j21}$ and $\mu_{j22}$. The corresponding hypothesis is

$$H_{j2}: x_{ij}\sim\phi(\,\cdot\,;\mu_{j21},\sigma^2_{j2})^{I(y_i\in\{1,3\})}\,\phi(\,\cdot\,;\mu_{j22},\sigma^2_{j2})^{I(y_i=2)}.$$

Similarly for $m=3$ and $m=4$.

• When $m=5$, the 5th column of $S$, each class has a different mean (so that $G_5=3$) with three means $\mu_{j51}$, $\mu_{j52}$ and $\mu_{j53}$. The corresponding hypothesis is

$$H_{j5}: x_{ij}\sim\phi(\,\cdot\,;\mu_{j51},\sigma^2_{j5})^{I(y_i=1)}\,\phi(\,\cdot\,;\mu_{j52},\sigma^2_{j5})^{I(y_i=2)}\,\phi(\,\cdot\,;\mu_{j53},\sigma^2_{j5})^{I(y_i=3)}.$$

### 2.3 More than three classes

For a particular feature $j$, it is clear that for $K>2$ there are multiple alternate hypotheses to consider when testing against the null, with there being $B_K$ (the Bell number of order $K$) ways one could partition $K$ objects (for an R implementation, see Hankin, 2006). Let $\gamma_{jm}$ be a binary variable indicating whether feature $j$ belongs in partition set $m$, where $m=1,\ldots,M$. We note that the first column of $S$ always corresponds to the null case (i.e., no differences between the classes for feature $j$), with the columns increasing in degrees of freedom. We also note that all of the columns are nested within the last column, corresponding to the case in which all the classes are different for a particular feature $j$. Further, define $d=(d_1,\ldots,d_M)$ as a vector describing the degrees of freedom of each partition, relative to the null. For example, in (2) we have $d_1=0$, $d_2=d_3=d_4=1$, and $d_5=2$.
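The count $B_K$ can be checked by direct enumeration of set partitions. The sketch below (illustrative Python; the cited R implementation is Hankin, 2006) builds every partition recursively, so counting the yields gives the Bell number:

```python
def set_partitions(items):
    # Recursively enumerate all set partitions of `items`; the number
    # of partitions of K items is the Bell number B_K.
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in set_partitions(rest):
        # place `first` into each existing block ...
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        # ... or into a new block of its own
        yield [[first]] + part

def bell(K):
    return sum(1 for _ in set_partitions(list(range(1, K + 1))))
```

For example, `bell(3)` recovers the five columns of (2), and `bell(10)` recovers the figure quoted later in Section 2.4.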

In general we will use the following notation for model parameters for an arbitrary partition matrix $S$. The hypotheses for variable $j$ are of the form

$$H_{jm}: x_{ij}\stackrel{\text{iid}}{\sim} p_{jm}(\,\cdot\,|Y_i;\theta_{jm})$$

where

$$p_{jm}(x_{ij}|Y_i;\theta_{jm})=\prod_{g=1}^{G_m}\phi(x_{ij};\mu_{jmg},\sigma^2_{jm})^{I(Y_i^\top s_m=g)}\tag{3}$$

and $\theta_{jm}$ is a vector consisting of all of the $\mu_{jmg}$'s and $\sigma^2_{jm}$'s. The above uses the fact that $Y_i^\top s_m$, where $s_m$ denotes the $m$th column of $S$, can be used to determine which of the means should be used for sample $i$ and model $m$.

### 2.4 Restricting the number of hypotheses

Note that we can specify many alternate options for $S$, as $B_K$ can grow very quickly. One popular option for multi-class classification algorithms is to consider the one vs. rest approach, in which the multi-class problem reduces to multiple "binary" comparisons by restricting $S$ to those partitions for which $G_m\le 2$. In that case $M$ is reduced, with, for the previous example, $d_1=0$ and $d_2=d_3=d_4=1$.

$$S=\begin{bmatrix}1&1&1&2\\1&2&1&1\\1&1&2&1\end{bmatrix}.$$

This reduction in size is particularly useful when $K$ is large. For example, suppose $K=10$. For the full set partition matrix $M=B_{10}=115975$, whereas for the one vs. rest case $M=11$, heavily reducing computational time throughout our resultant algorithm.

Another possible configuration for $S$ is to only consider groupings which are ordinal. This may be particularly useful for datasets in which the classes are ordinal in nature, say, classifying stages of cancer development. An ordinal configuration of $S$ allows some reduction in computational time, whilst leveraging useful information about the classes. In this case, $S$ reduces to:

$$S=\begin{bmatrix}1&1&2&1\\1&1&1&2\\1&2&1&3\end{bmatrix}.$$

If the classes have an ordinal structure, such as stages of cancer, we might assume that the mean can change from stage 1 to stage 2, but not revert back to the stage 1 mean for class 3. This removes the column $(1,2,1)^\top$ from (2), since under that model class 1 and class 3 are grouped together with class 2 having a different mean.
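The ordinal restriction keeps only partitions whose groups are contiguous runs of class labels, so each admissible column of $S$ is determined by a set of cut points between adjacent classes, giving $2^{K-1}$ columns (including the null). A hedged Python sketch, with group labels assigned left to right (so columns match the paper's up to relabelling of group indices):

```python
from itertools import combinations

def ordinal_partition_columns(K):
    # Enumerate the columns of S allowed under the ordinal restriction:
    # a column is fixed by choosing cut points between adjacent classes.
    cols = []
    for r in range(K):                       # r = number of cut points
        for cuts in combinations(range(1, K), r):
            col, g = [], 1
            for k in range(1, K + 1):
                if k - 1 in cuts:            # a cut just before class k starts a new group
                    g += 1
                col.append(g)
            cols.append(col)
    return cols
```

For $K=3$ this returns four columns, matching the ordinal $S$ above.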

### 2.5 Heterogeneity of group variances

In many previously published DA papers, the group specific variances are assumed to be equal. This assumption leads to what is known as a Linear Discriminant Analysis (LDA) classifier, since the resultant classification boundary is linear in the predictor variables. We will refer to our implementation of LDA as multiLDA. However, if we generalise and do not make this assumption, the resultant classification boundary is instead quadratic and our classifier is in the form of what is known as a Quadratic Discriminant Analysis (QDA) classifier. The QDA case requires $G_m$ variance estimates for each feature and hypothesis (as opposed to the single estimate required under the multiLDA framework). The model when assuming unequal variances is:

$$p_{jm}(x_{ij}|Y_i;\theta_{jm})=\prod_{g=1}^{G_m}\phi(x_{ij};\mu_{jmg},\sigma^2_{jmg})^{I(Y_i^\top s_m=g)}$$

and will be referred to as multiQDA. Figure 2 illustrates some simple hypotheses we consider in this paper, including cases in which variances are not assumed to be equal.

## 3 The multiDA model

We now describe our multiDA model, which is capable of handling not only multiple hypotheses across variables, but also multiple hypotheses within variables. When adapting the diagonal DA model to account for multiple hypothesis testing, the representation of the likelihood is more complicated than the diagonal DA likelihood described previously. We now take the product over the number of hypotheses to be tested, and account for the different number of normal distributions in each hypothesis test. Accounting for this, we consider fitting a model of the form:

$$p(X;\theta|Y,\Gamma)=\prod_{j=1}^{p}\prod_{m=1}^{M}\left[p_{jm}(x_j|Y;\theta_{jm})\right]^{\gamma_{jm}}\tag{4}$$

where

$$(Y_{i1},\ldots,Y_{iK})\,|\,\pi\sim\text{Multinomial}(1;\pi_1,\ldots,\pi_K),$$

with $\pi=(\pi_1,\ldots,\pi_K)$ being the class probabilities to be estimated, and the $\gamma_{jm}$ are latent variables satisfying

$$(\gamma_{j1},\ldots,\gamma_{jM})\,|\,\rho\sim\text{Multinomial}(1;\rho_1,\ldots,\rho_M),$$

with $\rho=(\rho_1,\ldots,\rho_M)$ and $\sum_{m=1}^M\rho_m=1$, and $p_{jm}(x_j|Y;\theta_{jm})$ is a model likelihood for variable $j$ and model $m$ with corresponding parameters $\theta_{jm}$. The $\gamma_{jm}$ play the role of selecting which of the $M$ models is used for variable $j$. Here $\Gamma$ is the $p\times M$ matrix of the $\gamma_{jm}$'s. The $\rho_m$ are prior probabilities for the $m$th hypothesis, in which we are implicitly assuming that the same set of hypotheses is being tested for each $j$. We will treat the prior hyperparameters $\rho$ as tuning parameters set by the user (discussed later). We can now fully represent our model as:

$$p(X,Y,\Gamma;\vartheta)=\left[\prod_{j=1}^{p}\prod_{m=1}^{M}\left\{p_{jm}(x_j|Y;\theta_{jm})\right\}^{\gamma_{jm}}\right]\times\left[\prod_{i=1}^{n}\prod_{k=1}^{K}\pi_k^{Y_{ik}}\right]\times\left[\prod_{j=1}^{p}\prod_{m=1}^{M}\rho_m^{\gamma_{jm}}\right]\tag{5}$$

where $\vartheta=(\theta,\pi,\rho)$ denotes the likelihood parameters to be estimated.

## 4 Model estimation

The first goal of multiDA is to determine which features partition the data into meaningful groups, that is, estimating the posterior probabilities of the $\gamma_{jm}$'s. To do this, we will fit the model using maximum likelihood over $\vartheta$. The likelihood is given by

$$L(\vartheta)=\sum_{\Gamma}p(X|Y,\Gamma;\theta)\,p(Y;\pi)\,p(\Gamma;\rho),$$

where $\sum_\Gamma$ denotes summation over all possible values of $\Gamma$.

Note that for any fixed value of $\Gamma$, the value of $\theta_{jm}$ which maximises $L(\vartheta)$ is the same. Hence, the values of $\theta_{jm}$ which maximise $L(\vartheta)$ are given by

$$\hat{\theta}_{jm}=\operatorname*{argmax}_{\theta_{jm}}\left\{p_{jm}(x_j|Y;\theta_{jm})\right\}$$

where $1\le j\le p$ and $1\le m\le M$.

For the multiLDA case the explicit log-likelihood given $Y$ and $\Gamma$ is

$$\log L(\vartheta|Y,\Gamma)=\sum_{j=1}^{p}\sum_{m=1}^{M}\gamma_{jm}\left\{\sum_{g=1}^{G_m}\sum_{i\in A_{mg}}\log\phi(x_{ij};\mu_{jmg},\sigma^2_{jm})+\log\rho_m\right\}+\sum_{k=1}^{K}n_k\log\pi_k,$$

where $A_{mg}=\{i\colon Y_i^\top s_m=g\}$ and $n_k=\sum_{i=1}^{n}Y_{ik}$. Next, let

$$n_{mg}=|A_{mg}|,\qquad x_{\bullet jmg}=\sum_{i\in A_{mg}}x_{ij}\qquad\text{and}\qquad x^2_{\bullet jmg}=\sum_{i\in A_{mg}}x_{ij}^2.$$

Then the MLEs for the $\mu_{jmg}$'s and $\pi_k$'s may be written as

$$\hat{\mu}_{jmg}=\frac{x_{\bullet jmg}}{n_{mg}}\qquad\text{and}\qquad\hat{\pi}_k=\frac{n_k}{n}$$

respectively, with $1\le g\le G_m$ and $1\le k\le K$. The MLEs for the $\sigma^2$'s for the multiLDA and multiQDA cases are

$$\hat{\sigma}^2_{jm}=\frac{1}{n}\left[\sum_{g=1}^{G_m}\left\{x^2_{\bullet jmg}-\frac{(x_{\bullet jmg})^2}{n_{mg}}\right\}\right]\qquad\text{and}\qquad\hat{\sigma}^2_{jmg}=\frac{1}{n_{mg}}\left[x^2_{\bullet jmg}-\frac{(x_{\bullet jmg})^2}{n_{mg}}\right]$$

respectively, for each index $j$, $m$ and $g$.
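The group-wise sufficient statistics and the two variance estimators can be sketched for a single feature and a single partition column as follows (illustrative Python; `fit_partition` is a hypothetical name, and groups are 0-indexed here for convenience):

```python
import numpy as np

def fit_partition(x, groups):
    # x: values of one feature; groups: group index g(i) induced by a
    # column s_m of S. Returns group means, the pooled (multiLDA-style)
    # variance and the group-specific (multiQDA-style) variances.
    n = len(x)
    G = groups.max() + 1
    mu = np.array([x[groups == g].mean() for g in range(G)])
    # within-group sums of squares, i.e. x2_dot - (x_dot)^2 / n_mg
    ss = np.array([((x[groups == g] - mu[g]) ** 2).sum() for g in range(G)])
    n_g = np.array([(groups == g).sum() for g in range(G)])
    var_lda = ss.sum() / n        # single pooled variance per hypothesis
    var_qda = ss / n_g            # one variance per group per hypothesis
    return mu, var_lda, var_qda
```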

### 4.1 Fitting the latent parameters

The latent parameters $\gamma_{jm}$ are indicators for each hypothesis test $m$ and for each variable $j$. We will use the posterior probabilities $P(\gamma_{jm}=1|x_j,Y)$ to determine which hypothesis is most likely. These posterior probabilities become variable specific weights which alter the contribution of each feature when making predictions. Next we note that

$$\log p(X,\Gamma|Y;\hat{\vartheta})=\sum_{j=1}^{p}\left[\sum_{m=1}^{M}\gamma_{jm}\left\{\tfrac{1}{2}\lambda_{jm}(x_j,Y)+\log(\rho_m/\rho_1)\right\}+\log p_{j1}(x_j|Y;\hat{\theta}_{j1})+\log\rho_1\right],$$

where $\lambda_{jm}(x_j,Y)=2\log\left\{p_{jm}(x_j|Y;\hat{\theta}_{jm})/p_{j1}(x_j|Y;\hat{\theta}_{j1})\right\}$ are likelihood ratio test (LRT) statistics. Note that $\lambda_{j1}=0$ for each $j$, with $m=1$ serving as the "null" hypothesis for each variable (that the distribution of $x_{ij}$ is the same for each class). For the multiLDA case, we have

$$\lambda_{jm}(x_j,Y)=n\log(\hat{\sigma}^2_{j1})-n\log(\hat{\sigma}^2_{jm}),$$

while for multiQDA we have

$$\lambda_{jm}(x_j,Y)=n\log(\hat{\sigma}^2_{j1})-\sum_{g=1}^{G_m}n_{mg}\log(\hat{\sigma}^2_{jmg}),$$

then

$$\hat{\gamma}_{jm}(x_j,Y)=P(\gamma_{jm}=1|x_j,Y;\hat{\vartheta})=\frac{\exp\left[\tfrac{1}{2}\lambda_{jm}(x_j,Y)+\log(\rho_m/\rho_1)\right]}{\displaystyle\sum_{\ell=1}^{M}\exp\left[\tfrac{1}{2}\lambda_{j\ell}(x_j,Y)+\log(\rho_\ell/\rho_1)\right]}.\tag{6}$$

As such, the $\hat{\gamma}_{jm}$ can be thought of as functions of penalised likelihood ratio test statistics, with penalisation provided by the value of $\log(\rho_m/\rho_1)$. The choice of this penalty trades off type I and type II errors, which we discuss further in Section 5.
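Computationally, the posterior weights are a softmax over the penalised half-LRT statistics, which is worth implementing in a numerically stable way. An illustrative Python sketch (hypothetical helper name):

```python
import numpy as np

def gamma_weights(lam, log_rho):
    # lam: LRT statistics for one variable (lam[0] = 0 for the null column);
    # log_rho: log prior probabilities log(rho_m).
    # Posterior hypothesis weights: softmax of 0.5*lambda_jm + log(rho_m/rho_1).
    lam = np.asarray(lam, dtype=float)
    log_rho = np.asarray(log_rho, dtype=float)
    z = 0.5 * lam + (log_rho - log_rho[0])
    z -= z.max()                   # stabilise before exponentiating
    w = np.exp(z)
    return w / w.sum()
```

With a flat prior and a large LRT statistic the weight concentrates on the alternative; with all statistics zero it falls back to the prior.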

### 4.2 Prediction of new unlabelled data points

Let us now consider the problem of predicting the class vector $Y^*$ for a new $p$-vector of predictors $x^*$. To perform this task we consider the predictive distribution

$$p(Y^*|X,Y,x^*;\hat{\vartheta})=\sum_{\Gamma}p(\Gamma|X,Y;\hat{\vartheta})\,p(Y^*|X,Y,x^*,\Gamma;\hat{\vartheta}).\tag{7}$$

We note that the above sum is computationally intractable. Instead of evaluating this sum exactly we note that

$$\log p(Y^*|X,Y,x^*;\hat{\vartheta})=\log\left[\sum_{\Gamma}p(\Gamma|X,Y;\hat{\vartheta})\,p(Y^*|X,Y,x^*,\Gamma;\hat{\vartheta})\right]\ge\sum_{\Gamma}p(\Gamma|X,Y;\hat{\vartheta})\log p(Y^*|X,Y,x^*,\Gamma;\hat{\vartheta})\equiv\log\underline{p}(Y^*|X,Y,x^*;\hat{\vartheta})$$

where we have used Jensen's inequality to obtain a tractable lower bound approximation to (7).

Next, define $c=(c_1,\ldots,c_M)$ as the cumulative sum of the number of groups in each column of $S$, i.e., $c_m=\sum_{\ell=1}^{m}G_\ell$ (with $c_0=0$). Define the allocation matrix $A$, of the same dimension as $S$, whose elements are given by $a_{km}=c_{m-1}+s_{km}$. This allocation matrix identifies which $\mu$ (and, if multiQDA, $\sigma^2$) components belong to each class. For example, for $K=3$ and the respective $S$ in (2), the resultant $A$ is given by:

$$A=\begin{bmatrix}1&2&4&7&8\\1&3&4&6&9\\1&2&5&6&10\end{bmatrix}.$$
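The construction of $A$ from $S$ is a one-liner once the per-column group counts are known. A sketch in illustrative Python, assuming group labels in $S$ start at 1 as in the paper:

```python
import numpy as np

def allocation_matrix(S):
    # S: K x M set partition matrix (group labels starting at 1).
    # Offset each column by the cumulative number of groups in the
    # preceding columns, so every (hypothesis, group) mean receives a
    # unique flat index.
    G = S.max(axis=0)                              # groups per column
    offsets = np.concatenate(([0], np.cumsum(G)[:-1]))
    return S + offsets                             # broadcast across rows
```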

For multiLDA we then have, for a single prediction, that $Y^*$ corresponds to a particular class label with approximate probability

$$\underline{p}(Y^*|X,Y,x^*;\hat{\vartheta})\propto\exp\left[\sum_{k=1}^{K}Y^*_k\left\{\sum_{j=1}^{p}\sum_{m=1}^{M}\hat{\gamma}_{jm}\log\phi(x^*_j;\hat{\mu}_{j a_{km}},\hat{\sigma}^2_{jm})+\log\hat{\pi}_k\right\}\right].$$

Let

$$\eta_k=\sum_{j=1}^{p}\sum_{m=1}^{M}\hat{\gamma}_{jm}\log\phi(x^*_j;\hat{\mu}_{j a_{km}},\hat{\sigma}^2_{jm})+\log\hat{\pi}_k.$$

For multiQDA, replace $\hat{\sigma}^2_{jm}$ with $\hat{\sigma}^2_{j a_{km}}$ in the above two equations for $\underline{p}(Y^*|X,Y,x^*;\hat{\vartheta})$ and $\eta_k$. Lastly, since $Y^*$ follows a multinomial distribution, we have

$$Y^*|X,Y,x^*;\hat{\vartheta}\ \stackrel{\text{approx.}}{\sim}\ \text{Multinomial}(1;\hat{Y}^*_1,\ldots,\hat{Y}^*_K)$$

where $\hat{Y}^*_k=\exp(\eta_k)\big/\sum_{\ell=1}^{K}\exp(\eta_\ell)$.
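The prediction step can be sketched end to end for the multiLDA case (illustrative Python; names hypothetical, with the allocation matrix $A$ 1-based as in the paper and means stored in a flat array per feature):

```python
import numpy as np

def log_phi(x, mu, var):
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def predict_probs(x_new, gamma, mu_flat, var, A, log_pi):
    # gamma: (p, M) posterior hypothesis weights; mu_flat: (p, c_M) means
    # indexed through the allocation matrix A (K x M, 1-based labels);
    # var: (p, M) pooled variances (multiLDA); log_pi: log class priors.
    p, M = gamma.shape
    K = A.shape[0]
    eta = np.array([
        log_pi[k] + sum(
            gamma[j, m] * log_phi(x_new[j], mu_flat[j, A[k, m] - 1], var[j, m])
            for j in range(p) for m in range(M))
        for k in range(K)])
    eta -= eta.max()                         # stable softmax over classes
    w = np.exp(eta)
    return w / w.sum()
```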

## 5 Theoretical considerations

Now, recall from Section 4 that $\rho$ is fixed, with no restrictions on the $\rho_m$ besides $\sum_{m=1}^M\rho_m=1$ and $\rho_m>0$, $m=1,\ldots,M$. Choosing these parameters via, say, some cross-validation procedure would be computationally expensive. Instead we choose their values using heuristic arguments based on asymptotic theory.

Suppose that we define the hypothesis testing error as

$$E=\sum_{j=1}^{p}\sum_{m=1}^{M}\left|\hat{\gamma}_{jm}(x_j,Y)-\gamma^0_{jm}\right|$$

where $\gamma^0_{jm}$ denotes the true value of $\gamma_{jm}$, $1\le j\le p$, $1\le m\le M$. We will assume that $\gamma^0_{jm}=1$ for exactly one $m$ for each $j$. Note that it can be shown that

$$E=2\sum_{m=2}^{M}\sum_{j\in O_m}\hat{\gamma}_{jm}(x_j,Y)+2\sum_{m=1}^{M}\sum_{j\in U_m}\hat{\gamma}_{jm}(x_j,Y)\equiv E_O+E_U,$$

where $O_m$ is the set of overfitting models for the $m$th hypothesis, $U_m$ is the set of underfitting models for the $m$th hypothesis, and the error terms $E_O$ and $E_U$ correspond to the overfitting and underfitting models respectively. We can rewrite $E_O$ as

$$E_O=2\sum_{m=2}^{M}\sum_{j\in O_m}\frac{\exp\left[\tfrac{1}{2}\lambda_{jm}(x_j,Y)+\log(\rho_m/\rho_1)\right]}{\displaystyle\sum_{\ell=1}^{M}\exp\left[\tfrac{1}{2}\lambda_{j\ell}(x_j,Y)+\log(\rho_\ell/\rho_1)\right]}.$$

In the Supplementary material we show that $\rho$ can be chosen to achieve any desired penalty. Suppose that $\log(\rho_m/\rho_1)=-\tfrac{1}{2}Cd_m$, where $C$ is a tuning parameter. Then

$$\tfrac{1}{2}\lambda_{jm}(x_j,Y)+\log(\rho_m/\rho_1)=\tfrac{1}{2}\left(\mathrm{IC}_{j1}-\mathrm{IC}_{jm}\right)$$

where

$$\mathrm{IC}_{jm}=-2\log p_{jm}(x_j|Y;\hat{\theta}_{jm})+Cd_m,$$

which can be thought of as a flexible information criterion. Note that

1. $C=\log(n)$ corresponds to the Bayesian Information Criterion (BIC) (Schwarz, 1978);

2. $C=2$ corresponds to the Akaike Information Criterion (AIC) (Akaike, 1974); and

3. $C=\log(n)+2\log(p)$ corresponds to the Extended Bayesian Information Criterion (EBIC) (Chen and Chen, 2008).

Let $\tilde{E}$ be an approximation of $E$ where the $x_j$ are replaced with random draws under the data generating distribution, so that the $\lambda_{j\ell}$'s appearing in the above expression for $E_O$ are asymptotically distributed as chi-squared random variables with degrees of freedom $d_\ell$. Using this chi-squared approximation of the LRT with the EBIC penalty above, we show that $\tilde{E}$ converges to zero asymptotically. Additional details and the proof of this claim can be found in Appendix A.

The EBIC, unlike the BIC (Schwarz, 1978), rejects the assumption that the prior probabilities are uniform over all possible models considered. Rather, the prior probability under the EBIC is inversely proportional to the size of the model class, reducing the number of features selected. The EBIC has been shown by Chen and Chen (2008) to work well in many contexts, such as high dimensional generalised linear models. However, we note that if the true model space is not sparse, the EBIC may no longer be consistent. Further flexibility can be provided by trading off FDR for positive selection rate (PSR) in order to yield improved predictive results.

As such, we propose two penalties available to be used in this algorithm:

1. A weaker penalty, in the form of the BIC (Schwarz, 1978), especially useful if improved PSR is preferred to low FDR (Chen and Chen, 2008). In this case $C=\log(n)$; and,

2. A stronger (default) penalty, in the form of the EBIC, with $C=\log(n)+2\log(p)$.
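The effect of the two penalties can be sketched directly. The forms $C=\log n$ (BIC) and $C=\log n + 2\log p$ (EBIC) follow the criteria cited above; a non-null hypothesis is favoured over the null exactly when the penalised LRT statistic is positive (illustrative Python, hypothetical names):

```python
import numpy as np

def penalty_constant(n, p, kind="EBIC"):
    # Penalty constant C in IC_jm = -2 log p_jm + C * d_m.
    if kind == "BIC":
        return np.log(n)
    if kind == "EBIC":
        return np.log(n) + 2 * np.log(p)
    raise ValueError(kind)

def selected(lam, d, C):
    # A hypothesis with d extra parameters beats the null when the
    # penalised LRT statistic lambda - C*d is positive.
    return lam - C * d > 0
```

The same LRT statistic can therefore pass the BIC threshold yet fail the stricter EBIC threshold, which is the FDR/PSR trade-off discussed above.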

## 6 Numerical Results

We will now assess the performance of our multiDA method, using both simulated and publicly available data to demonstrate ability in feature selection and in prediction under many different assumptions.

In Section 6.1, we describe the syntax for our R package multiDA. In Section 6.2 we discuss methods competing with our own, and in Sections 6.3-6.4 we compare our multiDA algorithm with these methods on simulated data. Lastly, Section 6.5 compares each method on publicly available data.

All of the following results were obtained in R version 3.4.3 (R Core Team, 2014), and all figures throughout this paper were produced using the R package ggplot2 (Wickham, 2009). Multicore comparisons were run on a 64 bit Windows 10 Intel i7-7700HQ quad core CPU at up to 3.8GHz with 16GB of RAM.

### 6.1 Using multiDA via the R package multiDA

The syntax for multiDA is relatively straightforward:

multiDA(mX,vy,penalty, equal.var,set.options)

where the arguments of the multiDA function are explained below.

• mX - data matrix with $n$ rows and $p$ columns.

• vy - a vector of class labels of length $n$, which can be a numerical or factor input.

• penalty - options used to define the penalty which will be used in the multiDA algorithm, as discussed in Section 5, consisting of the options EBIC (default) and BIC.

• equal.var - the choice to run a multiLDA (equal.var=TRUE, default) or multiQDA (equal.var=FALSE) algorithm.

• set.options - the matrix partition to be used as described in Section 2. Options include exhaustive (default), onevsrest, onevsall, ordinal, and sUser, where sUser is a partition matrix provided by the user.

To predict new class labels, a generic S3 predict(object, newdata) command can be called, with $vy.pred returning predicted class labels and $probabilities returning a matrix of class probabilities for each sample. An example computing a re-substitution error rate is provided below, using the SRBCT dataset as described in Section 6.5.

vy = SRBCT$vy
mX = SRBCT$mX
res = multiDA(mX, vy, penalty="EBIC", equal.var=TRUE, set.options="exhaustive")
vals = predict(res, newdata=mX)$vy.pred
rser = sum(vals != vy) / length(vy)

### 6.2 Competing Methods

We note that there is a long list of machine learning (ML) algorithms in the literature and as such we focus on comparison with representative algorithms from different broad classes of ML approaches. These are listed in Table 1 below.

Supplementary material contains the R code used for complete transparency.

### 6.3 A simulation study - feature selection

In this section we assess the ability of multiDA to select correct informative features. In particular we will empirically verify the theory described in Section 5.

We consider a range of sample sizes $n$ and numbers of features $p$, with the samples being equally distributed among the classes in each simulation. Note, we only run simulations in settings where $p$ exceeds $n$, as we are purely interested in results in high dimensional space. Further, we consider a sparse feature space, such that only a small proportion of the features are discriminative.

In each simulation setting, data is generated as follows: For , sample from the space of non-null partitions of , e.g., from a total of non-null hypotheses for . Simulate where represents the group index of the normal distribution for the partition for feature , and such that the mean shift for each differing normal distribution is . If , .

We simulate data as above 20 times, and the average proportion of incorrectly selected features over these 20 replications is displayed in Figure 3. It is clear that as $n$ grows, the error rates converge to 0, regardless of the other simulation settings. However, when $n$ is small the error increases, as is to be expected.

### 6.4 A simulation study - prediction

In this study we assess the predictive performance of the multiDA algorithm with simulated data. Four simulations were considered, with two simulating under the assumption of independent features, and the other two simulating more realistic data with multivariate normal data generated using sparse covariance matrices. In all simulations, we consider predicting the four classes, with our data matrices of dimension . As in Section 6.3, we consider a highly sparse set of discriminative features, utilising the sets as defined previously. Finally, as before, samples are spread equally among the four classes. The specifics of simulation settings are detailed below.

Simulations with Independent Features

1. Mean shift with independent features, equal group variances
For , sample from the space of non-null partitions of (total of 15 non null for ). Simulate where represents the group index of the normal distribution for the partition for feature , such that the mean shift for each differing normal distribution is . If , .

2. Mean shift with independent features, unequal group variances
Same as 1), however, the group variances are allowed to change within each partition, such that for , simulate , where is defined as in 1), and such that the scale in variance for each normal distribution is .

Simulations with Dependent Features

For the simulations below, define the set such that , and .

3. Dependent features, equal group covariances
For , and for . is generated as a sparse covariance matrix, such that blocks of size have high correlation.

4. Dependent features, unequal group covariances
Identical to (3), except that for , , as in this case we consider the scenario in which group covariances differ.

The covariance matrices are constructed as follows. Let $B$ be a matrix where the diagonal entries and a percentage of the off diagonal entries are generated using independent normally distributed random variables. The covariance matrices are then constructed by taking 10 such matrices, forming the block diagonal matrix consisting of these blocks, and then permuting the rows and columns of the resulting matrix.
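An illustrative sketch of this construction in Python follows. The density, block size, and the diagonal shift used to guarantee positive definiteness are assumptions not fixed by the text; the paper's actual simulation code is in R:

```python
import numpy as np

def sparse_block_cov(n_blocks=10, block_size=100, density=0.1, seed=0):
    # Sparse symmetric blocks assembled into a block diagonal matrix,
    # then symmetrically permuted so the block structure is hidden.
    rng = np.random.default_rng(seed)
    blocks = []
    for _ in range(n_blocks):
        B = np.zeros((block_size, block_size))
        mask = rng.random((block_size, block_size)) < density
        B[mask] = rng.normal(size=mask.sum())
        B = (B + B.T) / 2                    # symmetrise off-diagonals
        # diagonal shift -> strict diagonal dominance -> positive definite
        B += np.eye(block_size) * (np.abs(B).sum(axis=1).max() + 1)
        blocks.append(B)
    p = n_blocks * block_size
    Sigma = np.zeros((p, p))
    for b, B in enumerate(blocks):
        i = b * block_size
        Sigma[i:i + block_size, i:i + block_size] = B
    perm = rng.permutation(p)
    return Sigma[np.ix_(perm, perm)]         # permute rows and columns
```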

50 x 5 fold cross validation results are shown in Figure 4.

### 6.5 Performance on publicly available datasets for benchmark data comparison

We run our multiDA algorithm, among others, on three separate publicly available datasets for benchmark comparison. For all three datasets, we run 50 × 5-fold cross-validation, and report the run time of each algorithm when run once on the full dataset.
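The repeated cross-validation protocol can be sketched as a generic 50 × 5-fold splitter; this is an illustration assuming simple random folds (the authors may have used stratified folds):

```python
import numpy as np

def repeated_cv_indices(y, n_splits=5, n_repeats=50, seed=0):
    """Yield (train, test) index pairs for the 50 x 5-fold cross-validation
    protocol described above: 50 independent shuffles, each split into 5
    folds, with every fold used once as the test set."""
    rng = np.random.default_rng(seed)
    n = len(y)
    for _ in range(n_repeats):
        order = rng.permutation(n)
        folds = np.array_split(order, n_splits)
        for k in range(n_splits):
            test = folds[k]
            train = np.concatenate([folds[i] for i in range(n_splits) if i != k])
            yield train, test
```

Each classifier's accuracy would then be averaged over the resulting 250 train/test splits.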

TCGA Breast Cancer data

Microarray data from The Cancer Genome Atlas (TCGA) was used to classify five classes of breast tissue: four breast cancer subtypes, namely “Basal”, “HER2”, “Luminal A”, and “Luminal B”, as well as healthy tissue. The data consist of 266 samples and a feature set of size 15803. As the breast cancer subtypes were defined intrinsically based on PAM50 gene expression levels, these genes were removed first in order to see whether other, complementary genes could be used to predict cancer subtype. Cross-validation results are shown in Figure 5. Results for the multiDA algorithms were obtained using default settings.

Small Round Blue Cell Tumors (SRBCT) data

The SRBCT dataset (Khan et al., 2001) concerns classifying four classes of childhood tumours that share similar visual features during routine histology: Ewing’s family of tumours (EWS), neuroblastoma (NB), Burkitt’s lymphoma (BL), and rhabdomyosarcoma (RMS). Data were collected from 83 cDNA microarrays, with 1586 features remaining after filtering out genes with zero median absolute deviation. Cross-validation results are shown in Figure 6. Results for the multiDA algorithms were obtained using default settings.

Melanoma data

The melanoma dataset was analysed by Mann et al. (2013) as a binary classification problem. The dataset consists of samples of Affymetrix arrays, with genes measured on each array. The response variable is the patient’s prognosis, which is comprised of three classes: good, middle, and poor prognosis groups. Class labels have been defined as follows, using the status of the patient and their survival time :

$$y_i=\begin{cases}1 & \text{if } t_i<2 \text{ and ``Dead -- Melanoma''} & (\text{``Poor'' prognosis})\\ 2 & \text{if } t_i>6 \text{ and ``Alive -- No Sign of Relapse''} & (\text{``Good'' prognosis})\end{cases}$$

Samples that do not meet the criteria above are excluded, leaving samples for analysis. Further, we implement a filtering step that excludes under-expressed genes (all three class medians below 7), resulting in genes to be used for analysis. Cross-validation results are shown in Figure 7.
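The gene-filtering step above (dropping genes whose expression medians are below 7 in every class) can be sketched as follows; the function name and array layout (rows = samples, columns = genes) are illustrative:

```python
import numpy as np

def filter_underexpressed(X, y, threshold=7.0):
    """Drop genes whose median expression is below `threshold` in ALL classes,
    per the filtering rule described above (threshold 7 is taken from the text)."""
    classes = np.unique(y)
    medians = np.vstack([np.median(X[y == c], axis=0) for c in classes])
    keep = (medians >= threshold).any(axis=0)   # keep a gene if any class median passes
    return X[:, keep], keep
```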

Results for the multiDA algorithms were obtained using default settings, except for the penalty, for which the BIC setting was used. The use of this weaker penalty is justified by an extreme trade-off between PSR and FDR: investigation of the features selected under the EBIC penalty revealed that those deemed significant by multiQDA were driven by extreme outliers, the strong penalty ensuring that only features with a maximal LRT are deemed discriminative and thereby greatly reducing the PSR. As such, we selected the BIC penalty, acknowledging the likely increase in FDR but improving predictive performance by increasing the PSR (the strong predictive performance of DQDA indicates the importance of the PSR).
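The penalty choice discussed above can be illustrated with a small helper. The functional forms here (BIC charging $\nu \log n$ per extra parameter, and an EBIC-style variant adding a $2\log p$ term, as in the appendix expression $\tilde{\nu}\{\log(n)+2\log(p)\}$) are our reading of the text, not code from the authors' package:

```python
import math

def penalty(nu, n, p, kind="EBIC"):
    """Penalty applied to the likelihood ratio statistic for a hypothesis with
    nu extra parameters. BIC charges nu*log(n); the EBIC-style variant adds a
    2*log(p) term per parameter, making it much harsher when p is large."""
    if kind == "BIC":
        return nu * math.log(n)
    return nu * (math.log(n) + 2 * math.log(p))
```

For high dimensional data (large p) the EBIC penalty dwarfs the BIC penalty, which is why switching to BIC admits more features (higher PSR) at the cost of a higher FDR.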

### 6.6 Timings

While we recognise that run time of each algorithm depends on various factors involving hardware and implementation chosen, we provide the timings in Table 2 below as indicative in the context of the details outlined in Section 6. These times suggest that multiLDA and multiQDA both run in a reasonable amount of time in comparison to the competing methods considered in this paper.

### 6.7 Discussion of results

Through simulation studies and applications to publicly available data, we are able to analyse the strengths of our multiDA classifier. Our simulation study examining the feature selection component of multiDA demonstrates its ability to accurately select the correct feature set as the sample size increases, confirming our theoretical considerations. As the number of classes increases, the sample size needed to select the correct features also increases; however, this is to be expected.

Examining predictive strength through both simulations and benchmark data analysis, it is clear that the multiLDA algorithm excels when group variances are assumed to be equal (Simulations 1 and 3), and performs poorly when this assumption is not met (Simulations 2 and 4, Melanoma microarray analysis). The converse is true for multiQDA, which improves on all LDA based methods in the unequal-variance cases, outperforms plain DQDA in Simulation 2, and also shows strengths in the Melanoma example. In the independent-features case of our simulation studies, the other DA methods (penLDA, NSC, DLDA, DQDA) perform well as expected, although their performance is much worse in Simulation 2 (also as expected). SVM performs poorly overall in both the simulated and benchmark data examples, with KNN, LASSO, and Random Forest (RF) performing better in some simulations than in others.

It is also clear that the multiDA classifier can perform well when the assumption of independent features is not met. This is demonstrated in Simulations 3 and 4, where we consider data with a non-diagonal covariance structure, and also in the benchmark data analyses, where for gene expression data we expect some degree of correlation between features.

In all examples shown, with the exception of Simulation 1, the multiDA classifiers performed as well as or better than all DA-like methods, consistently outperformed methods such as KNN and SVM, and produced results competitive with Random Forest and the LASSO on multiple datasets.

## 7 Conclusion and future work

We have introduced the multiDA classifier in order to provide an effective alternative to discriminant analysis for high dimensional data. We utilise a multiple hypothesis testing paradigm to select relevant features for our algorithm, using latent variables and penalised likelihood ratio tests to do so. Not only is the procedure intuitive and fast, but we have also shown that the feature selection process is consistent given appropriate penalties. Further, as shown in our simulation and benchmark data analysis studies, our classifier yields prediction results that are competitive not only with other discriminant analysis methods, but also with non-linear machine learning methods. We believe that the multiDA classifier is a useful tool for any analyst wanting fast and accurate classification results for high dimensional Gaussian data.

Future work can be done to extend the scope of the distributions the multiDA classifier can effectively model. For example, the datasets provided in Section 6.5 were microarray data. By extending the multiDA classifier to handle negative binomial or Poisson data, datasets such as RNA-Seq counts could potentially be well predicted using the core ideas of multiDA.

## Acknowledgements

We are grateful for the data provided to us, processed as described in (The Cancer Genome Atlas Network, 2012) by Kim-Anh Le Cao (University of Melbourne). This project benefited from conversations with Samuel Müller. This research was partially supported by an Australian Postgraduate Award (Romanes), an Australian Research Council Early Career Award DE130101670 (Ormerod) and Australian Research Council Discovery Project grant DP170100654 (Yang, Ormerod); Australia NHMRC Career Developmental Fellowship APP1111338 (Yang).

## References

• Akaike (1974) Akaike, H., 1974. A New Look at the Statistical Model Identification. IEEE Transactions on Automatic Control 19 (6), 716–723.
• Benjamini and Hochberg (1995) Benjamini, Y., Hochberg, Y., 1995. Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society. Series B (Methodological) 57 (1), 289–300.
• Bickel and Levina (2004) Bickel, P. J., Levina, E., 2004. Some theory for Fisher’s linear discriminant function, ‘näive Bayes’, and some alternatives when there are many more variables than observations. Bernoulli 10 (6), 989–1010.
• Bonferroni (1936) Bonferroni, C. E., 1936. Teoria statistica delle classi e calcolo delle probabilità. Pubblicazioni del R Istituto Superiore di Scienze Economiche e Commerciali di Firenze 8, 3–62.
• Breiman (2001) Breiman, L., Oct 2001. Random forests. Machine Learning 45 (1), 5–32.
• Chen and Chen (2008) Chen, J., Chen, Z., 2008. Extended Bayesian information criteria for model selection with large model spaces. Biometrika 95 (3), 759–771.
• Clemmensen et al. (2011) Clemmensen, L., Hastie, T., Witten, D., Ersbøll, B., 2011. Sparse discriminant analysis. Technometrics 53 (4), 406–413.
• Cortes and Vapnik (1995) Cortes, C., Vapnik, V., Sep 1995. Support-vector networks. Machine Learning 20 (3), 273–297.
• Cover and Hart (1967) Cover, T., Hart, P., January 1967. Nearest neighbor pattern classification. IEEE Transactions on Information Theory 13 (1), 21–27.
• Dudoit et al. (2002) Dudoit, S., Fridlyand, J., Speed, T. P., 2002. Comparison of discrimination methods for the classification of tumors using gene expression data. Journal of the American Statistical Association 97 (457), 77–87.
• Fisher (1936) Fisher, R. A., 1936. The use of multiple measurements in taxonomic problems. Annals of Eugenics 7 (2), 179–188.
• Friedman et al. (2010) Friedman, J., Hastie, T., Tibshirani, R., 2010. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software 33 (1), 1–22.
• Friedman (1989) Friedman, J. H., 1989. Regularized discriminant analysis. Journal of the American Statistical Association 84 (405), 165–175.
• Gasull et al. (2015) Gasull, A., López-Salcedo, J. A., Utzet, F., 2015. Maxima of Gamma random variables and other Weibull-like distributions and the Lambert W function. TEST 24 (4), 714–733.
• Hankin (2006) Hankin, R. K. S., May 2006. Additive integer partitions in R. Journal of Statistical Software, Code Snippets 16.
• Hastie et al. (2014) Hastie, T., Tibshirani, R., Narasimhan, B., Chu, G., 2014. pamr: PAM: prediction analysis for microarrays. R package version 1.55.
• Holm (1979) Holm, S., 1979. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6 (2), 65–70.
• Khan et al. (2001) Khan, J., Wei, J. S., Ringnér, M., Saal, L. H., Ladanyi, M., Westermann, F., Berthold, F., Schwab, M., Antonescu, C. R., Peterson, C., Meltzer, P. S., Jun 2001. Classification and diagnostic prediction of cancers using gene expression profiling and artificial neural networks. Nature Medicine 7, 673 –679.
• Leng (2008) Leng, C., 2008. Sparse optimal scoring for multiclass cancer diagnosis and biomarker detection using microarray data. Computational Biology and Chemistry 32 (6), 417 –425.
• Liaw and Wiener (2002) Liaw, A., Wiener, M., 2002. Classification and Regression by randomForest. R News 2 (3), 18–22.
• Lundberg et al. (2017) Lundberg, S. M., Nair, B., Vavilala, M. S., Horibe, M., Eisses, M. J., Adams, T., Liston, D. E., Low, D. K.-W., Newman, S.-F., Kim, J., Lee, S.-I., 2017. Explainable machine learning predictions to help anesthesiologists prevent hypoxemia during surgery. bioRxiv.
URL https://www.biorxiv.org/content/early/2017/10/21/206540
• Mann et al. (2013) Mann, G., Pupo, G., Campain, A., Carter, C., Schramm, S., Pianova, S., Gerega, S., De Silva, C., Lai, K., Wilmott, J., Synnott, M., Hersey, P., Kefford, R., Thompson, J., Yang, J., Scolyer, R., Feb 2013. BRAF mutation, NRAS mutation, and the absence of an immune-related expressed gene profile predict poor outcome in patients with stage III melanoma. Journal of Investigative Dermatology 133 (2), 509–517.
• Meyer et al. (2017) Meyer, D., Dimitriadou, E., Hornik, K., Weingessel, A., Leisch, F., 2017. e1071: Misc Functions of the Department of Statistics, Probability Theory Group (Formerly: E1071), TU Wien. R package version 1.6-8.
• Ormerod et al. (2017) Ormerod, J. T., Stewart, M., Yu, W., Romanes, S. E., 2017. Bayesian hypothesis tests with diffuse priors: Can we have our cake and eat it too?
URL arxiv.org/pdf/1710.09146.pdf
• Pang et al. (2009) Pang, H., Tong, T., Zhao, H., 2009. Shrinkage-based diagonal discriminant analysis and its applications in high-dimensional data. Biometrics 65 (4), 1021–1029.
• R Core Team (2014) R Core Team, 2014. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.
• Ramey (2017) Ramey, J. A., 2017. sparsediscrim: Sparse and Regularized Discriminant Analysis. R package version 0.2.4.
• Ročková and George (2014) Ročková, V., George, E. I., 2014. EMVS: The EM approach to Bayesian variable selection. Journal of the American Statistical Association 109 (506), 828–846.
• Schwarz (1978) Schwarz, G., 1978. Estimating the dimension of a model. The Annals of Statistics 6 (2), 461–464.
• Shao (2003) Shao, J., 2003. Mathematical Statistics. Springer Texts in Statistics. Springer.
• The Cancer Genome Atlas Network (2012) The Cancer Genome Atlas Network, Sep 2012. Comprehensive molecular portraits of human breast tumours. Nature 490, 61–70.
• Tibshirani (1996) Tibshirani, R., 1996. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological) 58 (1), 267–288.
• Tibshirani et al. (2003) Tibshirani, R., Hastie, T., Narasimhan, B., Chu, G., 2003. Class prediction by nearest shrunken centroids, with applications to DNA microarrays. Statistical Science 18 (1), 104–117.
• van der Vaart (1998) van der Vaart, A. W., 1998. Asymptotic statistics, volume 3 of Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge.
• Venables and Ripley (2002) Venables, W. N., Ripley, B. D., 2002. Modern Applied Statistics with S, 4th Edition. Springer, New York.
• Vuong (1989) Vuong, Q. H., 1989. Likelihood ratio tests for model selection and nonnested hypotheses. Econometrica 57 (2), 307–333.
• Wickham (2009) Wickham, H., 2009. ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York.
• Witten (2015) Witten, D., 2015. penalizedLDA: Penalized classification using Fisher’s Linear Discriminant. R package version 1.1.
• Witten and Tibshirani (2011) Witten, D. M., Tibshirani, R., 2011. Penalized classification using Fisher’s linear discriminant. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 73 (5), 753–772.

## Appendix A - Theory

Let the data for variable $j$ be generated for some true parameter vector. Define the log-likelihood for variable $j$ and hypothesis $m$ as $\ell_{jm}(\theta_{jm})$, with corresponding MLE and “pseudo-true” value of $\theta_{jm}$ given by

$$\widehat{\theta}_{jm}=\operatorname*{arg\,max}_{\theta_{jm}}\left\{\ell_{jm}(\theta_{jm})\right\}\quad\text{and}\quad\theta^{*}_{jm}=\operatorname*{arg\,max}_{\theta_{jm}}\left\{\mathbb{E}\left(n^{-1}\ell_{jm}(\theta_{jm})\right)\right\},$$

respectively. We will assume conditions on the likelihood and parameter space such that for , . Using the theory summarised in Ormerod et al. (2017) based on Vuong (1989) and van der Vaart (1998) we have two main cases to consider.

• [Underfitting case] – Suppose for some . Then

$$\frac{1}{2n}\left[\lambda_{jm_j}(X_j)-\lambda_{jm}(X_j)\right]\stackrel{P}{\to}\ell^{*}_{jm_j}-\ell^{*}_{jm}=\Delta_{jm}>0$$

and so .

• [Overfitting case] – Suppose for some and let . Then

$$\lambda_{jm}(X_j)-\lambda_{jm_j}(X_j)\stackrel{D}{\to}\chi^{2}_{\tilde{\nu}_{jm}}.$$

The following lemma will be useful later.

Lemma 1 (Gasull et al., 2015): If $X_1,\ldots,X_p$ are independent $\chi^{2}_{\nu}$ random variables and $M_p=\max_{1\leq i\leq p}X_i$, then

$$\frac{1}{2}M_p-\left[\log(p)+\left(\nu/2-1\right)\log\log(p)-\log\Gamma(\nu/2)\right]\stackrel{D}{\to}G,$$

as $p\to\infty$, where $G$ is a Gumbel distributed random variable.
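The centering sequence in Lemma 1 is a simple deterministic function of $p$ and $\nu$; a sketch (the function name is ours):

```python
import math

def gumbel_centering(p, nu):
    """Centering sequence from Lemma 1: for the maximum M_p of p independent
    chi-squared(nu) variables, (1/2) M_p - gumbel_centering(p, nu) converges
    in distribution to a Gumbel random variable as p grows."""
    return math.log(p) + (nu / 2 - 1) * math.log(math.log(p)) - math.lgamma(nu / 2)
```

For $\nu=2$ the log-log and log-gamma terms vanish, so the centering reduces to $\log p$.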

Define and . Here $J_m$ is the set of true variables over the $m$th set of hypotheses, and is the union of over-fitting and true models over the $m$th set of hypotheses. We define and decompose the error $E$ as

$$\begin{aligned}
E &= \sum_{j=1}^{p}\left(1-\widehat{\gamma}_{jm_j}(X_j)\right)+\sum_{j=1}^{p}\sum_{m\neq m_j}\widehat{\gamma}_{jm}(X_j)
= 2\sum_{j=1}^{p}\sum_{m\neq m_j}\widehat{\gamma}_{jm}(X_j)
= 2\sum_{m=1}^{M}\sum_{j\notin J_m}\widehat{\gamma}_{jm}(X_j)\\
&= \underbrace{2\sum_{m=2}^{M}\sum_{j\in O_m}\widehat{\gamma}_{jm}(X_j)}_{\text{Overfitting models}}
\;+\;\underbrace{2\sum_{m=1}^{M}\sum_{j\in U_m}\widehat{\gamma}_{jm}(X_j)}_{\text{Underfitting models}}
\;\stackrel{\triangle}{=}\;E_O+E_U,
\end{aligned}$$

where , and .

Note that for $E_O$ the summation over the index $m$ does not include $m=1$, since the null model cannot be an overfitting model. Next, we consider $E_O$, where the true model is used as the null hypothesis, and rewrite $E_O$ as

$$E_O = 2\sum_{m=2}^{M}\sum_{j\in O_m}\frac{\exp\left[\tfrac{1}{2}\tilde{\lambda}_{jm}(X_j)-\tilde{\nu}_{jm}\left\{\log(n)+2\log(p)\right\}\right]}{\sum_{\ell=1}^{M}\exp\left[\cdots\right]}$$