Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data


Mohak Shah, Mario Marchand, and Jacques Corbeil

M. Shah is with the Centre for Intelligent Machines, McGill University, Montreal, Canada, H3A 2A7. E-mail: mohak@cim.mcgill.ca
M. Marchand is with the Department of Computer Science and Software Engineering, Pav. Adrien Pouliot, Laval University, Quebec, Canada, G1V-0A6. E-mail: Mario.Marchand@ift.ulaval.ca
J. Corbeil is with the CHUL Research Center, Laval University, Quebec (QC), Canada, G1V-4G2. E-mail: Jacques.Corbeil@crchul.ulaval.ca
Abstract

One of the objectives of designing feature selection learning algorithms is to obtain classifiers that depend on a small number of attributes and have verifiable guarantees on future performance. Few, if any, approaches successfully address the two goals simultaneously. Performance guarantees become crucial for tasks such as microarray data analysis, where very small sample sizes limit empirical evaluation. To the best of our knowledge, no algorithm giving theoretical bounds on future performance has so far been proposed in the context of the classification of gene expression data. In this work, we investigate the premise of learning a conjunction (or disjunction) of decision stumps in the Occam's Razor, Sample Compression, and PAC-Bayes learning settings for identifying a small subset of attributes that can be used to perform reliable classification tasks. We apply the proposed approaches to gene identification from DNA microarray data and compare our results to those of well known, successful approaches proposed for the task. We show that our algorithm not only finds hypotheses with a much smaller number of genes while giving competitive classification accuracy, but also, unlike other approaches, comes with tight risk guarantees on future performance. The proposed approaches are general and extensible, both in terms of designing novel algorithms and of application to other domains.

Keywords

Microarray data classification, Risk bounds, Feature selection, Gene identification.

I Introduction

An important challenge in the problem of classification of high-dimensional data is to design a learning algorithm that can construct an accurate classifier that depends on the smallest possible number of attributes. Further, it is often desired that there be realizable guarantees associated with the future performance of such feature selection approaches. With the recent explosion in various technologies generating huge amounts of measurements, the problem of obtaining learning algorithms with performance guarantees has acquired a renewed interest.

Consider the biological domain, where the advent of microarray technologies (Eisen and Brown, 1999; Lipshutz et al., 1999) has revolutionized the outlook on the investigation and analysis of genetic diseases. In parallel, on the classification front, many interesting results have appeared aiming to distinguish between two or more types of cells (e.g., diseased vs. normal, or cells with different types of cancers) based on gene expression data from DNA microarrays (see, for instance, (Alon et al., 1999) for results on colon cancer and (Golub et al., 1999) for leukaemia). Focusing on very few genes to give insight into the class association of a microarray sample is quite important for a variety of reasons. For instance, a small subset of genes is easier to analyze than the full set of genes output by the DNA microarray chips. It also makes it relatively easier to deduce biological relationships among the genes and to study their interactions. An approach able to identify a very small number of genes can facilitate the customization of chips and validation experiments, making the utilization of microarray technology cheaper, more affordable, and more effective.

In a comparison of diseased versus normal samples, these genes can be considered indicators of the disease's cause. A subsequent validation study focused on these genes, their behavior, and their interactions can lead to a better understanding of the disease. Some attempts in this direction have yielded interesting results. See, for instance, the recent algorithm proposed by Wang et al. (2007), which identifies a gene subset based on importance ranking and subsequently uses combinations of genes for classification. Another example is the approach of Tibshirani et al. (2003) based on nearest shrunken centroids. Some kernel-based approaches, such as the BAHSIC algorithm (Song et al., 2007) and its extensions (e.g., (Shah and Corbeil, 2010) for short time-series domains), have also appeared.

The traditional methods used for classifying high-dimensional data are often characterized as either “filters” (e.g., (Furey et al., 2000; Wang et al., 2007)) or “wrappers” (e.g., (Guyon et al., 2002)), depending on whether the attribute selection is performed independently of, or in conjunction with, the base learning algorithm.

Despite the acceptable empirical results achieved by such approaches, there is no theoretical justification of their performance, nor do they come with a guarantee on how well they will perform in the future. What is really needed is a learning algorithm that has provably good performance guarantees in the presence of many irrelevant attributes. This is the focus of the work presented here.

I-A Contributions

The main contributions of this work come in the form of the formulation of feature selection strategies within well established learning settings, resulting in learning algorithms that combine the tasks of feature selection and discriminative learning. Consequently, we obtain feature selection algorithms for classification with tight realizable guarantees on their generalization error. The proposed approaches are a step towards more general learning strategies that combine feature selection with the classification algorithm while retaining such guarantees. We apply the approaches to the task of classifying microarray data, where the attributes of a data sample correspond to the expression level measurements of various genes. In fact, the choice of decision stumps as a learning bias has in part been motivated by this application. The framework is general and extensible in a variety of ways. For instance, the learning strategies proposed in this work can readily be extended to other similar tasks that can benefit from this learning bias; an immediate example would be classifying data from other microarray technologies, such as Chromatin Immunoprecipitation experiments. Similarly, learning biases other than conjunctions of decision stumps can also be explored in the same frameworks, leading to novel learning algorithms.

I-B Motivation

For learning the class of conjunctions of features, we draw motivation from the guarantee that exists for this class in the following form: if there exists a conjunction that depends on $r$ of the $n$ input attributes and that correctly classifies a training set of $m$ examples, then the greedy covering algorithm of Haussler (1988) will find a conjunction of at most $r \ln m$ attributes that makes no training errors. Note the absence of any dependence on the total number $n$ of input attributes: the method is guaranteed to find at most $r \ln m$ attributes and, hence, depends on the number of available samples but not on the number of attributes to be analyzed.

We propose learning algorithms for building small conjunctions of decision stumps. We examine three approaches to obtaining an optimal classifier based on this premise, which mainly vary in the coding strategies for the threshold of each decision stump. The first two approaches encode the threshold either with message strings (Occam's Razor) or by using training examples (Sample Compression). The third strategy (PAC-Bayes) examines whether an optimal classifier can be obtained by trading off the sparsity of the classifier (that is, the number of decision stumps used) against the magnitude of the separating margin of each decision stump. In each case, we derive an upper bound on the generalization error of the classifier and subsequently use it to guide the respective algorithm. Finally, we present empirical results on microarray data classification tasks and compare our results to the state-of-the-art approaches proposed for the task, including the Support Vector Machine (SVM) coupled with feature selectors, and AdaBoost. Preliminary results of this work appeared in (Marchand and Shah, 2005).

I-C Organization

Section II gives the basic definitions and notions of the learning setting that we utilize, and characterizes the hypothesis class of conjunctions of decision stumps; all subsequent learning algorithms are proposed to learn this hypothesis class. Section III proposes an Occam's Razor approach to learning conjunctions of decision stumps, leading to an upper bound on the generalization error in this framework. Section IV then proposes an alternate encoding strategy for the message strings using the Sample Compression framework and gives a corresponding risk bound. In Section V, we propose a PAC-Bayes approach to learning conjunctions of decision stumps that enables the learning algorithm to perform an explicit, non-trivial margin-sparsity trade-off to obtain more general classifiers. Section VI then proposes algorithms to learn in the three settings of Sections III, IV, and V, along with a time complexity analysis. Note that the learning (optimization) strategies proposed in Section VI do not affect the respective theoretical guarantees of the learning settings. The algorithms are evaluated empirically on real-world microarray datasets in Section VII. Section VIII presents a discussion of the results and also provides an analysis of the biological relevance of the selected genes in the case of each dataset, and of their agreement with published findings. Finally, we conclude in Section IX.

II Definitions

The input space $\mathcal{X}$ consists of all $n$-dimensional vectors $\mathbf{x} = (x_1, \ldots, x_n)$ where each real-valued component $x_i \in [A_i, B_i]$ for $i \in \{1, \ldots, n\}$. Each attribute $x_i$ can, for instance, refer to the expression level of gene $i$. Hence, $A_i$ and $B_i$ are, respectively, the a priori lower and upper bounds on the values of $x_i$. The output space $\mathcal{Y}$ is the set of classification labels that can be assigned to any input vector $\mathbf{x} \in \mathcal{X}$. We focus here on binary classification problems; thus $\mathcal{Y} = \{0, 1\}$. Each example $(\mathbf{x}, y)$ is an input vector $\mathbf{x}$ with its classification label $y$, drawn i.i.d. from an unknown distribution $D$ on $\mathcal{X} \times \mathcal{Y}$. The true risk $R(f)$ of any classifier $f$ is defined as the probability that it misclassifies an example drawn according to $D$:

$$R(f) \;\stackrel{\mathrm{def}}{=}\; \Pr_{(\mathbf{x},y)\sim D}\bigl(f(\mathbf{x}) \neq y\bigr) \;=\; \mathop{\mathbf{E}}_{(\mathbf{x},y)\sim D} I\bigl(f(\mathbf{x}) \neq y\bigr),$$

where $I(a) = 1$ if predicate $a$ is true and $I(a) = 0$ otherwise. Given a training set $S = \{(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_m, y_m)\}$ of $m$ examples, the empirical risk $R_S(f)$ on $S$ of any classifier $f$ is defined according to:

$$R_S(f) \;\stackrel{\mathrm{def}}{=}\; \frac{1}{m}\sum_{j=1}^{m} I\bigl(f(\mathbf{x}_j) \neq y_j\bigr).$$

The goal of any learning algorithm is to find the classifier $f$ with minimal true risk $R(f)$ based on measuring the empirical risk $R_S(f)$ (and other properties) on the training sample $S$.

We focus on learning algorithms that construct a conjunction of decision stumps from a training set. Each decision stump is just a threshold classifier defined on a single attribute (component) $x_i$ of the input vector. More formally, a decision stump is identified by an attribute index $i$, a threshold value $t$, and a direction $d \in \{-1, +1\}$ (that specifies whether class 1 is on the largest or the smallest values of $x_i$). Given any input example $\mathbf{x}$, the output $r_{i,t,d}(\mathbf{x})$ of a decision stump is defined as:

$$r_{i,t,d}(\mathbf{x}) \;\stackrel{\mathrm{def}}{=}\; \begin{cases} 1 & \text{if } d\,(x_i - t) > 0\\ 0 & \text{otherwise.}\end{cases}$$

We use a vector $\mathbf{i} = (i_1, \ldots, i_{|\mathbf{i}|})$ of attribute indices such that $i_1 < i_2 < \cdots < i_{|\mathbf{i}|}$, where $|\mathbf{i}|$ is the number of indices present in $\mathbf{i}$ (and thus the number of decision stumps in the conjunction). Although it is possible to use up to two decision stumps on any attribute, we limit ourselves here to the case where each attribute can be used for only one decision stump. Furthermore, we use a vector $\mathbf{t} = (t_1, \ldots, t_{|\mathbf{i}|})$ of threshold values and a vector $\mathbf{d} = (d_1, \ldots, d_{|\mathbf{i}|})$ of directions, where $d_j \in \{-1, +1\}$ for $j \in \{1, \ldots, |\mathbf{i}|\}$. On any input example $\mathbf{x}$, the output of the conjunction of decision stumps is given by:

$$f_{\mathbf{i},\mathbf{t},\mathbf{d}}(\mathbf{x}) \;\stackrel{\mathrm{def}}{=}\; \prod_{j=1}^{|\mathbf{i}|} r_{i_j, t_j, d_j}(\mathbf{x}),$$

so that the conjunction outputs 1 if and only if every decision stump outputs 1.

Finally, any algorithm that builds a conjunction can be used to build a disjunction simply by exchanging the roles of the positive and the negative labeled examples. To keep our description simple, we describe only the case of a conjunction; the case of a disjunction follows symmetrically.
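To make the hypothesis class concrete, here is a minimal Python sketch of a decision stump and of a conjunction of stumps. The function names and the example stumps are ours, chosen for illustration only.

```python
def stump(x, i, t, d):
    """Decision stump output: 1 iff x lies on the class-1 side of
    threshold t on attribute i (d = +1 puts class 1 on the larger
    values of x[i], d = -1 on the smaller values)."""
    return 1 if d * (x[i] - t) > 0 else 0

def conjunction(x, stumps):
    """A conjunction outputs 1 iff every decision stump outputs 1."""
    return int(all(stump(x, i, t, d) for (i, t, d) in stumps))

# Two hypothetical stumps: attribute 0 must exceed 0.5
# and attribute 3 must stay below 2.0.
h = [(0, 0.5, +1), (3, 2.0, -1)]
```

As noted above, exchanging the roles of positive and negative examples turns the same construction into a learner of disjunctions.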

III An Occam’s Razor Approach

Our first approach towards learning the conjunction (or disjunction) of decision stumps is the Occam’s Razor approach. Basically, we wish to obtain a hypothesis that can be coded using the least number of bits. We first propose an Occam’s Razor risk bound which will ultimately guide the learning algorithm.

In the case of the zero-one loss, we can model the number of errors of a classifier as a binomial random variable. Let $\mathrm{Bin}(m, k, r)$ be the binomial tail associated with a classifier of (true) risk $r$. Then $\mathrm{Bin}(m, k, r)$ is the probability that this classifier makes at most $k$ errors on a set of $m$ examples:

$$\mathrm{Bin}(m, k, r) \;\stackrel{\mathrm{def}}{=}\; \sum_{j=0}^{k} \binom{m}{j} r^{j} (1 - r)^{m - j}.$$

The binomial tail inversion $\overline{\mathrm{Bin}}(m, k, \delta)$ then gives the largest risk value that a classifier can have while still having a probability of at least $\delta$ of observing at most $k$ errors out of $m$ examples (Langford, 2005; Blum and Langford, 2003):

$$\overline{\mathrm{Bin}}(m, k, \delta) \;\stackrel{\mathrm{def}}{=}\; \max\bigl\{r : \mathrm{Bin}(m, k, r) \geq \delta\bigr\}.$$

From this definition, it follows that $\overline{\mathrm{Bin}}(m, k, \delta)$ is the smallest upper bound, holding with probability at least $1 - \delta$, on the true risk of any classifier with an observed empirical risk of $k/m$ on a test set of $m$ examples.
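The binomial tail and its inversion are easy to compute numerically. Below is a sketch, assuming the standard definitions (the tail is a sum of binomial probabilities, and, for fixed $m$ and $k$, it is decreasing in $r$, so the inversion can be done by bisection); function names are ours.

```python
from math import comb

def bin_tail(m, k, r):
    """Bin(m, k, r): probability that a classifier of true risk r
    makes at most k errors on m i.i.d. examples."""
    return sum(comb(m, j) * r ** j * (1 - r) ** (m - j) for j in range(k + 1))

def bin_tail_inv(m, k, delta, tol=1e-9):
    """Largest risk r with Bin(m, k, r) >= delta, found by bisection
    (for fixed m and k, Bin is decreasing in r)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if bin_tail(m, k, mid) >= delta:
            lo = mid
        else:
            hi = mid
    return lo
```

For instance, `bin_tail_inv(m, 0, delta)` recovers the familiar zero-training-error bound $1 - \delta^{1/m}$.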

Our starting point is the Occam’s razor bound of Langford (2005) which is a tighter version of the bound proposed by Blumer et al. (1987). It is also more general in the sense that it applies to any prior distribution over any countable class of classifiers.

Theorem 1 (Langford (2005)).

For any prior distribution $P$ over any countable class $\mathcal{F}$ of classifiers, for any data-generating distribution $D$, and for any $\delta \in (0, 1]$, we have:

$$\Pr_{S \sim D^m}\Bigl(\forall f \in \mathcal{F}: \; R(f) \leq \overline{\mathrm{Bin}}\bigl(m, m R_S(f), \delta P(f)\bigr)\Bigr) \;\geq\; 1 - \delta.$$

The proof (available in (Langford, 2005)) directly follows from a straightforward union bound argument and from the fact that $\sum_{f \in \mathcal{F}} P(f) \leq 1$. To apply this bound to conjunctions of decision stumps, we thus need to choose a suitable prior for this class. Moreover, Theorem 1 remains valid whenever the prior sums to at most one; consequently, we will use a subprior whose sum is at most 1.

In our case, conjunctions of decision stumps are specified in terms of the discrete-valued vectors $\mathbf{i}$ and $\mathbf{d}$ and the continuous-valued vector $\mathbf{t}$. We will see below that we use a finite-precision bit string $\sigma$ to specify the set of threshold values $\mathbf{t}$. Let $P(\mathbf{i}, \mathbf{d}, \sigma)$ denote the prior probability assigned to the conjunction described by $(\mathbf{i}, \mathbf{d}, \sigma)$. We choose a prior of the following form:

$$P(\mathbf{i}, \mathbf{d}, \sigma) \;=\; p\bigl(|\mathbf{i}|\bigr) \binom{n}{|\mathbf{i}|}^{-1} \left(\frac{1}{2}\right)^{|\mathbf{i}|} P_{\mathcal{M}}\bigl(\sigma \mid \mathbf{i}, \mathbf{d}\bigr),$$

where $P_{\mathcal{M}}(\sigma \mid \mathbf{i}, \mathbf{d})$ is the prior probability assigned to string $\sigma$ given that we have chosen $\mathbf{i}$ and $\mathbf{d}$. Let $\mathcal{M}(\mathbf{i}, \mathbf{d})$ be the set of all message strings that we can use given that we have chosen $\mathbf{i}$ and $\mathbf{d}$. If $\mathcal{I}$ denotes the set of all possible attribute index vectors and $\mathcal{D}_{|\mathbf{i}|}$ denotes the set of all binary direction vectors of dimension $|\mathbf{i}|$, we have that $\sum_{\sigma \in \mathcal{M}(\mathbf{i},\mathbf{d})} P_{\mathcal{M}}(\sigma \mid \mathbf{i}, \mathbf{d}) \leq 1$ whenever $\mathbf{i} \in \mathcal{I}$ and $\mathbf{d} \in \mathcal{D}_{|\mathbf{i}|}$.

The reasons motivating this choice of prior are the following. The first two factors come from the belief that the final classifier, constructed from the group of attributes specified by $\mathbf{i}$, should depend only on the number $|\mathbf{i}|$ of attributes in this group. If we have complete ignorance about the number of decision stumps the final classifier is likely to have, we should choose a uniform $p(k)$ for $k \in \{1, \ldots, n\}$. However, we should choose a $p(k)$ that decreases as we increase $k$ if we have reasons to believe that the number of decision stumps of the final classifier will be much smaller than $n$. Since this is usually our case, we propose to use:

The third factor of $(1/2)^{|\mathbf{i}|}$ gives equal prior probability to each of the two possible values of each direction $d_j$.

To specify the distribution of strings $\sigma$, consider the problem of coding a threshold value $t \in [a, b] \subseteq [A_i, B_i]$, where $[A_i, B_i]$ is some predefined interval in which we are permitted to choose $t$ and where $[a, b]$ is an interval of “equally good” threshold values. (By a “good” threshold value, we mean a threshold value for a decision stump that would cover many negative examples and very few positive examples; see the learning algorithm.) We propose the following dyadic coding scheme for the identification of a threshold value that belongs to that interval. Let $k$ be the number of bits that we use for the code. Then, a code of $k$ bits specifies one value among a set $T_k$ of $2^k$ threshold values:

$$T_k \;\stackrel{\mathrm{def}}{=}\; \left\{ A_i + (B_i - A_i)\,\frac{2j - 1}{2^{k+1}} \;:\; j \in \{1, \ldots, 2^k\} \right\}.$$

We denote by $A_i$ and $B_i$ the respective a priori minimum and maximum values that attribute $x_i$ can take; these values are obtained from the definition of the data. Hence, for an attribute $i$, given an interval $[a, b]$ of good threshold values, we take the smallest number $k$ of bits such that there exists a threshold value in $T_k$ that falls in the interval $[a, b]$. In that way, we will need at most $\lceil \log_2 \bigl( (B_i - A_i)/(b - a) \bigr) \rceil$ bits to obtain a threshold value that falls in $[a, b]$.

Hence, to specify the threshold $t_j$ of each decision stump $j$, we need to specify the number of bits $k_j$ and a $k_j$-bit string that identifies one of the $2^{k_j}$ threshold values. The risk bound does not depend on how we actually code $\sigma$ (for some receiver); it depends only on the a priori probabilities we assign to each possible realization of $\sigma$. We choose the following distribution:

(1)
(2)

where:

(3)

The sum over all the possible realizations of $\sigma$ gives 1, since the distribution over code lengths itself sums to 1. Note that by giving equal a priori probability to each of the $2^k$ strings of length $k$, we give no preference to any of the threshold values specified by $k$-bit codes.

The distribution that we have chosen for each string length $k$ has the advantage of decreasing slowly, so that the risk bound does not deteriorate too rapidly as $k$ increases. Other choices are clearly possible. However, note that the dominant contribution comes from the factor assigning equal probability to each of the $2^k$ strings of length $k$, yielding a risk bound that depends linearly on $k$.
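The coding scheme above can be sketched as follows. The midpoint grid used here is one plausible realization of a dyadic grid (an assumption on our part), but it has the property claimed in the text: a threshold inside $[a, b]$ is found with at most $\lceil \log_2((B - A)/(b - a)) \rceil$ bits, since any interval of length at least the grid spacing contains a grid point.

```python
def dyadic_bits(A, B, a, b, k_max=32):
    """Smallest number of bits k such that a 2**k-value dyadic grid on
    [A, B] contains a threshold inside the good interval [a, b].
    Returns (k, t) for the first grid value t found."""
    for k in range(k_max + 1):
        step = (B - A) / 2 ** k
        for j in range(2 ** k):
            t = A + step * (j + 0.5)   # midpoints of 2**k equal cells
            if a <= t <= b:
                return k, t
    raise ValueError("interval too narrow for k_max bits")
```

Wider “good” intervals thus need fewer bits, which is exactly how the bound rewards stumps with large margins of equally good thresholds.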

With this choice of prior, we have the following theorem:

Theorem 2.

Given all our previous definitions and for any $\delta \in (0, 1]$, we have:

Finally, we emphasize that the risk bound of Theorem 2, used in conjunction with the distribution of messages given above, provides a guide for choosing the optimal classifier. Note that this risk bound suggests a non-trivial trade-off between the number of attributes and the lengths of the message strings used to encode the classifier. Indeed, the risk bound may be smaller for a conjunction having a large number of attributes with small message strings (i.e., few bits per threshold) than for a conjunction having a small number of attributes but with large message strings.

IV A Sample Compression Approach

The basic idea of the sample compression framework (Kuzmin and Warmuth, 2007) is to obtain learning algorithms with the property that the generated classifier (with respect to some training data) can often be reconstructed from a very small subset of the training examples. More formally, a learning algorithm $A$ is said to be a sample-compression algorithm iff there exists a compression function $\mathcal{C}$ and a reconstruction function $\mathcal{R}$ such that, for any training sample $S = \{(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_m, y_m)\}$, the classifier $A(S)$ returned by $A$ is given by:

$$A(S) = \mathcal{R}\bigl(\mathcal{C}(S)\bigr).$$

For a training set $S$, the compression function $\mathcal{C}$ of learning algorithm $A$ outputs a subset $S_{\mathbf{z}}$ of $S$, called the compression set, and an information message $\sigma$, i.e., $\mathcal{C}(S) = (S_{\mathbf{z}}, \sigma)$. The information message $\sigma$ contains the additional information needed to reconstruct the classifier from the compression set $S_{\mathbf{z}}$. Given a training sample $S$, we define the compression set $S_{\mathbf{z}}$ by a vector $\mathbf{z} = (z_1, \ldots, z_{|\mathbf{z}|})$ of indices such that $z_j \in \{1, \ldots, m\}$, with $z_1 < z_2 < \cdots < z_{|\mathbf{z}|}$, and where $|\mathbf{z}|$ denotes the number of indices present in $\mathbf{z}$.

When given an arbitrary compression set $S_{\mathbf{z}}$ and an arbitrary information message $\sigma$, the reconstruction function $\mathcal{R}$ of a learning algorithm must output a classifier. The information message $\sigma$ is chosen from the set of all the distinct messages that can be attached to the compression set $S_{\mathbf{z}}$. The existence of this reconstruction function assures that the classifier returned by $A$ is always identified by a compression set $S_{\mathbf{z}}$ and an information message $\sigma$.

In the sample compression setting for learning conjunctions of decision stumps, the message string consists of the attribute and direction vectors $\mathbf{i}$ and $\mathbf{d}$ defined above. However, the thresholds $\mathbf{t}$ are now specified by training examples. Hence, if we have $|\mathbf{i}|$ attributes, the compression set consists of $|\mathbf{i}|$ training examples (one per threshold).

Our starting point is the following generic sample compression bound (Marchand and Sokolova, 2005):

Theorem 3.

For any sample compression learning algorithm with a reconstruction function $\mathcal{R}$ that maps arbitrary subsets of a training set, together with information messages, to classifiers, we have:

where

and is defined by Equation 3.

Now, we need to specify the distribution of messages for the conjunction of decision stumps. Note that, in order to specify a conjunction of decision stumps, the compression set consists of one example per decision stump: for each decision stump we have one attribute and a corresponding threshold value determined by the numerical value that this attribute takes on the associated training example.

The learner chooses an attribute whose threshold is identified by the associated training example. The set of these training examples forms the compression set. Finally, the learner chooses a direction for each attribute.

The subset of attributes that specifies the decision stumps in our compression set is given by the vector $\mathbf{i}$ defined in the previous section. Moreover, since there is one decision stump corresponding to each example in the compression set, we have $|\mathbf{i}| = |\mathbf{z}|$. Now, we assign equal probability to each possible set of $|\mathbf{i}|$ attributes (and hence thresholds) that can be selected from the $n$ attributes. Moreover, we assign equal probability over the two directions that each decision stump can have. Hence, we get the following distribution of messages:

$$P_{\mathcal{M}}\bigl(\sigma \mid S_{\mathbf{z}}\bigr) \;=\; \binom{n}{|\mathbf{i}|}^{-1} \left(\frac{1}{2}\right)^{|\mathbf{i}|}. \qquad (5)$$

Equation 5, along with the sample compression bound of Theorem 3, completes the bound for the conjunction of decision stumps.
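With the uniform choices just described, the negative log-probability of a message, i.e., the complexity term that enters the risk bound, is $\ln \binom{n}{|\mathbf{i}|} + |\mathbf{i}| \ln 2$. A minimal sketch (function name is ours):

```python
from math import comb, log

def message_complexity(n, k):
    """-ln of the message probability when the message picks, uniformly,
    one of C(n, k) attribute subsets and one of 2**k direction vectors."""
    return log(comb(n, k)) + k * log(2)
```

For microarray-scale data ($n$ in the tens of thousands, $k$ a handful of stumps), this term grows only logarithmically in $n$, which is what keeps the guarantee tight for sparse conjunctions.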

V A PAC-Bayes Approach

The Occam's Razor and Sample Compression approaches, in a sense, aim at obtaining sparse classifiers with a minimum number of stumps. This sparsity is enforced by selecting the classifiers with a minimal encoding of the message strings and of the compression set, respectively.

We now examine whether sacrificing some of this sparsity in exchange for a larger separating margin around the decision boundary (yielding more confidence) can lead us to classifiers with smaller generalization error. The learning algorithm is based on the PAC-Bayes approach (McAllester, 1999), which aims at providing Probably Approximately Correct (PAC) guarantees to “Bayesian” learning algorithms specified in terms of a prior distribution $P$ (before the observation of the data) and a data-dependent posterior distribution $Q$ over a space of classifiers.

We formulate a learning algorithm that outputs a stochastic classifier, called the Gibbs classifier $G_Q$, defined by a data-dependent posterior $Q$. Our classifier will be partly stochastic in the sense that we formulate a posterior over the threshold values utilized by the decision stumps, while retaining the deterministic nature of the selected attributes and directions of the decision stumps.

Given an input example $\mathbf{x}$, the Gibbs classifier $G_Q$ first selects a classifier $f$ according to the posterior distribution $Q$ and then uses $f$ to assign the label $f(\mathbf{x})$ to $\mathbf{x}$. The risk of $G_Q$ is defined as the expected risk of classifiers drawn according to $Q$:

$$R(G_Q) \;\stackrel{\mathrm{def}}{=}\; \mathop{\mathbf{E}}_{f \sim Q} R(f).$$

Our starting point is the PAC-Bayes theorem (McAllester, 2003; Langford, 2005; Seeger, 2002) that provides a bound on the risk of the Gibbs classifier:

Theorem 4.

Given any space $\mathcal{F}$ of classifiers, for any data-independent prior distribution $P$ over $\mathcal{F}$, and for any $\delta \in (0, 1]$, we have:

$$\Pr_{S \sim D^m}\left(\forall Q: \; \mathrm{kl}\bigl(R_S(G_Q) \,\|\, R(G_Q)\bigr) \;\leq\; \frac{\mathrm{KL}(Q \| P) + \ln\frac{m+1}{\delta}}{m}\right) \;\geq\; 1 - \delta,$$

where $\mathrm{KL}(Q \| P)$ is the Kullback-Leibler divergence between distributions $Q$ and $P$ (here $Q(f)$ denotes the probability density function associated with $Q$, evaluated at $f$):

$$\mathrm{KL}(Q \| P) \;\stackrel{\mathrm{def}}{=}\; \mathop{\mathbf{E}}_{f \sim Q} \ln \frac{Q(f)}{P(f)},$$

and where $\mathrm{kl}(q \| p)$ is the Kullback-Leibler divergence between the Bernoulli distributions with probabilities of success $q$ and $p$:

$$\mathrm{kl}(q \| p) \;\stackrel{\mathrm{def}}{=}\; q \ln \frac{q}{p} + (1 - q) \ln \frac{1 - q}{1 - p}.$$

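Numerically, a PAC-Bayes risk bound of this form is obtained by inverting the Bernoulli divergence $\mathrm{kl}(q \| p) = q \ln(q/p) + (1-q)\ln((1-q)/(1-p))$ in its second argument. A sketch (function names ours; bisection is valid since $\mathrm{kl}(q \| p)$ is increasing in $p$ for $p \geq q$):

```python
from math import log

def kl_bernoulli(q, p):
    """kl(q || p) between Bernoulli(q) and Bernoulli(p)."""
    eps = 1e-12                       # clamp away from 0 and 1
    q = min(max(q, eps), 1 - eps)
    p = min(max(p, eps), 1 - eps)
    return q * log(q / p) + (1 - q) * log((1 - q) / (1 - p))

def kl_inverse(q, c, tol=1e-9):
    """Largest p >= q with kl(q || p) <= c: the risk bound implied by an
    empirical Gibbs risk q and a complexity term c."""
    lo, hi = q, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if kl_bernoulli(q, mid) <= c:
            lo = mid
        else:
            hi = mid
    return lo
```

A larger complexity term $c$ (a larger KL divergence between posterior and prior, or fewer examples) loosens the resulting bound, which is the trade-off the learner must manage.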
This bound on the risk of the Gibbs classifier can easily be turned into a bound on the risk of the Bayes classifier $B_Q$ associated with the posterior $Q$: $B_Q$ basically performs a majority vote (under measure $Q$) of the binary classifiers in $\mathcal{F}$. When $B_Q$ misclassifies an example $\mathbf{x}$, at least half of the binary classifiers (under measure $Q$) misclassify $\mathbf{x}$. It follows that the error rate of $G_Q$ is at least half of the error rate of $B_Q$. Hence $R(B_Q) \leq 2 R(G_Q)$.

In our case, we have seen that decision stump conjunctions are specified in terms of a mixture of discrete parameters $\mathbf{i}$ and $\mathbf{d}$ and continuous parameters $\mathbf{t}$. If we denote by $P(\mathbf{i}, \mathbf{d}, \mathbf{t})$ the probability density function associated with a prior over the class of decision stump conjunctions, we consider here priors of the form:

As before, we have that the prior sums (and integrates) to at most 1 whenever $\mathbf{i} \in \mathcal{I}$ and $\mathbf{d} \in \mathcal{D}_{|\mathbf{i}|}$.

The factors relating to the discrete components $\mathbf{i}$ and $\mathbf{d}$ have the same rationale as in the case of the Occam's Razor approach. However, for the threshold of each decision stump, we now consider an explicitly continuous uniform prior. As in the Occam's Razor case, we assume each attribute value $x_i$ to be constrained, a priori, to $[A_i, B_i]$, where $A_i$ and $B_i$ are obtained from the definition of the data. Hence, we have chosen a uniform prior probability density on $[A_i, B_i]$ for each threshold; this explains the last factors of the prior.

Given a training set $S$, the learner will choose an attribute group $\mathbf{i}$ and a direction vector $\mathbf{d}$ deterministically. We pose the problem of choosing the thresholds in a manner similar to the Occam's Razor approach of Section III, with the only difference that the learner identifies an interval and selects a threshold stochastically. For each attribute $i_j$, a margin interval $[a_j, b_j]$ is chosen by the learner. A deterministic decision stump conjunction classifier could then be specified by choosing the threshold values within these intervals. It is tempting at this point to choose $t_j = (a_j + b_j)/2$ (i.e., the middle of each interval). However, the PAC-Bayes theorem offers a better guarantee for another type of deterministic classifier, as we see below.

Hence, the Gibbs classifier is defined by a posterior distribution $Q$ having all its weight on the same $\mathbf{i}$ and $\mathbf{d}$ as chosen by the learner, but where each threshold $t_j$ is chosen uniformly in $[a_j, b_j]$. The KL divergence between this posterior and the prior is then given by:

In the limit when each margin interval $[a_j, b_j]$ grows to the full range $[A_{i_j}, B_{i_j}]$, it can be seen that the KL divergence between the “continuous components” of $Q$ and $P$ vanishes. Furthermore, the KL divergence between the “discrete components” of $Q$ and $P$ is small for small values of $|\mathbf{i}|$ (whenever $p(|\mathbf{i}|)$ is not too small). Hence, the KL divergence between our choices for $Q$ and $P$ exhibits a tradeoff between margins (large intervals $[a_j, b_j]$) and sparsity (a small value of $|\mathbf{i}|$) for Gibbs classifiers. Theorem 4 suggests that the $G_Q$ with the smallest guarantee of risk should minimize a non-trivial combination of $\mathrm{KL}(Q \| P)$ and the empirical Gibbs risk.

The posterior $Q$ is identified by an attribute group vector $\mathbf{i}$, a direction vector $\mathbf{d}$, and the margin intervals $[a_j, b_j]$. We refine the notation for our Gibbs classifier to reflect this: we use $G_{\mathbf{i}, \mathbf{d}, \mathbf{a}, \mathbf{b}}$, where $\mathbf{a}$ and $\mathbf{b}$ are the vectors formed by the $a_j$'s and $b_j$'s respectively. We can obtain a closed-form expression for the empirical Gibbs risk by first considering the risk on a single example. From our definitions for $Q$, we find that:

(6)

where:

Note that the expression for the risk of the Gibbs classifier is identical to the expression for the risk of the deterministic conjunction, except that the piece-wise linear functions are used in place of the indicator functions.

The PAC-Bayes theorem provides a risk bound for the Gibbs classifier $G_Q$. Since the Bayes classifier $B_Q$ just performs a majority vote under the same posterior distribution as the one used by $G_Q$, it follows that:

(7)

Note that $B_Q$ has a hyperbolic decision surface. Consequently, $B_Q$ is not representable as a conjunction of decision stumps. There is, however, no computational difficulty in obtaining the output of $B_Q$ for any input $\mathbf{x}$. We now state our main theorem:

Theorem 5.

Given all our previous definitions, for any $\delta \in (0, 1]$, and for any $(\mathbf{i}, \mathbf{d}, \mathbf{a}, \mathbf{b})$ satisfying the above constraints, we have, with probability at least $1 - \delta$ over the random draws of $S \sim D^m$:

where

Furthermore, $R(B_Q) \leq 2\,R(G_Q)$.

VI The Learning Algorithms

Having proposed theoretical frameworks for obtaining optimal classifiers under various optimization criteria, we now detail the learning algorithms for these approaches. Ideally, we would like to find the conjunction of decision stumps that minimizes the respective risk bound of each approach. Unfortunately, this cannot be done efficiently in all cases, since the problem is at least as hard as the (NP-hard) minimum set cover problem, as mentioned by Marchand and Shawe-Taylor (2002). Hence, we use the greedy set covering heuristic. It consists of choosing the decision stump $i$ with the largest utility $U_i$, where:

$$U_i \;=\; |Q_i| \;-\; p\,|R_i|, \qquad (8)$$

where $Q_i$ is the set of negative examples covered (classified as 0) by feature $i$, $R_i$ is the set of positive examples misclassified by this feature, and $p$ is a learning parameter that gives a penalty for each misclassified positive example. Once the feature with the largest $U_i$ is found, we remove $Q_i$ and $R_i$ from the training set and then repeat (on the remaining examples) until either no more negative examples are present or a maximum number of features has been reached. This heuristic was also used by Marchand and Shawe-Taylor (2002) in the context of a sample compression classifier called the set covering machine. For our sample compression approach (SC), we use the above utility function $U_i$.
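The greedy covering heuristic can be sketched as follows, using the utility $U = |Q| - p\,|R|$ described above. The stump representation (a dict of 0/1 predicates over examples) is our own simplification, not the authors' implementation.

```python
def greedy_cover(stumps, negatives, positives, p, max_stumps):
    """Repeatedly pick the stump maximizing U = |Q| - p*|R|, where Q is the
    set of remaining negatives the stump covers (classifies as 0) and R the
    set of remaining positives it misclassifies; covered/erred examples are
    removed after each pick."""
    neg, pos, chosen = set(negatives), set(positives), []
    while neg and len(chosen) < max_stumps:
        def score(s):
            Q = {x for x in neg if stumps[s](x) == 0}
            R = {x for x in pos if stumps[s](x) == 0}
            return len(Q) - p * len(R), Q, R
        best = max(stumps, key=lambda s: score(s)[0])
        _, Q, R = score(best)
        if not Q:
            break          # the best stump covers no remaining negatives
        chosen.append(best)
        neg -= Q
        pos -= R
    return chosen
```

The loop mirrors the stopping conditions in the text: it terminates when all negatives are covered, when the stump budget is spent, or when no further progress is possible.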

However, for the Occam’s Razor and the PAC-Bayes approaches, we need utility functions that can incorporate the optimization aspects suggested by these approaches.

VI-A The Occam’s Razor learning algorithm

We propose the following learning strategy for Occam's Razor learning of conjunctions of decision stumps. Let $\mathcal{N}$ be the current set of negative examples and $\mathcal{P}$ be the current set of positive examples; we start with the full sets of negative and positive training examples. Let $Q_i$ be the subset of $\mathcal{N}$ covered by decision stump $i$, let $R_i$ be the subset of $\mathcal{P}$ covered by decision stump $i$, and let $k_i$ be the number of bits used to code the threshold of decision stump $i$. We choose the decision stump that maximizes the utility $U_i$ defined as:

$$U_i \;=\; |Q_i| \;-\; p\,|R_i| \;-\; \mu\, k_i,$$

where $p$ is the penalty suffered by covering (and hence, misclassifying) a positive example and $\mu$ is the cost of each of the $k_i$ bits used for decision stump $i$. Once we have found a decision stump maximizing $U_i$, we update $\mathcal{N}$ and $\mathcal{P}$ and repeat to find the next decision stump until either $\mathcal{N}$ is empty or the maximum number of decision stumps has been reached (early stopping of the greedy procedure). The best values of the learning parameters $p$, $\mu$, and the maximum number of decision stumps are determined by cross-validation.

VI-B The PAC-Bayes Learning Algorithm

Theorem 5 suggests that the learner should try to find the Bayes classifier that uses a small number of attributes (i.e., a small $|\mathbf{i}|$), each with a large separating margin $b_j - a_j$, while keeping the empirical Gibbs risk at a low value. As discussed earlier, we utilize the greedy set covering heuristic for learning.

In our case, however, we need to keep the Gibbs risk on $S$ low instead of the risk of a deterministic classifier. Since the Gibbs risk is a “soft measure” that uses piece-wise linear functions instead of the “hard” indicator functions, we cannot make use of the hard utility function of Equation 8. Instead, we need a “softer” version of this utility function that takes into account covering (and erring on) an example partly: a negative example that falls in the linear region of a stump is in fact partly covered, and similarly for a positive example.

Following this observation, let $\mathbf{i}$ be the vector of indices of the attributes that we have used so far in the construction of the classifier. Let us first define the covering value of $\mathbf{i}$ as the “amount” of negative examples assigned to class 0 by the associated Gibbs classifier:


We also define the positive-side error of $\mathbf{i}$ as the “amount” of positive examples assigned to class 0:

We now want to add another decision stump on another attribute, call it $j$, to obtain a new vector containing this new attribute in addition to those present in $\mathbf{i}$. Hence, we introduce the covering contribution of decision stump $j$ as:

and the positive-side error contribution of decision stump $j$ as:

Typically, the covering contribution of a decision stump should increase its “utility”, and its positive-side error should decrease it. Moreover, we want to decrease the “utility” of decision stump $j$ by an amount that becomes large whenever it has a small separating margin. Our expression for the KL divergence suggests that this amount should be proportional to $\ln\bigl((B_j - A_j)/(b_j - a_j)\bigr)$. Furthermore, we should compare this margin term with the fraction of the remaining negative examples that decision stump $j$ has covered (instead of the absolute amount of negative examples covered). Hence, the covering contribution of decision stump $j$ should be divided by the amount of negative examples that remain to be covered before considering decision stump $j$:

which is simply the amount of negative examples that have so far been assigned to class 1. In terms of the set of positive examples, we define the utility of adding the decision stump as:

where one parameter represents the penalty of misclassifying a positive example and another controls the importance of having a large margin. These learning parameters can be chosen by cross-validation. For fixed values of these parameters, the “soft greedy” algorithm simply consists of adding, to the current Gibbs classifier, the decision stump of maximum added utility until either the maximum number of decision stumps has been reached or all the negative examples have been (totally) covered. It is understood that, during this soft greedy algorithm, we can remove an example from the training set as soon as it is totally covered.

Hence, we use the above utility function for the PAC-Bayes learning strategy. Note that we normalize the numbers of covered and erred examples so as to increase the utility’s sensitivity to the respective terms.
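To make the procedure concrete, the following Python sketch renders the soft-greedy loop under illustrative assumptions. Since the exact symbols of the utility were lost above, `soft_utility`, the interval convention in `covers`, and the parameter names `penalty` and `margin_weight` are hypothetical stand-ins for the quantities described (covering contribution normalized by the remaining negatives, a penalty for positive-side errors, and a term that grows as the margin shrinks); for brevity, the sketch also uses hard 0/1 coverage rather than the piece-wise linear soft measure.

```python
import numpy as np

def covers(stump, x):
    # Illustrative convention: a stump on attribute j with interval [a, b]
    # assigns class 0 (the "covering" output) when x[j] falls outside [a, b].
    j, a, b = stump
    return not (a <= x[j] <= b)

def soft_utility(stump, X, y, remaining_neg, penalty=1.0, margin_weight=0.1):
    # Hypothetical utility: fraction of the *remaining* negatives covered,
    # minus a penalty for positives erred on, minus a term that grows as
    # the stump's margin (here: its interval width) shrinks.
    _, a, b = stump
    cover = sum(covers(stump, X[i]) for i in remaining_neg)
    err = sum(covers(stump, X[i]) for i in np.flatnonzero(y == 1))
    return (cover / max(len(remaining_neg), 1)
            - penalty * err
            - margin_weight / max(b - a, 1e-9))

def soft_greedy(X, y, stumps, max_stumps=5):
    # Add the stump of maximum utility until all negatives are covered
    # or the stump budget is exhausted.
    chosen, remaining = [], set(np.flatnonzero(y == 0))
    while remaining and len(chosen) < max_stumps:
        best = max(stumps, key=lambda s: soft_utility(s, X, y, remaining))
        chosen.append(best)
        remaining -= {i for i in remaining if covers(best, X[i])}
    return chosen
```

The loop mirrors the description above: examples that become totally covered are removed from the set of negatives that remain to be covered.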

Vi-B1 Time Complexity Analysis

Let us analyze the time complexity of this algorithm for fixed learning parameters. For each attribute, we first sort the examples with respect to their values for the attribute under consideration; for m examples this takes O(m log m) time. Then, we examine each potential interval endpoint a (defined by the values of that attribute on the examples) and, corresponding to each a, all the potential endpoints b greater than a, giving O(m^2) candidate intervals. Calculating the covering and error contributions and then finding the best interval takes time linear in the number of examples falling into each interval, which yields the per-attribute complexity. Finally, we do this over all n attributes. Note, however, that for microarray data we have m much smaller than n, so the per-attribute factor can be treated as a constant. Moreover, once the best stump is found, we remove the examples covered by this stump from the training set and repeat the algorithm. Greedy algorithms of this kind have the following guarantee: if there exists a small set of decision stumps that covers all the examples, the greedy algorithm will find one at most a logarithmic factor larger. Since we almost always have far fewer examples than attributes, the running time of the whole algorithm is, in practice, roughly linear in the number n of attributes.
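The per-attribute sort followed by the enumeration of all candidate intervals can be sketched as follows (an illustrative helper, not the authors’ code): the sort costs O(m log m), and the nested comprehension is the O(m^2) pass over all (a, b) pairs with a < b.

```python
def enumerate_intervals(values):
    # Sort the example values for one attribute, then enumerate every
    # candidate interval (a, b) with a < b drawn from those values.
    vs = sorted(set(values))
    return [(a, b) for i, a in enumerate(vs) for b in vs[i + 1:]]
```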

Vi-B2 Fixed-Margin Heuristic

In order to show why we prefer a uniformly distributed threshold over one fixed at the middle of the interval of each stump, we use an alternate algorithm that we call the fixed-margin heuristic. The algorithm is similar to the one described above but has an additional parameter that fixes the length of the interval around the threshold, i.e., it imposes a fixed margin on each side. The algorithm still chooses the attribute vector, the direction vector, and the vectors of interval endpoints; however, the endpoints of each stump are chosen so that the interval has the prescribed length. The threshold is then fixed in the middle of this interval. For fixed learning parameters, an analysis similar to that of the previous subsection yields a comparable time complexity for this algorithm.
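A minimal sketch of the fixed-margin construction, with hypothetical names: each candidate threshold t receives an interval of the prescribed length gamma, with t sitting exactly at its middle, t = (a + b)/2.

```python
def fixed_margin_intervals(thresholds, gamma):
    # Every interval has fixed length gamma, centered on its threshold.
    return [(t - gamma / 2.0, t + gamma / 2.0) for t in thresholds]
```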

Vii Empirical Results

Data Set             SVM         SVM+gs             SVM+rfe            Adaboost
Name      ex  Genes  Errs        Errs        S      Errs        S      Itrs   Errs
Colon     62  2000   12.8±1.4    14.4±3.5    256    15.4±4.8    128    20     15.2±2.1
B_MD      34  7129   13.2±1      7.2±2.6     32     10.4±2.4    64     20     9.8±1.1
C_MD      60  7129   28.2±2.2    23.1±2.8    1024   28.2±2.2    7129   50     21.2±2.4
Leuk      72  7129   21.3±1.4    14±2.8      64     21±3.2      256    20     17.8±1.8
Lung      52  918    8.8±1.3     6.8±1.9     64     7.2±1.8     32     1      2.4±1.4
BreastER  49  7129   15.3±2.4    10.3±2.7    256    11.2±2.8    256    50     9.8±1.7
TABLE I: Results of SVM, SVM coupled with Golub’s feature selection algorithm (filter), SVM with Recursive Feature Elimination (wrapper) and Adaboost algorithms on Gene Expression datasets.
Data Set             Occam                       SC
Name      ex  Genes  Errs        S         bits  Errs        S
Colon     62  2000   23.6±1.2    1.8±.6    6     18.2±1.8    1.2±.6
B_MD      34  7129   17.2±1.8    1.2±.8    3     17.2±1.3    1.4±.8
C_MD      60  7129   28.6±1.8    2.6±1.1   4     29.2±1.1    1.2±.6
Leuk      72  7129   27.8±1.7    2.2±.8    6     27.3±1.7    1.4±.7
Lung      52  918    21.7±1.1    1.8±1.2   5     18±1.3      1.2±.5
BreastER  49  7129   25.4±1.2    3.2±.6    2     21.2±1.5    1.4±.5
TABLE II: Results of the proposed Occam’s Razor and Sample Compression learning algorithms on Gene Expression datasets.
Data Set             PAC-Bayes
Name      ex  Genes  S           G-errs       B-errs
Colon     62  2000   1.53±.28    14.68±1.8    14.65±1.8
B_MD      34  7129   1.2±.25     8.89±1.65    8.6±1.4
C_MD      60  7129   3.4±1.8     23.8±1.7     22.9±1.65
Leuk      72  7129   3.2±1.4     24.4±1.5     23.6±1.6
Lung      52  918    1.2±.3      4.4±.6       4.2±.8
BreastER  49  7129   2.6±1.1     12.8±.8      12.4±.78
TABLE III: Results of the PAC-Bayes learning algorithm on Gene Expression datasets.

The proposed approaches for learning conjunctions of decision stumps were tested on the six real-world binary microarray datasets viz. the colon tumor (Alon et al., 1999), the Leukaemia (Golub et al., 1999), the B_MD and C_MD Medulloblastomas data (Pomeroy et al., 2002), the Lung (Garber et al., 2001), and the BreastER data (West et al., 2001).

The colon tumor data set (Alon et al., 1999) provides the expression levels of 40 tumor and 22 normal colon tissues measured for 6500 human genes. We use the set of 2000 genes identified to have the highest minimal intensity across the 62 tissues. The Leuk data set (Golub et al., 1999) provides the expression levels of 7129 human genes for 47 samples of patients with Acute Lymphoblastic Leukemia (ALL) and 25 samples of patients with Acute Myeloid Leukemia (AML). The B_MD and C_MD data sets (Pomeroy et al., 2002) are microarray samples containing the expression levels of 7129 human genes. Data set B_MD contains 25 classic and 9 desmoplastic medulloblastomas whereas data set C_MD contains 39 medulloblastoma survivors and 21 treatment failures (non-survivors). The Lung dataset consists of gene expression levels of 918 genes of 52 patients with 39 Adenocarcinoma and 13 Squamous Cell Cancer (Garber et al., 2001). This data has some missing values, which were replaced by zeros. Finally, the BreastER dataset is the Breast Tumor data of West et al. (2001) used with Estrogen Receptor status to label the various samples. The data consists of expression levels of 7129 genes of 49 patients with 25 positive Estrogen Receptor samples and 24 negative Estrogen Receptor samples.

The number of examples and the number of genes in each data set are given in the “ex” and “Genes” columns respectively under the “Data Set” tab in each table. The algorithms are referred to as “Occam” (Occam’s Razor), “SC” (Sample Compression) and “PAC-Bayes” (PAC-Bayes) in Tables II to V. They utilize the respective theoretical frameworks discussed in Sections III, IV, and V along with the respective learning strategies of Section VI.

We have compared our learning algorithms with a linear-kernel soft-margin SVM trained both on all the attributes (gene expressions) and on a subset of attributes chosen by the filter method of Golub et al. (1999). The filter method consists of ranking the attributes as a function of the difference between the positive-example mean and the negative-example mean and then using only the top-ranked attributes. The resulting learning algorithm, named SVM+gs, is the one used by Furey et al. (2000) for the same task. Guyon et al. (2002) claimed to obtain better results with the recursive feature elimination method but, as pointed out by Ambroise and McLachlan (2002), their work contained a methodological flaw. We use the SVM recursive feature elimination algorithm with this bias removed and present these results as well for comparison (referred to as “SVM+rfe” in Table I). Finally, we also compare our results with the state-of-the-art Adaboost algorithm, using the implementation in the Weka data mining software (Witten and Frank, 2005).
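As a rough illustration of the filter step, the sketch below ranks attributes by a Golub-style class-separation score. The normalization by the class standard deviations follows the signal-to-noise form commonly attributed to Golub et al. (1999); `golub_rank` is a hypothetical helper, not the authors’ implementation.

```python
import numpy as np

def golub_rank(X, y, k):
    # Score each attribute by |mu_pos - mu_neg| / (sigma_pos + sigma_neg)
    # and return the indices of the k top-ranked attributes.
    pos, neg = X[y == 1], X[y == 0]
    score = np.abs(pos.mean(0) - neg.mean(0)) / (pos.std(0) + neg.std(0) + 1e-12)
    return np.argsort(score)[::-1][:k]
```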

Each algorithm was tested over 20 random permutations of the datasets, with the 5-fold cross validation (CV) method. Each of the five training sets and testing sets was the same for all algorithms. The learning parameters of all algorithms and the gene subsets (for “SVM+gs” and “SVM+rfe”) were chosen from the training sets only. This was done by performing a second (nested) 5-fold CV on each training set.
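The nested cross-validation protocol above can be sketched as a skeleton (illustrative only; `train_eval` is an assumed callable returning the number of test errors of a classifier trained with a given parameter value):

```python
import numpy as np

def nested_cv_errors(X, y, params, train_eval, k=5, seed=0):
    # Outer k-fold CV reports the test errors; each candidate parameter is
    # scored by an inner k-fold CV run on the outer training set only.
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)
    total = 0
    for f in range(k):
        test = folds[f]
        train = np.concatenate(folds[:f] + folds[f + 1:])
        inner = np.array_split(train, k)

        def inner_err(p):
            # total error of parameter p over the inner folds
            return sum(
                train_eval(X[np.concatenate(inner[:g] + inner[g + 1:])],
                           y[np.concatenate(inner[:g] + inner[g + 1:])],
                           X[inner[g]], y[inner[g]], p)
                for g in range(k))

        best = min(params, key=inner_err)          # parameter selection
        total += train_eval(X[train], y[train], X[test], y[test], best)
    return total
```

The point of the skeleton is that the parameter choice never sees the outer test fold, which is what removes the selection bias discussed above.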

For the gene subset selection procedure of SVM+gs, we have considered the top-ranked genes (for various subset sizes) according to the criterion of Golub et al. (1999) and have chosen the subset size that gave the smallest 5-fold CV error on the training set. The “Errs” column under each algorithm in Tables I to III refers to the average (nested) 5-fold cross-validation error of the respective algorithm with a one-standard-deviation two-sided confidence interval. The “bits” column in Table II refers to the number of bits used by the Occam’s Razor approach. The “G-errs” and “B-errs” columns in Table III refer to the average nested 5-fold CV errors of the optimal Gibbs classifier and the corresponding Bayes classifier, respectively, with one-standard-deviation two-sided intervals.

For Adaboost, several numbers of boosting iterations were tried for each dataset, and the reported results correspond to the best obtained 5-fold CV error. The size values reported here (the “S” columns for “SVM+gs” and “SVM+rfe”, and the “Itrs” column for “AdaBoost” in Table I) correspond to the number of attributes (genes) selected most frequently by the respective algorithms over all the permutation runs. (There were no close ties with classifiers with fewer genes.) Choosing the number of boosting iterations by cross-validation is somewhat inconsistent with Adaboost’s goal of minimizing the empirical exponential risk: to comply with that goal, we should choose a large enough number of boosting rounds to ensure convergence of the empirical exponential risk to its minimum value. However, as shown by Zhang and Yu (2005), boosting is known to overfit when the number of attributes exceeds the number of examples. This happens frequently in microarray experiments, where the number of genes far exceeds the number of samples, and it is the case for the datasets mentioned above. Early stopping is the recommended approach in such cases, and hence we followed the method described above to obtain the best number of boosting iterations.

Further, Table IV gives the results for a single run of the deterministic algorithm using the fixed-margin heuristic described above, and Table V gives the PAC-Bayes bound values obtained for a single run of the PAC-Bayes algorithm on the respective microarray data sets. Recall that the PAC-Bayes bound provides a uniform upper bound on the risk of the Gibbs classifier. The column labels refer to the same quantities as above, although the errors reported are over a single nested 5-fold CV run. The “Ratio” column of Table V refers to the average ratio value obtained over the decision stumps used by the classifiers over the testing folds, and the “Bound” columns of Tables IV and V refer to the average risk bound of Theorem 5 multiplied by the total number of examples in the respective data sets. Note, again, that these results are on a single permutation of the datasets and are presented only to illustrate the practicality of the risk bound and the rationale for preferring the current learning strategy over the fixed-margin heuristic.

Data Set             Stumps: PAC-Bayes (fixed margin)
Name      ex  Genes  Size   Errors   Bound
Colon     62  2000   1      14       34
B_MD      34  7129   1      7        20
C_MD      60  7129   3      28       48
Leuk      72  7129   2      21       46
Lung      52  918    2      9        29
BreastER  49  7129   3      11       31
TABLE IV: Results of the PAC-Bayes Approach with Fixed-Margin Heuristic on Gene Expression Datasets.
Data Set             Stumps: PAC-Bayes
Name      ex  Genes  Ratio   Size   G-errs   B-errs   Bound
Colon     62  2000   0.42    1      12       11       33
B_MD      34  7129   0.10    1      7        7        20
C_MD      60  7129   0.08    5      21       20       45
Leuk      72  7129   0.002   3      22       21       48
Lung      52  918    0.12    1      3        3        18
BreastER  49  7129   0.09    2      11       11       29
TABLE V: An illustration of the PAC-Bayes risk bound on a sample run of the PAC-Bayes algorithm.

Vii-a A Note on the Risk Bound

Note that the risk bounds are quite effective, and their relevance should not be misconstrued by observing the results in just the current scenario. One of the most limiting factors in the current analysis is the unavailability of microarray data with larger numbers of examples. As the number of examples increases, the risk bound of Theorem 5 gives tighter guarantees. Consider, for instance, if the Lung and Colon Cancer datasets had 500 examples. A classifier with the same performance over 500 examples (i.e., with the same classification accuracy and number of features as currently) would have a bound of about 12 and 30 percent error instead of the current 34.6 and 54.6 percent, respectively. This illustrates how the bound becomes more effective as a guarantee when used on datasets with more examples. Similarly, a dataset of 1000 examples for Breast Cancer with similar performance would have a bound of about 30 percent instead of the current 63 percent. Hence, the current limitation in the practical application of the bound comes from limited data availability.

Viii Analysis

The results clearly show that even though “Occam” and “SC” are able to find sparse classifiers (with very few genes), they do not obtain acceptable classification accuracies. One possible explanation is that these two approaches focus on the most succinct classifier under their respective criteria. The sample compression approach tries to minimize the number of genes used but does not take into account the magnitude of the separating margin and hence compromises accuracy. The Occam’s Razor approach, on the other hand, depends on the margin only indirectly. Approaches based on sample compression as well as minimum description length have shown encouraging results in various domains; an alternate explanation for their suboptimal performance here is the extremely limited sample sizes, as a result of which the gain in accuracy does not offset the cost of adding additional features to the conjunction. The PAC-Bayes approach appears to alleviate these problems by performing a significant margin-sparsity tradeoff: the advantage of adding a new feature is assessed through the combined gain in margin and empirical risk. This can be compared to the strategy used by regularization approaches. The classification accuracy of the PAC-Bayes algorithm is competitive with the best-performing classifier while, quite importantly, using very few genes.

For the PAC-Bayes approach, we expect the Bayes classifier to generally perform better than the Gibbs classifier. This is reflected to some extent in the empirical results for the Colon, C_MD, and Leukaemia datasets; however, there is no guarantee that this will always be the case. It should also be noted that several different utility functions can be used with each of the proposed learning approaches. We have tried some of these and report results only for the ones that were found to be the best (as discussed in the descriptions of the corresponding learning algorithms).

A noteworthy observation with regard to Adaboost is that the gene subset identified by this algorithm almost always includes the genes found by the proposed PAC-Bayes approach for decision stumps. Most notably, Cyclin D1, a well-known cancer marker and the only gene found for the lung cancer dataset, is the most discriminating factor and is found by both approaches. In both cases, the size of the classifier is almost always restricted to very few genes. These observations not only give insights into the absolute peaks worth investigating but also experimentally validate the proposed approaches.

Finally, many of the genes identified by the final PAC-Bayes classifier (i.e., the classifier learned after choosing the best parameters using nested 5-fold CV and then trained on the full dataset) include some prominent markers for the corresponding diseases, as detailed below.

Viii-a Biological Relevance of the Selected Features

Table VI details the genes identified by the final PAC-Bayes classifier learned over each dataset after the parameter selection phase. Some prominent markers are identified by the classifier, and several of the main genes identified by the PAC-Bayes approach are the ones identified by previous studies for each disease, giving confidence in the proposed approach. The discovered genes include Human monocyte-derived neutrophil-activating protein (MONAP) mRNA in the case of the Colon Cancer dataset, the oestrogen receptor in the case of the Breast Cancer data, and D79205_at-Ribosomal protein L39, D83542_at-Cadherin-15, and U29195_at-NPTX2 Neuronal pentraxin II in the case of the Medulloblastoma datasets B_MD and C_MD. Other identified genes have biological relevance; for instance, the identification of Adipsin, LAF-4, and HOX1C with regard to ALL/AML by our algorithm is in agreement with the findings of Chow et al. (2001), Hiwatari et al. (2003), and Lawrence and Largman (1992), respectively, and the studies that followed.

Dataset Gene(s) identified by PAC-Bayes Classifier
Colon 1. Hsa.627 M26383-Human monocyte-derived neutrophil-activating protein (MONAP) mRNA
B_MD 1. D79205_at-Ribosomal protein L39
C_MD 1. S71824_at-Neural Cell Adhesion Molecule, Phosphatidylinositol-Linked Isoform Precursor
2. D83542_at-Cadherin-15
3. U29195_at-NPTX2 Neuronal pentraxin II
4. X73358_s_at-HAES-1 mRNA
5. L36069_at-High conductance inward rectifier potassium channel alpha subunit mRNA
Leuk 1. M84526_at-DF D component of complement (adipsin)
2. U34360_at-Lymphoid nuclear protein (LAF-4) mRNA
3. M16937_at-Homeo box c1 protein, mRNA
Lung 1. GENE221X-IMAGE_841641-cyclin D1 (PRAD1-parathyroid adenomatosis 1) Hs.82932 AA487486
BreastER 1. X03635_at,X03635- class C, 20 probes, 20 in all_X03635 5885 - 6402
Human mRNA for oestrogen receptor
2. L42611_f_at, L42611- class A, 20 probes, 20 in L42611 1374-1954,
Homo sapiens keratin 6 isoform K6e mRNA, complete cds
TABLE VI: Genes Identified by the Final PAC-Bayes Classifier

Further, in the case of breast cancer, estrogen receptors (ER) have been shown to interact with BRCA1 to regulate VEGF transcription and secretion in breast cancer cells (Kawai et al., 2002); these interactions are further investigated by Ma et al. (2005). Further studies of ER have also been carried out. For instance, Moggs et al. (2005) discovered 3 putative estrogen-response elements in Keratin 6 (the second gene identified by the PAC-Bayes classifier in the case of the BreastER data) in the context of E2-responsive genes identified by microarray analysis of MDA-MB-231 cells that re-express ER. The important role played by cytokeratins in cancer development is also widely known (see, for instance, Gusterson et al. (2005)).

Furthermore, the importance of MONAP in the case of the colon cancer data and of Adipsin in the case of the leukaemia data has been confirmed by various rank-based algorithms, as detailed by Su et al. (2003) in the implementation of “RankGene”, a program that analyzes and ranks genes for gene expression data using eight ranking criteria: Information Gain (IG), Gini Index (GI), Max Minority (MM), Sum Minority (SM), Twoing Rule (TR), t-statistic (TT), Sum of Variances (SV), and one-dimensional Support Vector Machine (1S). In the case of the Colon Cancer data, MONAP is identified as the top-ranked gene by four of the eight criteria (IG, SV, TR, GI), second by one (SM), eighth by one (MM), and in the top 50 by 1S. Similarly, in the case of the Leukaemia data, Adipsin is top ranked by 1S, fifth by SM, seventh by IG, SV, TR, GI, and MM, and is in the top 50 by TT. These observations provide a strong validation for our approaches.

Cyclin D1, as identified in the Lung Cancer dataset, is a well-known marker for cell division, whose perturbations are considered one of the major factors causing cancer (Driscoll et al., 1999; Masaki et al., 2003).

Finally, the discovered genes in the case of the Medulloblastomas are important with regard to neuronal functioning (esp. S71824, U29195, and L36069) and can have relevance for nervous-system-related tumors.

Ix Conclusion

Learning from high-dimensional data such as that from DNA microarrays can be quite challenging, especially when the aim is to identify only a few attributes that characterize the differences between two classes of data. We investigated the premise of learning conjunctions of decision stumps and proposed three formulations based on different learning principles. We observed that approaches aiming solely to optimize sparsity or the message code length with respect to the classifier’s empirical risk limit the algorithm’s generalization performance, at least for the present small dataset sizes. By trading off the sparsity of the classifier against the separating margin in addition to the empirical risk, the PAC-Bayes approach seems to alleviate this problem to a significant extent. This allows the PAC-Bayes algorithm to yield competitive classification performance while utilizing significantly fewer attributes.

As opposed to traditional feature selection methods, the proposed approaches are accompanied by a theoretical justification of their performance. Moreover, the proposed algorithms embed feature selection as part of the learning process itself. (Huang and Chang (2007) proposed one such approach; however, it requires multiple SVM learning runs and hence basically works as a wrapper.) Furthermore, the generalization error bounds are practical and can potentially guide model (parameter) selection. When applied to classify DNA microarray data, the genes identified by the proposed approaches are found to be biologically significant, as experimentally validated by various studies, an empirical justification that the approaches can successfully perform meaningful feature selection. Consequently, this represents a significant step toward the successful integration of machine learning approaches with high-throughput data to provide meaningful, theoretically justifiable, and reliable results. Approaches that yield a compressed view in terms of a small number of biological markers can lead to a targeted and well-focused study of the issue of interest. For instance, the approach can be utilized to identify gene subsets from microarray experiments that should be further validated using focused RT-PCR techniques, which are otherwise both costly and impractical to perform on the full set of genes.

Finally, as mentioned previously, the approaches presented in this work have a wider relevance and can have significant implications for the design of theoretically justified feature selection algorithms. They are among the few approaches that combine feature selection with the learning process while simultaneously providing generalization guarantees on the resulting classifiers. This property is all the more significant given the limited size of microarray datasets, which restricts the amount of empirical evaluation that can otherwise be reliably performed. The most natural extensions of the approaches and the learning bias proposed here would be to other similar domains, including other forms of microarray experiments such as chromatin immunoprecipitation promoter arrays (ChIP-chip) and protein arrays. Within the same learning settings, other learning biases can also be explored, such as classifiers represented by features or sets of features built on subsets of attributes.

Acknowledgment

This work was supported by the National Science and Engineering Research Council (NSERC) of Canada [Discovery Grant No. 122405 to MM], the Canadian Institutes of Health Research [operating grant to JC, training grant to MS while at CHUL] and the Canada Research Chair in Medical Genomics to JC.

References

  • Alon et al. [1999] U. Alon, N. Barkai, D.A. Notterman, K. Gish, S. Ybarra, D. Mack, and A.J. Levine. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proc. Natl. Acad. Sci. USA, 96(12):6745–6750, 1999.
  • Ambroise and McLachlan [2002] C. Ambroise and G. J. McLachlan. Selection bias in gene extraction on the basis of microarray gene-expression data. Proc. Natl. Acad. Sci. USA, 99(10):6562–6566, 2002.
  • Blum and Langford [2003] Avrim Blum and John Langford. PAC-MDL bounds. In Proceedings of 16th Annual Conference on Learning Theory, COLT 2003, Washington, DC, August 2003, volume 2777 of Lecture Notes in Artificial Intelligence, pages 344–357. Springer, Berlin, 2003.
  • Blumer et al. [1987] A. Blumer, A. Ehrenfeucht, D. Haussler, and M. Warmuth. Occam’s razor. Information Processing Letters, 24:377–380, 1987.
  • Chow et al. [2001] M. L. Chow, E. J. Moler, and I. S. Mian. Identifying marker genes in transcription profiling data using a mixture of feature relevance experts. Physiol Genomics, 5(2):99–111, 2001.
  • Driscoll et al. [1999] B. Driscoll, S. Buckley, L. Barsky, K. Weinberg, K. D. Anderson, and D. Warburton. Abrogation of cyclin D1 expression predisposes lung cancer cells to serum deprivation-induced apoptosis. Am J Physiol, 276(4 Pt 1):L679–687, 1999.
  • Eisen and Brown [1999] M. Eisen and P. Brown. DNA arrays for analysis of gene expression. Methods Enzymology, 303:179–205, 1999.
  • Furey et al. [2000] T. S. Furey, N. Cristianini, N. Duffy, D. W. Bednarski, M. Schummer, and D. Haussler. Support vector machine classification and validation of cancer tissue samples using microarray expression data. Bioinformatics, 16:906–914, 2000.
  • Garber et al. [2001] M. E. Garber, O. G. Troyanskaya, K. Schluens, S. Petersen, Z. Thaesler, M. Pacyna-Gengelbach, M. van de Rijn, G. D. Rosen, C. M. Perou, R. I. Whyte, R. B. Altman, P. O. Brown, D. Botstein, and I. Petersen. Diversity of gene expression in adenocarcinoma of the lung. Proc. Natl. Acad. Sci. USA, 98(24):13784–13789, 2001.
  • Golub et al. [1999] T. R. Golub, D. K. Slonim, P. Tamayo, C. Huard, M. Gaasenbeek, J. P. Mesirov, H. Coller, M. L. Loh, J. R. Downing, M. A. Caligiuri, C. D. Bloomfield, and E. S. Lander. Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science, 286(5439):531–537, 1999.
  • Gusterson et al. [2005] B. A. Gusterson, D. T. Ross, V. J. Heath, and T. Stein. Basal cytokeratins and their relationship to the cellular origin and functional classification of breast cancer. Breast Cancer Research, 7:143–148, 2005.
  • Guyon et al. [2002] Isabelle Guyon, Jason Weston, Stephen Barnhill, and Vladimir Vapnik. Gene selection for cancer classification using support vector machines. Machine Learning, 46:389–422, 2002.
  • Haussler [1988] D. Haussler. Quantifying inductive bias: AI learning algorithms and Valiant’s learning framework. Artificial Intelligence, 36:177–221, 1988.
  • Hiwatari et al. [2003] Mitsuteru Hiwatari, Tomohiko Taki, Takeshi Taketani, Masafumi Taniwaki, Kenichi Sugita, Mayuko Okuya, Mitsuoki Eguchi, Kohmei Ida, and Yasuhide Hayashi. Fusion of an AF4-related gene, LAF4, to MLL in childhood acute lymphoblastic leukemia with t(2;11)(q11;q23). Oncogene, 22(18):2851–2855, 2003.
  • Huang and Chang [2007] H. Huang and F. Chang. ESVM: Evolutionary support vector machine for automatic feature selection and classification of microarray data. Biosystems, 90(2):516–528, 2007.
  • Kawai et al. [2002] H. Kawai, H. Li, P. Chun, S. Avraham, and H. K. Avraham. Direct interaction between BRCA1 and the estrogen receptor regulates vascular endothelial growth factor (VEGF) transcription and secretion in breast cancer cells. Oncogene, 21(50):7730–7739, 2002.
  • Kuzmin and Warmuth [2007] Dima Kuzmin and Manfred K. Warmuth. Unlabeled compression schemes for maximum classes. J. Mach. Learn. Res., 8:2047–2081, 2007. ISSN 1533-7928.
  • Langford [2005] John Langford. Tutorial on practical prediction theory for classification. Journal of Machine Learning Research, 3:273–306, 2005.
  • Lawrence and Largman [1992] H. J. Lawrence and C. Largman. Homeobox genes in normal hematopoiesis and leukemia. Blood, 80(10):2445–2453, 1992.
  • Lipshutz et al. [1999] R. Lipshutz, S. Fodor, T. Gingeras, and D. Lockhart. High density synthetic oligonucleotide arrays. Nature Genetics, 21(1 Suppl):20–24, 1999.
  • Ma et al. [2005] Y. X. Ma, Y. Tomita, S. Fan, K. Wu, Y. Tong, Z. Zhao, L. N. Song, I. D. Goldberg, and E. M. Rosen. Structural determinants of the BRCA1 : estrogen receptor interaction. Oncogene, 24(11):1831–1846, 2005.
  • Marchand and Shah [2005] Mario Marchand and Mohak Shah. PAC-bayes learning of conjunctions and classification of gene-expression data. In Lawrence K. Saul, Yair Weiss, and Léon Bottou, editors, Advances in Neural Information Processing Systems 17, pages 881–888. MIT Press, Cambridge, MA, 2005.
  • Marchand and Shawe-Taylor [2002] Mario Marchand and John Shawe-Taylor. The set covering machine. Journal of Machine Learning Research, 3:723–746, 2002.
  • Marchand and Sokolova [2005] Mario Marchand and Marina Sokolova. Learning with decision lists of data-dependent features. Journal of Machine Learning Research, 6:427–451, 2005.
  • Masaki et al. [2003] T. Masaki, Y. Shiratori, W. Rengifo, K. Igarashi, M. Yamagata, K. Kurokohchi, N. Uchida, Y. Miyauchi, H. Yoshiji, S. Watanabe, M. Omata, and S. Kuriyama. Cyclins and cyclin-dependent kinases: Comparative study of hepatocellular carcinoma versus cirrhosis. Hepatology, 37(3):534–543, 2003.
  • McAllester [2003] David McAllester. PAC-Bayesian stochastic model selection. Machine Learning, 51:5–21, 2003. A preliminary version appeared in the proceedings of COLT’99.
  • McAllester [1999] David McAllester. Some PAC-Bayesian theorems. Machine Learning, 37:355–363, 1999.
  • Moggs et al. [2005] J. G. Moggs, T. C. Murphy, F. L. Lim, D. J. Moore, R. Stuckey, K. Antrobus, I. Kimber, and G. Orphanides. Anti-proliferative effect of estrogen in breast cancer cells that re-express ERalpha is mediated by aberrant regulation of cell cycle genes. Journal of Molecular Endocrinology, 34:535–551, 2005.
  • Pomeroy et al. [2002] S. L. Pomeroy, P. Tamayo, M. Gaasenbeek, L. M. Sturla, M. Angelo, M. E. McLaughlin, J. Y. Kim, L. C. Goumnerova, P. M. Black, C. Lau, J. C. Allen, D. Zagzag, J. M. Olson, T. Curran, C. Wetmore, J. A. Biegel, T. Poggio, S. Mukherjee, R. Rifkin, A. Califano, G. Stolovitzky, D. N. Louis, J. P. Mesirov, E. S. Lander, and T. R. Golub. Prediction of central nervous system embryonal tumour outcome based on gene expression. Nature, 415(6870):436–442, 2002.
  • Seeger [2002] Matthias Seeger. PAC-Bayesian generalization bounds for gaussian processes. Journal of Machine Learning Research, 3:233–269, 2002.
  • Shah and Corbeil [2010] Mohak Shah and Jacques Corbeil. A general framework for analyzing data from two short time-series microarray experiments. IEEE/ACM Transactions on Computational Biology and Bioinformatics, to appear, 2010. doi: http://doi.ieeecomputersociety.org/10.1109/TCBB.2009.51.
  • Song et al. [2007] L. Song, J. Bedo, K. M. Borgwardt, A. Gretton, and A. Smola. Gene selection via the BAHSIC family of algorithms. Bioinformatics, 23(13):490–498, 2007.
  • Su et al. [2003] Yang Su, T.M. Murali, Vladimir Pavlovic, Michael Schaffer, and Simon Kasif. RankGene: identification of diagnostic genes based on expression data. Bioinformatics, 19(12):1578–1579, 2003.
  • Tibshirani et al. [2003] R. Tibshirani, T. Hastie, B. Narasimhan, and G. Chu. Class prediction by nearest shrunken centroids with applications to DNA microarrays. Statistical Science, 18:104–117, 2003.
  • Wang et al. [2007] Lipo Wang, Feng Chu, and Wei Xie. Accurate cancer classification using expressions of very few genes. IEEE/ACM Trans. Comput. Biol. Bioinformatics, 4(1):40–53, 2007. ISSN 1545-5963. doi: http://dx.doi.org/10.1109/TCBB.2007.1006.
  • West et al. [2001] M. West, C. Blanchette, H. Dressman, E. Huang, S. Ishida, R. Spang, H. Zuzan, J. A. Olson Jr, J. R. Marks, and J. R. Nevins. Predicting the clinical status of human breast cancer by using gene expression profiles. Proc. Natl. Acad. Sci. USA, 98(20):11462–11467, 2001.
  • Witten and Frank [2005] Ian H. Witten and Eibe Frank. Data Mining: Practical machine learning tools and techniques, 2nd Ed. Morgan Kaufmann, San Francisco, 2005.
  • Zhang and Yu [2005] T. Zhang and B. Yu. Boosting with early stopping: Convergence and consistency. The Annals of Statistics, 33:1538–1579, 2005.