C-mix: a high dimensional mixture model for censored durations, with applications to genetic data

Abstract

We introduce a supervised learning mixture model for censored durations (C-mix) to simultaneously detect subgroups of patients with different prognosis and order them based on their risk. Our method is applicable in a high-dimensional setting, i.e. with a large number of biomedical covariates. Indeed, we penalize the negative log-likelihood by the Elastic-Net, which leads to a sparse parameterization of the model and automatically pinpoints the relevant covariates for the survival prediction. Inference is achieved using an efficient Quasi-Newton Expectation Maximization (QNEM) algorithm, for which we provide convergence properties. The statistical performance of the method is examined in an extensive Monte Carlo simulation study, and finally illustrated on three publicly available genetic cancer datasets with high-dimensional covariates. We show that our approach outperforms the state-of-the-art survival models in this context, namely both the CURE and Cox proportional hazards models penalized by the Elastic-Net, in terms of C-index, AUC($t$) and survival prediction. Thus, we propose a powerful tool for personalized medicine in oncology.
Keywords. Cox’s proportional hazards model; CURE model; Elastic-net regularization; High-dimensional estimation; Mixture duration model; Survival analysis

Simon Bussy
Theoretical and Applied Statistics Laboratory
Pierre and Marie Curie University, Paris, France
email: simon.bussy@gmail.com

Agathe Guilloux
LaMME, UEVE and UMR 8071
Paris Saclay University, Evry, France
email: agathe.guilloux@math.cnrs.fr

Stéphane Gaïffas
CMAP Ecole polytechnique, Palaiseau, France
and LPMA, UMR CNRS 7599, Paris Diderot University, Paris, France
email: stephane.gaiffas@polytechnique.edu

Anne-Sophie Jannot
Biomedical Informatics and Public Health Department
European Georges Pompidou Hospital, Assistance Publique-Hôpitaux de Paris
and INSERM UMRS 1138, Centre de Recherche des Cordeliers, Paris, France
email: annesophie.jannot@aphp.fr

1 Introduction

Predicting subgroups of patients with different prognosis is a key challenge for personalized medicine, see for instance [1] and [27] where subgroups of patients with different survival rates are identified based on gene expression data. A substantial number of techniques can be found in the literature to predict the subgroup of a given patient in a classification setting, namely when subgroups are known in advance [15]. We consider in the present paper the much more difficult case where subgroups are unknown.

In this situation, a first widespread approach consists in applying unsupervised learning techniques to the covariates – for instance to the gene expression data [6] – to define subsets of patients, and then estimating the risks in each of them. The problem with such techniques is that there is no guarantee that the identified subgroups will have different risks. Another approach to subgroup identification is conversely based exclusively on the survival times: patients are then assigned to a “low-risk” or a “high-risk” group based on whether or not they were still alive at a given time [29]. The problem here is that the resulting subgroups may not be biologically meaningful, since the method does not use the covariates, and prognosis prediction based on covariates is not possible.

The method we propose uses both the survival information of the patients and their covariates in a supervised learning framework. Moreover, it relies on the idea that exploiting the subgroup structure of the data, namely the fact that a portion of the population has a higher risk of early death, could improve the survival prediction of future patients (unseen during the learning phase).

We propose to consider a mixture of event times distributions in which the probabilities of belonging to each subgroup are driven by the covariates (e.g. gene expression data, patient characteristics, therapeutic strategy or omics covariates). Our C-mix model is hence part of the class of model-based clustering algorithms, as introduced in [3].

More precisely, to model the heterogeneity within the patient population, we introduce a latent variable $Z$ and our focus is on the conditional distribution of $Z$ given the values of the covariates $X = x$. Now, conditionally on the latent variable $Z$, the distribution of the duration $T$ is different, leading to a mixture in the event times distribution.

For a patient with covariates $x$, the conditional probabilities of belonging to the $k$-th risk group can be seen as scores that can help decision-making for physicians. As a byproduct, they can also shed light on the effect of the covariates (which combination of biomedical markers is relevant for a given event of interest).

Our methodology differs from the standard survival analysis approaches in various ways, which we describe in this paragraph. First, the Cox proportional hazards (PH) model [10] (by far the most widely used in such a setting) is a regression model that describes the relation between the intensity of events and the covariates via

\[ \lambda(t \mid X = x) = \lambda_0(t) \exp(x^\top \beta), \]

where $\lambda_0$ is a baseline intensity and $\beta \in \mathbb{R}^d$ is a vector quantifying the multiplicative impact on the hazard ratio of each covariate. As in our model, high-dimensional covariates can be handled via penalization (e.g. the Elastic-Net), see [30]. But it does not permit the stratification of the population into groups of homogeneous risks, hence it does not deliver a simple tool for clinical practice. Moreover, we show in the numerical sections that the C-mix model can be trained very efficiently in high dimension, and outperforms the standard Cox PH model by far on the analysed datasets.

Other models consider mixtures of event times distributions. In the CURE model (see [14] and [23]), one fraction of the population is considered as cured (hence not subject to any risk). This can be very limiting since, for a large number of applications (e.g. rehospitalization for patients with chronic diseases or relapse for patients with metastatic cancer), all patients are at risk. We consider, in our model, that there is always an event risk, no matter how small. Other mixture models have been considered in survival analysis: see [24] for a general study of mixture models for survival data or [11] in a cancer survival analysis setting, to name but a few. Unlike ours, none of these algorithms considers the high-dimensional setting.

A precise description of the model is given in Section 2. Section 3 focuses on the regularized version of the model, with an Elastic-Net penalization to exploit dimension reduction and prevent overfitting. Inference is presented under this framework, as well as the convergence properties of the developed algorithm. Section 4 describes the simulation procedure used to evaluate the performances and compares the model with state-of-the-art approaches. In Section 5, we apply our method to genetic datasets. Finally, we discuss the obtained results in Section 6.

2 A censored mixture model

Let us present the survival analysis framework. We assume that the conditional density of the duration $T$ given covariates $X = x$ is a mixture

\[ f(t \mid X = x) = \sum_{k=0}^{K-1} \pi_k(x)\, f_k(t; \alpha_k) \]

of densities $f_k(\,\cdot\,; \alpha_k)$, for $k = 0, \dots, K-1$ and some parameters $\alpha_k$ to estimate. The weights $\pi_k(x)$ combining these distributions depend on the patient biomedical covariates $x \in \mathbb{R}^d$ and are such that

\[ \sum_{k=0}^{K-1} \pi_k(x) = 1 \quad \text{and} \quad \pi_k(x) \ge 0 \text{ for all } k. \]

This is equivalent to saying that conditionally on a latent variable $Z = k$, the density of $T$ at time $t$ is $f_k(t; \alpha_k)$, and we have

\[ \pi_k(x) = \pi_{\beta_k}(x) = \mathbb{P}[Z = k \mid X = x], \]

where $\beta_k \in \mathbb{R}^d$ denotes a vector of coefficients that quantifies the impact of each biomedical covariate on the probability that a patient belongs to the $k$-th group. Consider a logistic link function for these weights, given by

\[ \pi_{\beta_k}(x) = \frac{e^{x^\top \beta_k}}{\sum_{l=0}^{K-1} e^{x^\top \beta_l}}. \]

The hidden status $Z$ has therefore a multinomial distribution $\mathcal{M}\big(\pi_{\beta_0}(x), \dots, \pi_{\beta_{K-1}}(x)\big)$. The intercept term is here omitted without loss of generality.
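To make the link function concrete, here is a minimal sketch of these multinomial logistic (softmax) weights in Python; the function and variable names (gating_weights, betas) are ours and not part of the paper's implementation.

```python
import numpy as np

def gating_weights(X, betas):
    """Multinomial logistic (softmax) weights pi_k(x) for each patient.

    X     : (n, d) matrix of covariates.
    betas : (K, d) matrix whose k-th row is the coefficient vector beta_k.
    Returns an (n, K) matrix of probabilities whose rows sum to one.
    """
    scores = X @ betas.T                          # linear scores x^T beta_k
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum(axis=1, keepdims=True)

# Toy usage: 5 patients, 3 covariates, K = 2 groups (beta_0 = 0 as reference).
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
betas = np.vstack([np.zeros(3), rng.normal(size=3)])
print(gating_weights(X, betas))
```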

In practice, information loss occurs due to right censoring. This is taken into account in our model by introducing the following quantities: a time $C$ when the individual “leaves” the target cohort, a right-censored duration $Y$ and a censoring indicator $\Delta$, defined by

\[ Y = T \wedge C \quad \text{and} \quad \Delta = \mathbb{1}_{\{T \le C\}}, \]

where $a \wedge b$ denotes the minimum between two numbers $a$ and $b$, and $\mathbb{1}_{\{\cdot\}}$ denotes the indicator function.
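A minimal sketch of this censoring mechanism in Python, on toy data (the geometric rates are arbitrary):

```python
import numpy as np

def censor(T, C):
    """Right censoring: observed duration Y = min(T, C), indicator delta = 1{T <= C}."""
    return np.minimum(T, C), (T <= C).astype(int)

rng = np.random.default_rng(1)
T = rng.geometric(p=0.10, size=8)   # true event times (in days)
C = rng.geometric(p=0.05, size=8)   # censoring times
Y, delta = censor(T, C)
print(Y, delta)  # delta == 1 means the event was actually observed
```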

In order to write a likelihood and draw inference, we make the two following hypotheses.

Hypothesis 1 is classical in survival analysis [22], while Hypothesis 2 is classical in survival mixture models [24]. Under these hypotheses, denoting $g$ the density of the censoring $C$, $F$ the cumulative distribution function corresponding to a given density $f$, and $\bar F = 1 - F$ the associated survival function, we have

Then, denoting $\theta$ the vector of all parameters to infer and considering an independent and identically distributed (i.i.d.) cohort of $n$ patients $(x_1, y_1, \delta_1), \dots, (x_n, y_n, \delta_n)$, the log-likelihood of the C-mix model can be written

where we use shorthand notations for the mixture weights, densities and survival functions evaluated at the observed data. Note that from now on, all computations are done conditionally on the covariates. An important fact is that we do not need to know or parametrize the distribution of the censoring for inference in this model, since the corresponding terms factor out of the log-likelihood and do not depend on $\theta$.

3 Inference of C-mix

In this section, we describe the procedure for estimating the parameters of the C-mix model. We begin by presenting the Quasi-Newton Expectation Maximization (QNEM) algorithm we use for inference. We then focus our study on the convergence properties of the algorithm.

3.1 QNEM algorithm

In order to avoid overfitting and to improve the prediction power of our model, we use Elastic-Net regularization [40] by minimizing the penalized objective

\[ \ell_n^{\text{pen}}(\theta) = -\ell_n(\theta) + \gamma \Big( (1 - \eta) \|\beta\|_1 + \frac{\eta}{2} \|\beta\|_2^2 \Big), \]

where we add a linear combination of the lasso ($\ell_1$) and ridge (squared $\ell_2$) penalties for a fixed $\eta \in [0, 1]$, a tuning parameter $\gamma > 0$, and where $\|\beta\|_p$ denotes the $\ell_p$-norm of $\beta$. One advantage of this regularization method is its ability to perform model selection (the lasso part) and pinpoint the most important covariates relative to the prediction objective. On the other hand, the ridge part makes it possible to handle potential correlation between covariates [40]. Note that in practice, the intercept is not regularized.
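For illustration, here is a hedged sketch of such a penalty in Python; the weighting shown (including the 1/2 factor on the ridge term) follows our reconstruction above and may differ from the paper's exact convention.

```python
import numpy as np

def elastic_net_penalty(beta, gamma, eta):
    """gamma * ((1 - eta) * ||beta||_1 + eta / 2 * ||beta||_2^2).

    gamma >= 0 is the tuning parameter and eta in [0, 1] balances the lasso
    and ridge parts; the intercept (not included in beta here) is unpenalized.
    """
    return gamma * ((1.0 - eta) * np.abs(beta).sum() + 0.5 * eta * (beta @ beta))

print(elastic_net_penalty(np.array([0.5, -1.2, 0.0, 3.0]), gamma=0.1, eta=0.3))
```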

In order to derive an algorithm for this objective, we introduce a so-called Quasi-Newton Expectation Maximization (QNEM) algorithm, which combines an EM algorithm [12] with the L-BFGS-B algorithm [39]. For the EM part, we need to compute the negative completed log-likelihood (here scaled by $n^{-1}$), namely the negative log of the joint distribution of the observed data and the latent variables. It can be written

Suppose that we are at step $l$ of the algorithm, with current iterate denoted $\theta^{(l)}$. For the E-step, we need to compute the expected log-likelihood given by

We note that

with

so that the expected completed log-likelihood is obtained from the completed log-likelihood by replacing the two occurrences of the latent group indicators with their conditional expectations given the observed data (the posterior probabilities computed in the E-step). Depending on the chosen distributions $f_k$, the M-step can either be explicit for the updates of the distribution parameters (see Section 3.3 below for the geometric distributions case), or obtained using a minimization algorithm otherwise.

Let us focus now on the update of $\beta$ in the M-step of the algorithm. By denoting

the quantities involved in the expected completed log-likelihood that depend on $\beta$, the update for $\beta$ therefore requires minimizing

This minimization problem is convex. It resembles the logistic regression objective, except that the labels are not fixed but softly encoded by the expectation step (through the posterior probabilities computed above).

We minimize this objective using the well-known L-BFGS-B algorithm [39]. This algorithm belongs to the class of quasi-Newton optimization routines, which solve the given minimization problem by computing approximations of the inverse Hessian matrix of the objective function. It can deal with differentiable convex objectives with box constraints. In order to use it with the lasso penalization, which is not differentiable, we use the trick borrowed from [2]: for each coefficient, write $\beta_j = \beta_j^+ - \beta_j^-$, where $\beta_j^+$ and $\beta_j^-$ are respectively the positive and negative parts of $\beta_j$, and add the constraints $\beta_j^+ \ge 0$ and $\beta_j^- \ge 0$. Namely, we rewrite the minimization problem as the following differentiable problem with box constraints

The L-BFGS-B solver requires the exact value of the gradient, which is easily computed in closed form.

In Algorithm 1, we describe the main steps of the QNEM algorithm used to minimize the penalized objective $\ell_n^{\text{pen}}(\theta)$.
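To illustrate the positive/negative-part trick in practice, here is a hedged sketch in Python using scipy's L-BFGS-B solver. The objective below is a generic soft-label logistic loss with an Elastic-Net penalty, not the exact M-step objective of the paper; all names (fit_beta_soft_logistic, q, gamma, eta) are ours.

```python
import numpy as np
from scipy.optimize import minimize

def fit_beta_soft_logistic(X, q, gamma, eta):
    """M-step sketch: Elastic-Net-penalized logistic objective with soft labels q in
    [0, 1], solved by L-BFGS-B after the split beta = beta_plus - beta_minus (both
    >= 0), which turns the lasso part into a linear term under box constraints."""
    n, d = X.shape

    def obj_and_grad(z):
        beta = z[:d] - z[d:]
        u = X @ beta
        # Smooth part: soft-label logistic loss plus the ridge term.
        loss = np.mean(np.logaddexp(0.0, u) - q * u) + 0.5 * gamma * eta * (beta @ beta)
        # Lasso part, now linear in (beta_plus, beta_minus).
        loss += gamma * (1.0 - eta) * z.sum()
        g_beta = X.T @ (1.0 / (1.0 + np.exp(-u)) - q) / n + gamma * eta * beta
        grad = np.concatenate([g_beta, -g_beta]) + gamma * (1.0 - eta)
        return loss, grad

    bounds = [(0.0, None)] * (2 * d)  # beta_plus >= 0 and beta_minus >= 0
    res = minimize(obj_and_grad, np.zeros(2 * d), jac=True,
                   method="L-BFGS-B", bounds=bounds)
    return res.x[:d] - res.x[d:]

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 10))
q = rng.uniform(size=50)  # soft labels, e.g. posterior probabilities from the E-step
print(fit_beta_soft_logistic(X, q, gamma=0.1, eta=0.1))
```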

The penalization parameters are chosen using cross-validation; see Appendix A for precise statements about this procedure and about other numerical details.

3.2 Convergence to a stationary point

We address here the convergence properties of the QNEM algorithm described in Section 3.1 for the minimization of the penalized objective $\ell_n^{\text{pen}}$. Let us denote

Convergence properties of the EM algorithm in a general setting are well known, see [37]. In the QNEM algorithm, since we only improve the expected completed log-likelihood in the M-step instead of minimizing it exactly, we are not in the EM algorithm setting but in a so-called generalized EM (GEM) algorithm setting [12]. For such an algorithm, we do have the descent property, in the sense that the penalized objective $\ell_n^{\text{pen}}$ is reduced at each iteration, namely

Let us make the two following hypotheses.

Under Hypothesis 3, $\ell_n^{\text{pen}}(\theta^{(l)})$ decreases monotonically to some finite limit. By adding Hypothesis 4, convergence of the QNEM algorithm to a stationary point can be shown. In particular, the stationary point is here a local minimum.

A proof is given in Appendix B.

3.3 Parameterization

Let us discuss here the parameterization choices we made in the experimental part. First, in many applications – including the one addressed in Section 5 – we are interested in identifying one subgroup of the population with a high risk of adverse event compared to the others. Hence, in the following, we consider $Z \in \{0, 1\}$, where $Z = 1$ means high risk of early death and $Z = 0$ means low risk. Moreover, in such a setting with two groups, one can compare the groups learned by the C-mix model and the ones learned by the CURE model in terms of survival curves (see Figure ?).

To simplify notations and given the constraint formulated in Equation (2), we set the coefficient vector of the low-risk group to zero, and we denote by $\beta$ the remaining coefficient vector and by $\pi_\beta(x)$ the conditional probability that a patient belongs to the high-risk group, given covariates $x$.

In practice, we deal with discrete times in days. It turns out that the durations in the data used for the applications in Section 5 are well fitted by Weibull distributions. This choice of distribution is very popular in survival analysis, see for instance [22]. We then first derive the QNEM algorithm with

with here, for each component, a scale parameter and a shape parameter to estimate.

As explained in Section 4, we select the best model using a cross-validation procedure based on the C-index metric, and the performances are evaluated according to both the C-index and AUC($t$) metrics (see Section 4.3 for details). Those two metrics have the following property: if we apply to the marker vector (predicted on a test set) any mapping that preserves the order between all its coefficient values, then both the C-index and AUC($t$) estimates remain unchanged. In other words, denoting $\hat M$ the vector of markers predicted on a test set and $\phi$ an increasing function, the C-index and AUC($t$) estimates induced by $\hat M$ and by $\phi$ applied coordinate-wise to $\hat M$ are the same.

The order of the marker coefficients is actually paramount when the performances are evaluated according to the mentioned metrics. Furthermore, it turns out empirically that if we add a constraint on the mixture of Weibull distributions that enforces an order-like relation between the two component distributions, the performances are improved. To be more precise, the constraint to impose is that the two density curves do not intersect. We then choose to impose the following: the two scale parameters are equal. Indeed, under this hypothesis, the desired ordering between the two component distributions holds for all times.

With this Weibull parameterization, updates for the distribution parameters are not explicit in the QNEM algorithm, and consequently require some iterations of a minimization algorithm. Seeking explicit updates, we then derive the algorithm with geometric distributions instead of Weibull ones (geometric being a particular case of Weibull with shape parameter equal to 1), namely with geometrically distributed durations within each group.

With this parameterization, we obtain from Equation

which leads to the following explicit M-step

In this setting, implementation is hence straightforward. Note that Hypotheses 3 and 4 are immediately satisfied with this geometric parameterization.
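As an illustration, here is a hedged sketch of such E and M steps for a two-group geometric mixture under right censoring. The parameterization used here (pmf $\alpha (1-\alpha)^{t-1}$, survival $(1-\alpha)^t$) and the weighted maximum-likelihood update are our assumptions and may differ in detail from the exact updates of the paper.

```python
import numpy as np

def geom_log_pmf(t, alpha):
    """log f(t) for a geometric distribution: f(t) = alpha * (1 - alpha)^(t - 1), t >= 1."""
    return np.log(alpha) + (t - 1) * np.log1p(-alpha)

def geom_log_surv(t, alpha):
    """log S(t) = log P(T > t) = t * log(1 - alpha)."""
    return t * np.log1p(-alpha)

def e_step(y, delta, pi_high, alpha_low, alpha_high):
    """Posterior probability q_i that patient i belongs to the high-risk group."""
    def log_contrib(alpha):
        # density contribution if the event is observed, survival if censored
        return delta * geom_log_pmf(y, alpha) + (1 - delta) * geom_log_surv(y, alpha)
    log_num = np.log(pi_high) + log_contrib(alpha_high)
    log_den = np.logaddexp(log_num, np.log1p(-pi_high) + log_contrib(alpha_low))
    return np.exp(log_num - log_den)

def m_step_alpha(y, delta, q):
    """Weighted geometric maximum likelihood under right censoring, for each group."""
    alpha_high = np.sum(q * delta) / np.sum(q * y)
    alpha_low = np.sum((1 - q) * delta) / np.sum((1 - q) * y)
    return alpha_low, alpha_high

rng = np.random.default_rng(3)
y = rng.geometric(p=0.2, size=20)
delta = rng.integers(0, 2, size=20)
q = e_step(y, delta, pi_high=np.full(20, 0.5), alpha_low=0.05, alpha_high=0.3)
print(m_step_alpha(y, delta, q))
```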

In Section 5, we observe that performances are similar for the C-mix model with Weibull or geometric distributions on all considered biomedical datasets. Since the geometric parameterization leads to more straightforward computations, it is the one used to parameterize the C-mix model in what follows, unless otherwise stated. Let us focus now on the performance evaluation of the C-mix model and its comparison with the Cox PH and CURE models, both regularized with the Elastic-Net.

4 Performance evaluation

In this section, we first briefly introduce the models we consider for performance comparisons. Then, we provide details regarding the simulation study and data generation. The chosen metrics for evaluating performances are then presented, followed by the results.

4.1 Competing models

The first model we consider is the Cox PH model penalized by the Elastic-Net, simply denoted Cox PH in the following. In this model, introduced in [10], the partial log-likelihood is given by

\[ \ell_n^{\text{cox}}(\beta) = \sum_{i \,:\, \delta_i = 1} \Big( x_i^\top \beta - \log \sum_{j \,:\, y_j \ge y_i} e^{x_j^\top \beta} \Big). \]

We use respectively the R packages survival and glmnet [30] for the partial log-likelihood and for the minimization of the following quantity

\[ -\ell_n^{\text{cox}}(\beta) + \gamma \Big( (1 - \eta) \|\beta\|_1 + \frac{\eta}{2} \|\beta\|_2^2 \Big), \]

where $\gamma$ is chosen by the same cross-validation procedure as for the C-mix model, for a given $\eta$ (see Appendix A). Ties are handled via the Breslow approximation of the partial likelihood [9].

We remark that the model introduced in this paper cannot be reduced to a Cox model. Indeed, the C-mix model intensity can be written (in the geometric case)

while it has the proportional hazards form recalled in Section 1 in the Cox model.

Finally, we consider the CURE model [14] penalized by the Elastic-Net and denoted CURE in the following, with a logistic function for the incidence part and a parametric survival model for the latency part, where $Z = 0$ means that the patient is cured and $Z = 1$ that the patient is not cured. In this model, the survival function of the cured group is then constant and equal to 1. We add an Elastic-Net regularization term and, since we were not able to find any open source package implementing CURE models with a regularized objective, we use the QNEM algorithm in the particular case of the CURE model. We simply add the constraint that the geometric distribution corresponding to the cured group of patients ($Z = 0$) has a fixed parameter, which does not change over the algorithm iterations. The QNEM algorithm can be used in this particular case, where some terms have disappeared from the completed log-likelihood, since in the CURE model the cured patients cannot experience the event. Note that in the original introduction of the CURE model in [14], the density of uncured patients directly depends on individual patient covariates, which is not the case here.

We also present additional simulation settings in the Appendices. First, a higher-dimensional case, including a comparison of the screening strategy we use in Section 5 with the iterative sure independence screening (ISIS) method [13]. We also add simulations where data is generated according to the C-mix model with gamma distributions instead of geometric ones, and include the accelerated failure time (AFT) model [36] in the performance comparison study.

4.2 Simulation design

In order to assess the proposed method, we perform an extensive Monte Carlo simulation study. Since we want to compare the performances of the 3 models mentioned above, we consider 3 simulation cases for the time distribution: one for each competing model. We first choose a sparse coefficient vector $\beta$, with a common value for the active coefficients and a given sparsity parameter. For a desired low-risk patients proportion, the high-risk patients index set is given by

where $\lfloor a \rfloor$ denotes the largest integer less than or equal to $a$. For the generation of the covariates matrix, we first draw each row from a centered Gaussian distribution with a Toeplitz covariance matrix [25] with a given correlation. We then add a gap value to the covariates of high-risk patients and subtract it for low-risk patients, only on the active covariates plus a proportion $r_{cf}$ of the non-active covariates considered as confusion factors.

Note that this is equivalent to generating the covariates according to a Gaussian mixture.
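A hedged sketch of this covariate generation in Python follows; the function name, default values and the exact way the confusion-factor count is derived from $r_{cf}$ are our choices for illustration only.

```python
import numpy as np
from scipy.linalg import toeplitz

def simulate_covariates(n, d, n_active, rho=0.5, gap=0.3, r_cf=0.2,
                        pi_low=0.75, seed=0):
    """Toy covariate generation in the spirit of the design above (notation is ours).

    Draws X ~ N(0, Sigma) with a Toeplitz covariance Sigma[j, k] = rho ** |j - k|,
    then adds `gap` for high-risk patients and subtracts it for low-risk ones, on the
    active covariates plus some extra non-active ones acting as confusion factors.
    """
    rng = np.random.default_rng(seed)
    Sigma = toeplitz(rho ** np.arange(d))
    X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    n_high = n - int(np.floor(pi_low * n))
    high_risk = np.arange(n_high)                          # high-risk index set
    shifted = np.arange(n_active + int(r_cf * n_active))   # active + confusion columns
    X[np.ix_(high_risk, shifted)] += gap
    X[np.ix_(np.arange(n_high, n), shifted)] -= gap
    return X, high_risk

X, high_risk = simulate_covariates(n=100, d=30, n_active=10)
print(X.shape, high_risk.size)
```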

Then, in the C-mix or CURE simulation case, we generate the latent group of each patient, with a probability of being at high risk computed from Equation (2), and we draw the durations from geometric distributions (see Section 3.3), one for each group in the C-mix case and with a cured group in the CURE case. For the Cox PH model, we generate the durations according to the proportional hazards model recalled in Section 1, part of its parameters being drawn from a uniform distribution on a segment.

The distribution of the censoring variable is geometric. Its parameter is tuned to maintain a desired censoring rate $r_c$, using a formula given in the Appendices. The values of the chosen hyper-parameters are summarized in Table ?.

Note that when simulating under the CURE model, the proportion of censored time events is at least equal to the proportion of cured patients: we then choose a specific censoring rate for the CURE simulations only.

Finally, we want to assess the stability of the C-mix model in terms of variable selection and compare it to the CURE and Cox PH models. To this end, we follow the same simulation procedure explained in the previous lines. For each simulation case, we vary the two hyper-parameters that impact the stability of the variable selection the most, that is, the gap and the confusion rate $r_{cf}$. All other hyper-parameters are the same as in Table ?, up to a few exceptions. For a given hyper-parameter configuration (gap, $r_{cf}$), we use the following approach to evaluate the variable selection power of the models: if we consider the magnitude of each estimated coefficient as a prediction score for whether the corresponding true coefficient is active, then we are in a binary prediction setting and we use the resulting AUC of this problem. Explicit examples of such AUC computations are given in the Appendices.

4.3 Metrics

We detail in this section the metrics considered to evaluate risk prediction performances. Let us denote by $M$ the marker under study. Note that the marker is the estimated probability of belonging to the high-risk group in the C-mix and CURE model cases, and the linear predictor $x^\top \hat\beta$ in the Cox PH model case. We assume that the marker is measured once, at the beginning of the follow-up.

For any threshold $\xi$, cumulative true positive rates and dynamic false positive rates are two functions of time, respectively defined as $\text{TP}(\xi, t) = \mathbb{P}[M > \xi \mid T \le t]$ and $\text{FP}(\xi, t) = \mathbb{P}[M > \xi \mid T > t]$. Then, as introduced in [19], the cumulative dynamic time-dependent AUC is defined as follows

\[ \text{AUC}(t) = \mathbb{P}[M_i > M_j \mid T_i \le t, T_j > t] \]

for two independent patients $i$ and $j$, which we simply denote AUC($t$) in the following. We use the Inverse Probability of Censoring Weighting (IPCW) estimate of this quantity, with a Kaplan-Meier estimator of the censoring survival function, as proposed in [7] and already implemented in the R package timeROC.

A common concordance measure that does not depend on time is the C-index [16], defined by

\[ C = \mathbb{P}[M_i > M_j \mid T_i < T_j], \]

with $i \ne j$ two independent patients (which does not depend on $(i, j)$ under the i.i.d. sample hypothesis). In our case, $T$ is subject to right censoring, so one would typically consider the modified $C^\tau$ defined by

\[ C^\tau = \mathbb{P}[M_i > M_j \mid T_i < T_j, \, T_i < \tau], \]

with $\tau$ corresponding to the fixed and prespecified follow-up period duration [18]. A Kaplan-Meier estimator for the censoring distribution leads to a nonparametric and consistent estimator of $C^\tau$ [34], already implemented in the R package survival.
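The paper uses the censoring-weighted estimator from the R package survival. As a simple Python illustration only, Harrell's unweighted C-index (a different, simpler estimator) can be computed with the lifelines package:

```python
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(5)
y = rng.geometric(p=0.02, size=100)       # observed (possibly censored) durations
delta = rng.integers(0, 2, size=100)      # 1 if the event was observed
risk_marker = rng.uniform(size=100)       # e.g. estimated probability of high risk

# lifelines expects a score concordant with longer survival, hence the minus sign
# in front of a risk marker (higher risk should mean shorter survival).
print(concordance_index(y, -risk_marker, event_observed=delta))
```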

Hence, in the following, we consider both the AUC($t$) and C-index metrics to assess performances.

4.4 Results of simulation

We now present the simulation results concerning the C-index metric in Table ?. See the Appendices for results on other configurations. Each value is obtained by computing the C-index average and standard deviation (in parentheses) over 100 simulations. The AUC($t$) average (bold line) and standard deviation (bands) over the same 100 simulations are then given in Figure ?, where $n = 100$, $d = 30$ and $r_c = 0.5$. Note that the value of the gap can be viewed as a difficulty level of the problem, since the higher the value of the gap, the clearer the separation between the two populations (low-risk and high-risk patients).

The results measured both by AUC($t$) and C-index lead to the same conclusion: the C-mix model almost always leads to the best results, even under model misspecification, i.e. when data is generated according to the CURE or Cox PH model. Namely, under CURE simulations, C-mix and CURE give very close results, with a strong improvement over Cox PH. Under Cox PH and C-mix simulations, C-mix outperforms both Cox PH and CURE. Surprisingly enough, this exhibits a strong generalization property of the C-mix model over both Cox PH and CURE. Note that this phenomenon is particularly strong for small gap values, while with an increasing gap (or an increasing sample size $n$), all procedures exhibit nearly the same performance. This can be explained first by the non-parametric baseline function in the Cox PH model, and second by the fact that, unlike the Cox PH model, the C-mix and CURE models directly exploit the mixture aspect.

Finally, Figure ? gives the results concerning the stability of the variable selection aspect of the competing models. The C-mix model appears to be the best method regarding variable selection as well, even under model misspecification. We notice a general behaviour of our method, described in the following, which is shared by the CURE model only when the data is simulated according to it, and which justifies the log scale for the gap to clearly distinguish the three following phases. For very small gap values, the confusion rate does not impact the variable selection performances, since adding very small gap values to the covariates is almost imperceptible. This means that the resulting AUC is the same when there are no confusion factors and when $r_{cf}$ is at its maximum value (that is, when there are half active covariates and half confusion ones). For medium gap values (say below 1), the confusion factors are more difficult for the model to identify as their number goes up (that is, as $r_{cf}$ increases), which is precisely the confusion factor effect we expect to observe. Then, for large gap values (more than 1), the model succeeds in properly discarding all confusion factors, since the two subpopulations are more clearly separated regarding the covariates, and the problem naturally becomes easier as the gap increases.

Average (bold lines) and standard deviation (bands) for AUC($t$) on 100 simulated datasets with $n = 100$, $d = 30$ and $r_c = 0.5$. Rows correspond to the simulated model (cf. Section 4.2) while columns correspond to different gap values (the problem becomes more difficult as the gap value decreases). Surprisingly, our method almost always gives the best results, even under model misspecification (see the Cox PH and CURE simulation cases on the second and third rows).
Average AUC calculated according to Section 4.2 and obtained over 100 simulated datasets for each (gap, $r_{cf}$) configuration (a grid of 20x20 different configurations is considered). A Gaussian interpolation is then performed to obtain smooth figures. Note that the gap values are log-scaled. Rows correspond to the simulated model while columns correspond to the model under consideration for the variable selection evaluation procedure. Our method gives the best results in terms of variable selection, even under model misspecification.

5 Application to genetic data

In this section, we apply our method to three genetic datasets and compare its performance to that of the Cox PH and CURE models. We extracted normalized expression data and survival times in days for breast invasive carcinoma (BRCA), glioblastoma multiforme (GBM) and kidney renal clear cell carcinoma (KIRC) patients.

These datasets are available on The Cancer Genome Atlas (TCGA) platform, which aims at accelerating the understanding of the molecular basis of cancer through the application of genomic technologies, including large-scale genome sequencing. For each patient, 20531 covariates corresponding to the normalized gene expressions are available. We randomly split all datasets into a training set and a test set (30% for testing; cross-validation is performed on the training set).

We compare the three models in terms of both C-index and AUC($t$) on the test sets. Inference of the Cox PH model fails in very high dimension on the considered data with the glmnet package. We therefore perform a first variable selection (screening) among the 20531 covariates. To do so, we compute the C-index obtained by univariate Cox PH models (so as not to confer an advantage to our method), namely Cox PH models fitted on each covariate separately. We then order the obtained 20531 C-indexes in decreasing order and extract the top $d$ covariates, with $d \in \{100, 300, 1000\}$. We then apply the three methods on the selected covariates.
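A hedged sketch of this univariate screening step in Python with the lifelines package (the paper's actual implementation may differ; all names are ours):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def univariate_cox_screening(X, y, delta, top=300):
    """Rank covariates by the C-index of a univariate Cox PH model fitted on each
    of them separately, and return the indices of the `top` best ones."""
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        df = pd.DataFrame({"x": X[:, j], "T": y, "E": delta})
        cph = CoxPHFitter()
        cph.fit(df, duration_col="T", event_col="E")
        scores[j] = cph.concordance_index_
    return np.argsort(scores)[::-1][:top]

rng = np.random.default_rng(6)
X = rng.normal(size=(80, 50))
y = rng.geometric(p=0.05, size=80)
delta = rng.integers(0, 2, size=80)
print(univariate_cox_screening(X, y, delta, top=10))
```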

The results in terms of AUC($t$) curves are given in Figure ? for one choice of $d$, where we distinguish the C-mix model with geometric or Weibull distributions.

Figure 1: BRCA
Figure 2: GBM
Figure 3: KIRC

It then appears that the performances in terms of AUC($t$) are very close between the C-mix model with geometric and Weibull distributions, which is also confirmed if we compare the corresponding C-indexes for these two parameterizations in Table ?.

C-index comparison between the geometric and Weibull parameterizations for the C-mix model on the three TCGA datasets considered (for a given $d$). In all cases, results are very similar for the two distribution choices.
              Parameterization
Cancer        Geometric      Weibull
BRCA          0.782          0.780
GBM           0.755          0.754
KIRC          0.849          0.835

Similar conclusions in terms of C-index, AUC($t$) and computing time can be drawn on all considered datasets and for any choice of $d$. Hence, as already mentioned in Section 3.3, we only concentrate on the geometric parameterization for the C-mix model. The results in terms of C-index are then given in Table ?.

A more direct approach to compare performances between models, rather than focusing only on the marker order aspect, is to predict the survival of patients in the test set within a specified short time. For the Cox PH model, the survival of patient $i$ in the test set is estimated by

\[ \hat S_i(t) = \hat S_0(t)^{\exp(x_i^\top \hat\beta)}, \]

where $\hat S_0$ is the estimated survival function of the baseline population ($x = 0$) obtained using the Breslow estimate [9]. For the CURE or the C-mix models, it is naturally estimated by combining the Kaplan-Meier estimators [20] of the low- and high-risk subgroups learned by the C-mix or CURE models, weighted by the estimated probabilities of belonging to each subgroup (patients with a high estimated probability of being at high risk are clustered in the high-risk subgroup, the others in the low-risk one). The corresponding estimated survival curves are given in Figure ?.

Figure 4: BRCA
Figure 5: GBM
Figure 6: KIRC

We observe that the subgroups obtained by the C-mix are more clearly separated in terms of survival than those obtained by the CURE model.
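To illustrate how such subgroup survival curves can be obtained, here is a sketch in Python with the lifelines package on toy data; the 0.5 threshold used to assign patients to a subgroup is our assumption, not a statement about the paper's code.

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(7)
y = rng.geometric(p=0.03, size=200)       # observed durations
delta = rng.integers(0, 2, size=200)      # censoring indicators
pi_high = rng.uniform(size=200)           # estimated probability of being at high risk

high = pi_high > 0.5                      # assumed clustering rule (0.5 threshold)
kmf_high, kmf_low = KaplanMeierFitter(), KaplanMeierFitter()
kmf_high.fit(y[high], event_observed=delta[high], label="high risk")
kmf_low.fit(y[~high], event_observed=delta[~high], label="low risk")

ax = kmf_high.plot_survival_function()    # two Kaplan-Meier curves on the same axes
kmf_low.plot_survival_function(ax=ax)
```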

For a given time, one can now use the estimated survival at that time for each model to predict whether or not each patient of the test set survives beyond it, resulting in a binary classification problem that we assess using the classical AUC score. By moving this time within the first years of follow-up, since it is the most relevant period for physicians in practice, one obtains the curves given in Figure ?.
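A minimal sketch of this evaluation in Python with scikit-learn, on toy data (in practice the score would be the model's estimated survival probability recomputed at each considered time):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(8)
y = rng.geometric(p=0.01, size=150)       # observed durations (days) on the test set
surv_hat = rng.uniform(size=150)          # stand-in for estimated survival probabilities

aucs = []
for eps in np.arange(30, 5 * 365, 30):    # e.g. every month over five years of follow-up
    label = y > eps                       # did the patient survive beyond time eps?
    if label.any() and not label.all():   # AUC needs both classes to be present
        aucs.append((eps, roc_auc_score(label, surv_hat)))
print(aucs[:5])
```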

Figure 7: BRCA
Figure 8: GBM
Figure 9: KIRC

Let us now focus on the runtime comparison between the models, given in Table ?. We choose the BRCA dataset to illustrate this point, since it is the largest one and consequently provides the clearest runtime differences.

Computing time comparison in seconds on the BRCA dataset, with the corresponding C-index in parentheses and the best result in bold in each case. These times concern the learning task for each model with the best hyper-parameter selected after the cross-validation procedure. It turns out that our method is by far the fastest, in addition to providing the best performances. In particular, the QNEM algorithm is faster than the R implementation glmnet.
          Model
d         C-mix            CURE             Cox PH
100       0.025 (0.792)    1.992 (0.764)    0.446 (0.705)
300       0.027 (0.782)    2.343 (0.753)    0.810 (0.723)
1000      0.139 (0.817)    12.067 (0.613)   2.145 (0.577)

We also notice that, despite using the same QNEM algorithm steps, our CURE model implementation is slower since convergence takes more time to be reached, as shown in Figure 10.

Figure 10: Convergence comparison between the C-mix and CURE models through the QNEM algorithm. The relative objective is here defined at iteration $l$ as $\big( \ell_n^{\text{pen}}(\theta^{(l)}) - \ell_n^{\text{pen}}(\hat\theta)\big) / \ell_n^{\text{pen}}(\hat\theta)$, where $\hat\theta$ is the parameter vector returned at the end of the QNEM algorithm, that is once convergence is reached. Note that both the iteration and relative objective axes are log-scaled for clarity. We observe that convergence for the C-mix model is dramatically faster than for the CURE model.

In the Appendices, the top 20 selected genes for each cancer type and for all models are presented. Literature on those genes is mined to estimate two simple scores that quantify how related they are, first to cancer in general, and second to cancer together with the survival aspect, according to scientific publications. It turns out that some genes have been widely studied in the literature (e.g. FLT3 for the GBM cancer), while for others, very few publications were retrieved (e.g. TRMT2B, still for the GBM cancer).

6 Concluding remarks

In this paper, a mixture model for censored durations (C-mix) has been introduced, and a new efficient estimation algorithm (QNEM) has been derived, which considers a penalization of the likelihood in order to perform covariate selection and to prevent overfitting. A strong improvement is provided over the CURE and Cox PH approaches (both penalized by the Elastic-Net), which are by far the most widely used for biomedical data analysis. More importantly, our method detects relevant subgroups of patients regarding their risk in a supervised learning procedure, and takes advantage of this identification to improve survival prediction over more standard methods. An extensive Monte Carlo simulation study has been carried out to evaluate the performance of the developed estimation procedure. It showed that our approach is robust to model misspecification. The proposed methodology has then been applied on three high-dimensional datasets. On these datasets, C-mix outperforms both Cox PH and CURE in terms of AUC($t$), C-index and survival prediction. Moreover, many gene expressions pinpointed by the feature selection aspect of our regularized method are relevant for medical interpretation (e.g. NFKBIA, LEF1, SUSD3 or FAIM3 for the BRCA cancer, see [38] or [26]), whilst others call for further investigation by the genetic research community. Finally, our analysis provides, as a by-product, a new robust implementation of CURE models in high dimension.

Software

All the methodology discussed in this paper is implemented in Python. The code is available from https://github.com/SimonBussy/C-mix in the form of annotated programs, together with a notebook tutorial.

Acknowledgements

The results shown in this paper are based upon data generated by the TCGA Research Network and freely available from http://cancergenome.nih.gov/. Conflict of Interest: None declared.


Appendices

A Numerical details

Let us first give some details about the starting point of Algorithm 1. For the coefficient vectors, we simply use the zero vector, and for the distribution parameters we fit a censored parametric mixture model on the durations (without covariates) with an EM algorithm.

Concerning the V-fold cross-validation procedure for tuning $\gamma$, the cross-validation metric is the C-index. Let us specify that we choose $\gamma$ as the largest value such that the error is within one standard error of the minimum, and that a grid search is performed during the cross-validation on an interval whose upper bound is computed in the following.

Let us consider the following convex minimization problem, resulting from Equation (8), at a given step $l$ of the algorithm:

Regarding the grid of candidate values for $\gamma$, we consider an interval whose upper bound is the smallest value at which all coefficients of $\beta$ are exactly zero. The KKT conditions [8] claim that

where the active set of the estimator is involved, together with the corresponding subgradient conditions. Then, using (10), one obtains

Hence, we choose the following upper bound for the grid search interval during the cross-validation procedure

B Proof of Theorem 1

Let us denote by $m$ the number of coordinates of $\theta$, so that one can write

We denote by $\bar\theta$ a cluster point of the sequence $(\theta^{(l)})_{l \ge 0}$ generated by the QNEM algorithm, and consider an epsilon-neighbourhood of $\bar\theta$. We want to prove that $\bar\theta$ is a stationary point of the non-differentiable function $\ell_n^{\text{pen}}$, which means [33]:

The proof is inspired by [5]. The conditional density of the complete data given the observed data can be written

Then, one has

where an auxiliary function has been introduced. The key argument relies on the following facts, which hold under Hypotheses (3) and (4):

  • the auxiliary function is continuous in both its arguments,

  • for any fixed value of its second argument (at the $l$-th M-step of the algorithm), it is convex in the first argument and strictly convex in each of its coordinates.

Let us consider an arbitrary direction; then the two previous equations yield

Hence, by Jensen’s inequality we get

and so is minimized for , then we have . It remains to prove that . Let us focus on the proof of the following expression

Denoting and from the definition of the QNEM algorithm, we first have

and second for all . Consequently, if converges to , one obtains by continuity taking the limit . Let us now suppose that does not converge to , so that does not converge to 0. Or equivalently: there exists a subsequence not converging to 0.

Then, denoting , we may assume that there exists such that by removing from the subsequence any terms for which . Let , so that belongs to a compact set and then converges to . Let us fix some , then . Moreover, lies on the segment joining and , and consequently belongs to since is convex. As is convex and minimizes this function over all values that differ from along the first coordinate, one has

We finally obtain