Subgroup analysis of treatment effects for misclassified biomarkers with time-to-event data


Fang Wan, Andrew C. Titman, Thomas F. Jaki
Abstract

Analysing subgroups defined by biomarkers is of increasing importance in clinical research. In many situations the biomarker is subject to misclassification error, meaning the subgroups are identified with imperfect sensitivity and specificity. In these cases it is inappropriate to assume a Cox proportional hazards model for the subgroup-specific treatment effects with respect to the diagnosed subgroups: even when proportional hazards holds for the true subgroups, the survival distributions of the diagnosed subgroups will not satisfy the proportional hazards assumption. This precludes simple adjustment procedures. Instead, we present a method based on formally modelling the data as a mixture of Cox models, using an EM algorithm for estimation. An estimate of the overall population treatment effect is obtained through the interpretation of the hazard ratio as a concordance odds. Profile likelihood is used to construct individual and simultaneous confidence intervals for the treatment effects. The resulting confidence intervals are shown in simulations to have close to nominal coverage for moderately large sample sizes, and the method is illustrated on data from a renal-cell cancer trial.

1 Introduction

There is increasing acknowledgement of the existence of patient subgroups within clinical research. While some treatments work well for all patients with the same disease, others have been shown to be effective only in subgroups of patients defined by a certain predictive biomarker [1, 2, 3, 4]. As a consequence, many clinical trials perform subgroup analyses to assess whether a treatment is beneficial for biomarker-positive or biomarker-negative patients, and many trial designs have been developed to account for these subgroups. Enrichment designs [5, 6] seek to identify the most promising (sub)group of patients during the study, while other designs optimize the cost-efficiency of trials via patient allocation with respect to biomarker status (subgroup membership) [7, 8] or use a biomarker-strategy design [9]. All of these designs assume that the biomarker used to define the subgroups is measured accurately. However, it is seldom possible to measure a biomarker with perfect diagnostic accuracy, meaning the observed subgroups are subject to misclassification error. Without taking the sensitivity and specificity of the biomarker into consideration, the resulting conclusions may be inaccurate [10, 11].

Existing methods that account for sensitivity and specificity (see [10, 11]) consider only normal and binary endpoints; time-to-event outcomes have not yet been addressed.

In this paper, we propose a method to obtain point estimates and confidence intervals for the treatment effects in biomarker-stratified subgroups with time-to-event data under a biomarker-by-treatment interaction design [12] (see Figure 1). Assume the total number of patients available to be enrolled into the trial is fixed at $n$. Patients are classified into two subgroups according to the observed status (positive or negative) of a specific biomarker. In each of the two subgroups, patients are randomized to either the treatment or control arm and are administered the experimental treatment or placebo/active control accordingly. The primary outcome, survival time subject to right censoring, is recorded for all enrolled patients.


Figure 1: A biomarker stratified design

The remainder of the article is organized as follows. In Section 2, the statistical model for misclassified biomarker subgroups is defined. Section 3 gives estimation procedures for the model parameters, measures of overall efficacy and the construction of confidence intervals. Section 4 presents simulation results assessing the performance of the estimator and confidence intervals. The method is illustrated on a data example relating to metastatic renal-cell cancer in Section 5. The article concludes with a discussion.

2 Statistical Model

Conditional on the true biomarker status, a proportional hazards model is assumed to hold. Specifically, the hazard at time $t$ for patient $i$ is taken as

$$\lambda_i(t) = \lambda_0(t)\exp(\beta_1 Z_i + \beta_2 B_i + \beta_3 Z_i B_i), \qquad (2.1)$$

where $\lambda_0(t)$ is an unspecified baseline hazard function, and $Z_i$ and $B_i$ are binary indicators of treatment and true biomarker status, respectively. Note that the biomarker status is 0 for the true negative subgroup and 1 for the true positive subgroup, and that this model takes the biomarker status to be measured without error. Under this model, the hazard ratios associated with the treatment are $e^{\beta_1}$ and $e^{\beta_1 + \beta_3}$ for patients in the biomarker negative and positive group, respectively.

When the true biomarker status cannot be observed, a diagnostic test with imperfect sensitivity and specificity has to be used. Let $D_i$ be a binary indicator of whether the $i$th patient tests positive for the biomarker. The marginal distribution of survival times among patients in each diagnosis group will then be a mixture of Cox models, corresponding to the models under true biomarker positive or negative status, with the mixing proportions determined by the positive-predictive value (PPV) and negative-predictive value (NPV) of the diagnostic test.

The PPV is given by

$$p_+ = P(B_i = 1 \mid D_i = 1) = \frac{\pi \gamma_{se}}{\pi \gamma_{se} + (1 - \pi)(1 - \gamma_{sp})} \qquad (2.2)$$

and the NPV by

$$p_- = P(B_i = 0 \mid D_i = 0) = \frac{(1 - \pi)\gamma_{sp}}{(1 - \pi)\gamma_{sp} + \pi(1 - \gamma_{se})}, \qquad (2.3)$$

where the sensitivity, $\gamma_{se}$, and the specificity, $\gamma_{sp}$, are assumed to be known and the prevalence of the biomarker, $\pi$, may either be considered known or estimated from the data.
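
As a quick numerical illustration, the following R snippet evaluates (2.2) and (2.3); the function and argument names are illustrative rather than taken from any package.

```r
# Positive- and negative-predictive values, equations (2.2)-(2.3):
# se = sensitivity, sp = specificity, prev = biomarker prevalence.
pred_values <- function(se, sp, prev) {
  ppv <- (prev * se) / (prev * se + (1 - prev) * (1 - sp))
  npv <- ((1 - prev) * sp) / ((1 - prev) * sp + prev * (1 - se))
  c(ppv = ppv, npv = npv)
}

pred_values(se = 0.95, sp = 0.90, prev = 0.3)
# ppv ~ 0.80, npv ~ 0.98 (the assay values assumed in Section 5)
```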

The survivor functions for patients observed to be positive and negative are then

$$S_+(t \mid Z_i = z) = p_+ \exp\{-\Lambda_0(t) e^{\beta_1 z + \beta_2 + \beta_3 z}\} + (1 - p_+) \exp\{-\Lambda_0(t) e^{\beta_1 z}\} \qquad (2.4)$$

and

$$S_-(t \mid Z_i = z) = (1 - p_-) \exp\{-\Lambda_0(t) e^{\beta_1 z + \beta_2 + \beta_3 z}\} + p_- \exp\{-\Lambda_0(t) e^{\beta_1 z}\}, \qquad (2.5)$$

respectively, where $\Lambda_0(t) = \int_0^t \lambda_0(u)\,du$ is the baseline cumulative hazard.

Note that unless there is either no treatment effect or the biomarker is observed without misclassification, proportional hazards will not hold with respect to the treatment $Z_i$ for either $S_+$ or $S_-$. It is therefore not possible to fit a Cox model to the observed data and apply some simple correction to adjust for misclassification error. Instead, a formal likelihood-based estimation procedure is used.
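
The departure from proportional hazards is easy to visualize. The R sketch below plots the log cumulative-hazard ratio between treatment arms in the observed-positive group under (2.4), assuming a unit-exponential baseline; under proportional hazards this curve would be flat. All parameter values are illustrative only.

```r
# Mixture survivor function (2.4) with Lambda0(t) = t (unit exponential).
surv_pos <- function(t, z, ppv, b1, b2, b3) {
  ppv * exp(-t * exp(b1 * z + b2 + b3 * z)) +
    (1 - ppv) * exp(-t * exp(b1 * z))
}

t <- seq(0.1, 5, by = 0.1)
lch <- function(z) log(-log(surv_pos(t, z, ppv = 0.8, b1 = -0.5, b2 = 1, b3 = 0.3)))
plot(t, lch(1) - lch(0), type = "l", xlab = "t",
     ylab = "log cumulative-hazard ratio (treated vs control)")
# A non-constant curve shows that the proportional hazards assumption
# fails within the observed-positive group.
```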

3 Estimation

Recently, Wu et al [13] proposed a Logistic-Cox mixture model to test for the existence of subgroups. In their model, a logistic regression determines the effect of observable covariates on the probability of membership of a mixture component, and within each mixture component the times-to-event follow a Cox model with different covariate effects.

The problem of correcting for misclassified biomarker status can be considered a special case of the framework of Wu et al, in which the mixing probabilities have a specific form that is fully determined by the prevalence, $\pi$, together with the sensitivity and specificity.

In order to estimate the model proposed in Section 2, a semi-parametric maximum likelihood approach is employed. A full likelihood for the data is constructed by making the standard assumption that the hazard function is piecewise constant between observed event times [14].

3.1 EM algorithm

Direct maximization of the likelihood is difficult or infeasible due to the large number of nuisance parameters associated with the increments of the baseline hazard. Instead, taking a similar approach to various previous authors [15, 16], an Expectation-Maximization (EM) algorithm is used. The true biomarker status is treated as missing data, such that the ‘M’-step of the algorithm involves fitting a weighted Cox model, where each patient has two sets of data corresponding to being truly biomarker positive or biomarker negative. The weights correspond to the conditional probability of being truly biomarker positive (or negative) given the current estimates of the parameters and the observed data (follow-up time, event indicator and diagnostic test result).

Let $T_i$, for $i = 1, \dots, n$, denote the follow-up time for patient $i$ and $\delta_i$ the corresponding event indicator. The likelihood contributions of the $i$th observation given true positive and true negative subgroup status are

$$L_i^{(1)} = \left\{\lambda_0(T_i) e^{\beta_1 Z_i + \beta_2 + \beta_3 Z_i}\right\}^{\delta_i} \exp\left\{-\Lambda_0(T_i) e^{\beta_1 Z_i + \beta_2 + \beta_3 Z_i}\right\}$$

and

$$L_i^{(0)} = \left\{\lambda_0(T_i) e^{\beta_1 Z_i}\right\}^{\delta_i} \exp\left\{-\Lambda_0(T_i) e^{\beta_1 Z_i}\right\},$$

where $\Lambda_0(t)$ denotes the cumulative baseline hazard.

The conditional weights are then given by

$$w_i = P(B_i = 1 \mid T_i, \delta_i, D_i) = \frac{p_i L_i^{(1)}}{p_i L_i^{(1)} + (1 - p_i) L_i^{(0)}},$$

where $p_i = p_+$ if $D_i = 1$ and $p_i = 1 - p_-$ if $D_i = 0$. Note that the weights depend on both $\beta = (\beta_1, \beta_2, \beta_3)$ and $\Lambda_0$. At each iteration, the M-step first updates the estimate of $\beta$ and then updates $\lambda_0$ and $\Lambda_0$ using Breslow's estimator for the baseline hazard from the weighted Cox model [14]. Let $t_{(j)}$ denote the $j$th ordered uncensored event time and $d_j$ the number of events at $t_{(j)}$. Then

$$\hat{\Lambda}_0(t) = \sum_{j : t_{(j)} \le t} \hat{\lambda}_{0j},$$

where

$$\hat{\lambda}_{0j} = \frac{d_j}{\sum_{i \in R(t_{(j)})} \left\{ w_i e^{\hat{\beta}_1 Z_i + \hat{\beta}_2 + \hat{\beta}_3 Z_i} + (1 - w_i) e^{\hat{\beta}_1 Z_i} \right\}}$$

and $R(t)$ denotes the risk set of patients at time $t$.

These new estimates of $\beta$ and $\Lambda_0$ are subsequently used to update the weights. When the prevalence parameter is treated as unknown, it is also updated at each iteration, with the updated value given by $\hat{\pi} = n^{-1} \sum_{i=1}^n w_i$. This also involves updating the values of $p_+$ and $p_-$ by plugging the new estimate of $\pi$ into (2.2) and (2.3).
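
A minimal R sketch of one EM iteration is given below, using the survival package. The data layout, the function name and the use of basehaz are illustrative assumptions, not code from the paper; note that $\lambda_0(T_i)^{\delta_i}$ cancels from the weight ratio, so only the cumulative hazard is needed in the E-step.

```r
library(survival)

# One EM iteration (sketch). dat has columns time, status, z (treatment),
# d (diagnostic test result) and w (current weight P(B = 1 | data)).
# p1 = PPV and p0 = 1 - NPV give the prior weight P(B = 1 | D).
em_step <- function(dat, p1, p0) {
  # M-step: duplicate each patient as truly positive (b = 1, weight w)
  # and truly negative (b = 0, weight 1 - w); fit a weighted Cox model.
  aug <- rbind(transform(dat, b = 1, wt = w),
               transform(dat, b = 0, wt = 1 - w))
  fit  <- coxph(Surv(time, status) ~ z + b + z:b, data = aug, weights = wt)
  beta <- coef(fit)

  # Breslow cumulative baseline hazard evaluated at each follow-up time
  H    <- basehaz(fit, centered = FALSE)
  Lam0 <- stepfun(H$time, c(0, H$hazard))(dat$time)

  # E-step: update the conditional weights; lambda0(T_i)^delta_i cancels.
  eta1 <- beta["z"] * dat$z + beta["b"] + beta["z:b"] * dat$z
  eta0 <- beta["z"] * dat$z
  L1   <- exp(dat$status * eta1 - Lam0 * exp(eta1))
  L0   <- exp(dat$status * eta0 - Lam0 * exp(eta0))
  pri  <- ifelse(dat$d == 1, p1, p0)
  dat$w <- pri * L1 / (pri * L1 + (1 - pri) * L0)
  list(dat = dat, beta = beta)
}
```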

The marginal, or observed, likelihood for the data is given by

$$L = \prod_{i=1}^n \left\{ P(B_i = 1 \mid D_i)\, L_i^{(1)} + P(B_i = 0 \mid D_i)\, L_i^{(0)} \right\}.$$

If the prevalence of disease, $\pi$, is known then the above likelihood can be expressed as

$$L(\beta, \Lambda_0) = \prod_{i : D_i = 1} \left\{ p_+ L_i^{(1)} + (1 - p_+) L_i^{(0)} \right\} \prod_{i : D_i = 0} \left\{ (1 - p_-) L_i^{(1)} + p_- L_i^{(0)} \right\}, \qquad (3.1)$$

which corresponds to the likelihood conditional on the observed diagnostic test results, $D_i$. In the case where $\pi$ is treated as unknown, we have

$$L(\beta, \Lambda_0, \pi) = \prod_{i=1}^n \left\{ \pi P(D_i \mid B_i = 1) L_i^{(1)} + (1 - \pi) P(D_i \mid B_i = 0) L_i^{(0)} \right\}, \qquad (3.2)$$

with the difference between (3.1) and (3.2) arising from the necessity to include terms relating to the probabilities of the observed values of $D_i$ in the latter case, where $P(D_i = 1 \mid B_i = 1) = \gamma_{se}$ and $P(D_i = 0 \mid B_i = 0) = \gamma_{sp}$.

3.2 Measures of overall efficacy

The use of hazard ratios for subgroup analysis of time-to-event data has been criticized because a constant hazard ratio does not exist in a mixture population; as a consequence, other methods based on median survival and parametric modelling have been proposed to obtain 'subgroup mixable' estimates [18].

However, the hazard ratio between two groups in a proportional hazards model can also be expressed as the concordance odds [19]. Specifically, if $T_0$ and $T_1$ are the survival times of two randomly chosen individuals from groups 0 and 1, and the hazard ratio of group 1 compared to group 0 is $\theta$, then $P(T_1 > T_0) = 1/(1 + \theta)$, or equivalently

$$\theta = \frac{P(T_1 < T_0)}{P(T_1 > T_0)}. \qquad (3.3)$$

The concordance odds has a clear clinical meaning and has the advantage that an estimate of the overall concordance odds in a subgroup model can be found as a function of just the individual subgroup concordance odds and the prevalence. It also has the advantages of not requiring either fully parametric estimation procedures, which may be less robust, or fully non-parametric procedures which will be less efficient.

Let $T_{(z)}$ represent the survival time of a random subject in treatment arm $z$ and $B$ represent the subgroup membership with $P(B = 1) = \pi$; then

$$P(T_{(1)} > T_{(0)}) = \sum_{b_1 = 0}^{1} \sum_{b_0 = 0}^{1} P(T_{(1)b_1} > T_{(0)b_0})\, P(B = b_1)\, P(B = b_0),$$

hence

$$P(T_{(1)} > T_{(0)}) = \pi^2 P(T_{(1)1} > T_{(0)1}) + \pi(1 - \pi)\left\{ P(T_{(1)1} > T_{(0)0}) + P(T_{(1)0} > T_{(0)1}) \right\} + (1 - \pi)^2 P(T_{(1)0} > T_{(0)0}),$$

where $T_{(z)b}$ is the survival time for a subject in treatment arm $z$ and subgroup $b$. Each of the probabilities on the right-hand side of the equation can be expressed in terms of the parameters of the model in (2.1). Following (3.3), we have

$$P(T_{(1)} > T_{(0)}) = \pi^2 \sigma\{-(\beta_1 + \beta_3)\} + \pi(1 - \pi)\left[ \sigma\{-(\beta_1 + \beta_2 + \beta_3)\} + \sigma\{-(\beta_1 - \beta_2)\} \right] + (1 - \pi)^2 \sigma(-\beta_1), \qquad (3.4)$$

where $\sigma(x) = (1 + e^{-x})^{-1}$ is the inverse logit function. An estimate of the overall effect of the treatment, expressed as concordance odds, is then given by $\hat{\theta}^* = (1 - \hat{q})/\hat{q}$, where $\hat{q}$ is obtained by plugging the estimates of $\beta$ and $\pi$ into (3.4).
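
In R this is a one-liner via plogis(), the inverse logit; the function below is an illustrative sketch of (3.4), not code from the paper.

```r
# Overall concordance odds from (3.4); plogis() is the inverse logit.
overall_co <- function(b1, b2, b3, prev) {
  q <- prev^2 * plogis(-(b1 + b3)) +
       prev * (1 - prev) * (plogis(-(b1 + b2 + b3)) + plogis(-(b1 - b2))) +
       (1 - prev)^2 * plogis(-b1)      # q = P(T(1) > T(0))
  (1 - q) / q
}

# Sanity check: with no biomarker effect the overall concordance odds
# reduces to the common hazard ratio exp(b1).
overall_co(b1 = -0.5, b2 = 0, b3 = 0, prev = 0.3)  # = exp(-0.5)
```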

A disadvantage of the concordance odds as a measure of treatment efficacy is that there is no guarantee that the overall efficacy measure lies between the two subgroup efficacy values. In fact, when the two subgroup concordance odds are equal, $\theta_0 = \theta_1 = \theta$, but $\beta_2 \neq 0$, the overall concordance odds will be closer to 1 than $\theta$. Nevertheless it is virtually impossible, in practice, for a contradictory result to occur, i.e. for there to be statistically significant benefits for both subgroups but a non-significant overall effect. Such a situation could only occur if $|\beta_2|$ is large compared to both $|\beta_1|$ and $|\beta_1 + \beta_3|$. Moreover it is impossible for the overall effect to be of a different sign to the two subgroup effects.

3.3 Construction of confidence intervals

A convenient approach to constructing asymptotic confidence intervals for individual parameters is based upon the profile likelihood ratio, which retains standard asymptotics even in the presence of a potentially infinite-dimensional nuisance parameter [20]. For instance, to obtain a confidence interval for the interaction, $\beta_3$, we use the fact that

$$2\left\{ \ell_p(\hat{\beta}_3) - \ell_p(\beta_3) \right\} \xrightarrow{d} \chi^2_1,$$

where $\ell_p$ denotes the profile log-likelihood, and hence take

$$\left\{ \beta_3 : 2\left( \ell_p(\hat{\beta}_3) - \ell_p(\beta_3) \right) \le \chi^2_{1, 1-\alpha} \right\}$$

as a $100(1-\alpha)\%$ confidence interval for $\beta_3$. It is straightforward to find the maximum profile likelihood estimates by using a modified EM algorithm in which, at each M-step, the fixed parameter, e.g. $\beta_3$, is treated as a fixed offset term in the weighted Cox model.
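
The sketch below illustrates the offset idea for profiling $\beta_3$. Here run_em() stands for a hypothetical wrapper around the EM algorithm of Section 3.1 that accepts a custom M-step and returns the converged observed log-likelihood; it is assumed rather than provided.

```r
library(survival)

# Profile log-likelihood for the interaction (sketch): hold b3 fixed on a
# grid and run the EM to convergence, with b3 entering the weighted Cox
# model as an offset. run_em() is a hypothetical EM driver.
profile_b3 <- function(b3_grid, dat, p1, p0) {
  sapply(b3_grid, function(b3) {
    mstep <- function(aug)
      coxph(Surv(time, status) ~ z + b + offset(b3 * z * b),
            data = aug, weights = wt)
    run_em(dat, p1, p0, mstep = mstep)$loglik
  })
}

# 95% interval: { b3 : 2 * (max(ll) - ll) <= qchisq(0.95, df = 1) }
```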

For the subgroup analysis it is also desirable to construct simultaneous confidence intervals for the estimated treatment effects in the biomarker positive and negative groups in order to control the familywise type I error rate. In the parametrization used in (2.1) this corresponds to simultaneous confidence intervals for $\beta_1$ and $\beta_1 + \beta_3$. In this case, the method proceeds by obtaining an estimate of the Hessian of the observed profile likelihood with respect to $(\beta_1, \beta_3)$. The method of Murphy and van der Vaart [21] is used to approximate the profile likelihood information; this approach has also been used in other contexts where estimation requires an EM algorithm [22]. The profile likelihood information is approximated by computing the profile likelihood at values about $\hat{\beta} = (\hat{\beta}_1, \hat{\beta}_3)$, perturbed by a suitably small value $\epsilon$ to provide a 'finite-differences' type approximation. Specifically, writing $e_k$ for the $k$th unit vector,

$$\hat{I}_{kk} = -\frac{\ell_p(\hat{\beta} + \epsilon e_k) - 2 \ell_p(\hat{\beta}) + \ell_p(\hat{\beta} - \epsilon e_k)}{\epsilon^2}$$

and

$$\hat{I}_{kl} = -\frac{\ell_p(\hat{\beta} + \epsilon e_k + \epsilon e_l) - \ell_p(\hat{\beta} + \epsilon e_k) - \ell_p(\hat{\beta} + \epsilon e_l) + \ell_p(\hat{\beta})}{\epsilon^2}, \qquad k \ne l.$$

The value of $\epsilon$ is primarily chosen to ensure that numerical instability in the converged values of the EM algorithm does not affect the estimate. Theoretically, the value of $\epsilon$ should decrease with increasing sample size, but a suitably small fixed value worked adequately in the examples considered in this paper, and the results were not particularly sensitive to the choice of $\epsilon$.
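
A generic finite-difference sketch of this approximation is below; pl() stands for an assumed function returning the profiled log-likelihood at theta = (beta1, beta3), maximizing over all remaining parameters.

```r
# Finite-difference approximation to the profile likelihood information.
# pl(theta) is an assumed function returning the profiled log-likelihood.
profile_info <- function(pl, theta_hat, eps = 0.01) {
  k   <- length(theta_hat)
  I   <- matrix(0, k, k)
  pl0 <- pl(theta_hat)
  for (i in seq_len(k)) {
    ei <- replace(numeric(k), i, eps)
    I[i, i] <- -(pl(theta_hat + ei) - 2 * pl0 + pl(theta_hat - ei)) / eps^2
    for (j in seq_len(i - 1)) {
      ej <- replace(numeric(k), j, eps)
      I[i, j] <- I[j, i] <- -(pl(theta_hat + ei + ej) - pl(theta_hat + ei) -
                                pl(theta_hat + ej) + pl0) / eps^2
    }
  }
  I
}
```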

By inverting the estimated information matrix $\hat{I}$, an estimate of the variance-covariance matrix of $(\hat{\beta}_1, \hat{\beta}_1 + \hat{\beta}_3)$,

$$\hat{\Sigma} = \begin{pmatrix} \hat{\sigma}_1^2 & \rho \hat{\sigma}_1 \hat{\sigma}_2 \\ \rho \hat{\sigma}_1 \hat{\sigma}_2 & \hat{\sigma}_2^2 \end{pmatrix},$$

is given by the delta method as

$$\hat{\Sigma} = A \hat{I}^{-1} A^\top, \qquad A = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}.$$

Simultaneous confidence intervals are then constructed of the form

$$\hat{\beta}_1 \pm c_\alpha \hat{\sigma}_1 \qquad \text{and} \qquad (\hat{\beta}_1 + \hat{\beta}_3) \pm c_\alpha \hat{\sigma}_2,$$

where $c_\alpha$ is the scaling factor chosen such that, for a bivariate normal random variable $(X_1, X_2)$ with unit variances and correlation $\rho$, $P(|X_1| \le c_\alpha, |X_2| \le c_\alpha) = 1 - \alpha$. This value can be found straightforwardly using the qmvnorm function in the mvtnorm package in R [23, 24]. The procedure can be extended to provide simultaneous confidence intervals for the two subgroup effects and the overall log concordance odds by computing the variance-covariance matrix of $g(\beta_1, \beta_3, \pi) = (\beta_1, \beta_1 + \beta_3, \log \theta^*)$ using the delta method. Obtaining an analytical form for the first derivatives of $\log \theta^*$ can be cumbersome, but a numerical approximation can be used instead.
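
For example, $c_\alpha$ can be computed as follows (illustrative usage of qmvnorm, with an assumed correlation):

```r
library(mvtnorm)

# Critical value c such that a bivariate normal with unit variances and
# correlation rho lies in the square [-c, c]^2 with probability 1 - alpha.
simult_crit <- function(rho, level = 0.95) {
  qmvnorm(level, tail = "both.tails",
          corr = matrix(c(1, rho, rho, 1), 2, 2))$quantile
}

simult_crit(rho = 0.5)
# Lies between the unadjusted qnorm(0.975) and the Bonferroni value
# qnorm(1 - 0.05/4), reflecting the positive correlation between the
# two subgroup effect estimates.
```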

3.4 Missing biomarker status

In some trials, only a subset of patients may have had their biomarker status measured. If it can be assumed that the missing diagnostic tests are missing at random, then the survivor function for such patients is given by

$$S_m(t \mid Z_i = z) = \pi \exp\{-\Lambda_0(t) e^{\beta_1 z + \beta_2 + \beta_3 z}\} + (1 - \pi) \exp\{-\Lambda_0(t) e^{\beta_1 z}\}.$$

Such patients can easily be accommodated within the EM algorithm proposed in Section 3.1 by a simple modification of their conditional weights. Specifically, the weight for a patient with a missing diagnostic test is taken as

$$w_i = \frac{\pi L_i^{(1)}}{\pi L_i^{(1)} + (1 - \pi) L_i^{(0)}}.$$

Similarly, the marginal likelihood contribution of these patients, regardless of whether $\pi$ is taken as known or to be estimated, is simply given by

$$\pi L_i^{(1)} + (1 - \pi) L_i^{(0)}.$$
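
In the notation of the EM sketch in Section 3.1, this amounts to one extra branch when forming the prior weight (again a sketch; prev denotes the current prevalence estimate):

```r
# Prior weight P(B = 1 | D): the prevalence for a missing test (NA),
# otherwise the PPV (p1) or one minus the NPV (p0) as before.
pri <- ifelse(is.na(dat$d), prev, ifelse(dat$d == 1, p1, p0))
```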

4 Simulations

To investigate the finite sample properties of the proposed estimator, data sets of varying sizes and levels of biomarker diagnostic accuracy are simulated. The underlying survival hazards are assumed to follow the model in (2.1), with a decreasing Weibull baseline hazard.

Three scenarios are considered for the treatment effects. In the first, $\beta_1 = -0.5$ and $\beta_3 = 0.3$, meaning the treatment is beneficial for both biomarker groups but the effect is smaller for those who are biomarker positive, corresponding to hazard ratios (HR) of 0.61 and 0.82 in the negative and positive groups. In the second, $\beta_1 = 0.1$ and $\beta_3 = -0.7$, corresponding to a stronger interaction effect where the treatment is beneficial for the biomarker positive group but slightly harmful for the negative group (HRs of 0.55 and 1.11). Finally, the third scenario, $\beta_1 = 0$ and $\beta_3 = 0$, corresponds to a situation where the treatment has no effect in either biomarker group.

Censoring is assumed to be independent and uniformly distributed between 5 and 25, which results in an overall censoring rate of around 25%. The prevalence of a true positive biomarker status, $\pi$, is taken to be 0.3 and treated as unknown in the estimation procedure. The effect of assuming rather than estimating the prevalence is negligible in these simulations, where the assumed prevalence is accurate; an assumed prevalence far from the true value, however, would be expected to have an impact. The sensitivity and specificity of the diagnostic test and the sample size per randomization group are varied across the simulation scenarios. The results from 5000 replications of each scenario are presented in Tables 1, 2 and 3. The parameter estimates have reasonably low levels of bias in all scenarios considered. As would be expected, the standard deviation of the estimates increases as the diagnostic accuracy decreases. In the scenarios considered, since the prevalence is below 0.5, imperfect specificity has a greater impact than imperfect sensitivity. The standard deviation of the estimate of $\beta_3$ is around 75% higher when the sensitivity and specificity are both 0.8 compared to the case of perfect diagnostic accuracy. In the first scenario, where the interaction effect is relatively modest, the power to detect the interaction term is low throughout, but substantially lower when there is diagnostic error: for instance, with 500 patients per randomization group, the power reduces from 0.43 with perfect diagnostic accuracy to 0.17 with sensitivity and specificity both 0.8. A similar pattern is observed in the second scenario, where the interaction effect is stronger. In the scenario with no interaction effect, the empirical Type I error of the test of interaction is close to 5% for all configurations, with a slight tendency to be anti-conservative at the smaller sample size and when misclassification rates are higher.
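
A data-generating sketch in this spirit is shown below. The Weibull shape 0.8, the value of $\beta_2$, and the use of simple (unstratified) randomization are illustrative assumptions; the paper's exact choices are not reproduced here.

```r
# Simulate one trial under model (2.1) with baseline cumulative hazard
# Lambda0(t) = t^shape (shape < 1 gives a decreasing hazard).
sim_trial <- function(n_group, b1, b3, b2 = 1, prev = 0.3,
                      se = 0.9, sp = 0.9, shape = 0.8) {
  n <- 2 * n_group
  z <- rep(0:1, each = n_group)            # treatment arm
  b <- rbinom(n, 1, prev)                  # true biomarker status
  d <- ifelse(b == 1, rbinom(n, 1, se),    # misclassified diagnosis
              rbinom(n, 1, 1 - sp))
  risk <- exp(b1 * z + b2 * b + b3 * z * b)
  t_event <- (-log(runif(n)) / risk)^(1 / shape)  # invert risk * Lambda0(t)
  cens <- runif(n, 5, 25)                  # independent U(5, 25) censoring
  data.frame(time   = pmin(t_event, cens),
             status = as.integer(t_event <= cens),
             z = z, d = d)
}

dat <- sim_trial(n_group = 500, b1 = -0.5, b3 = 0.3)  # mild interaction scenario
```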

n     (Sens, Spec)   Bias β1   Bias β2   Bias β3    SD β1    SD β2    SD β3    Simult. cov.   Power
100   (1, 1)          0.0191    0.7232   -0.6650    0.2153   0.2634   0.3847   0.9472         0.1226
100   (1, 0.8)       -0.4598    0.9980   -2.0488    0.2416   0.3514   0.5154   0.9614         0.0902
100   (0.8, 1)       -0.6302    1.1681   -1.1618    0.2299   0.3017   0.4473   0.9512         0.1128
100   (0.9, 0.9)     -0.6628    1.1261   -1.2436    0.2383   0.3366   0.4960   0.9622         0.0912
100   (0.8, 0.8)     -0.4748    0.0643   -1.7780    0.2685   0.4767   0.6887   0.9596         0.0814
500   (1, 1)          0.1274    0.2206   -0.2761    0.0965   0.1157   0.1670   0.9506         0.4286
500   (1, 0.8)       -0.1734   -0.1977   -0.0315    0.1077   0.1583   0.2280   0.9546         0.2490
500   (0.8, 1)       -0.3087   -0.1782    0.4508    0.1012   0.1333   0.1948   0.9522         0.3514
500   (0.9, 0.9)     -0.2142   -0.3322    0.1361    0.1052   0.1490   0.2172   0.9520         0.2834
500   (0.8, 0.8)     -0.1233   -0.4452   -0.3492    0.1218   0.2051   0.2949   0.9578         0.1678
Table 1: Bias and standard deviation (SD) of parameter estimates, empirical coverage of simultaneous (Simult.) nominal 95% confidence intervals for $(\beta_1, \beta_1 + \beta_3)$, and empirical power of the likelihood ratio test of the interaction term in the mild interaction scenario ($\beta_1 = -0.5$, $\beta_3 = 0.3$); n denotes the number of patients per randomization group.
n     (Sens, Spec)   Bias β1   Bias β2   Bias β3    SD β1    SD β2    SD β3    Simult. cov.   Power
100   (1, 1)          0.1509    0.1085   -0.5872    0.2034   0.2581   0.3966   0.9468         0.4562
100   (1, 0.8)       -0.6065   -0.5878    1.0292    0.2210   0.3430   0.5144   0.9570         0.2968
100   (0.8, 1)        0.0270    1.2441   -1.6333    0.2132   0.3004   0.4587   0.9516         0.3682
100   (0.9, 0.9)     -0.4779   -0.8028   -0.1945    0.2241   0.3650   0.5453   0.9578         0.2812
100   (0.8, 0.8)     -1.5560    0.6694   -1.3342    0.253    0.4714   0.8217   0.9502         0.2004
500   (1, 1)          0.1167    0.1671   -0.5071    0.0900   0.1148   0.1705   0.9522         0.9878
500   (1, 0.8)        0.1796    0.3993   -0.3036    0.0967   0.1480   0.2173   0.9566         0.9042
500   (0.8, 1)        0.1194    0.1749   -0.1344    0.0926   0.1306   0.1975   0.9536         0.9452
500   (0.9, 0.9)      0.1950    0.4669   -0.4286    0.0981   0.1563   0.2282   0.9556         0.8772
500   (0.8, 0.8)     -0.0108    0.6640   -0.0728    0.1126   0.2010   0.2959   0.9600         0.6752
Table 2: Bias and standard deviation (SD) of parameter estimates, empirical coverage of simultaneous (Simult.) nominal 95% confidence intervals for $(\beta_1, \beta_1 + \beta_3)$, and empirical power of the likelihood ratio test of the interaction term in the strong interaction scenario ($\beta_1 = 0.1$, $\beta_3 = -0.7$).
n     (Sens, Spec)   Bias β1   Bias β2   Bias β3    SD β1    SD β2    SD β3    Simult. cov.   Type I err
100   (1, 1)          0.1389    0.0680    0.5389    0.2012   0.2629   0.3678   0.9508         0.0496
100   (1, 0.8)       -0.1232    0.5505    0.1944    0.222    0.3356   0.4798   0.9580         0.0588
100   (0.8, 1)       -0.0146    0.2379    1.2609    0.2155   0.3109   0.4396   0.9492         0.0620
100   (0.9, 0.9)     -0.0436   -0.8576    1.3769    0.2257   0.3644   0.5110   0.9590         0.0588
100   (0.8, 0.8)     -0.3072    0.6792    0.1749    0.2541   0.4698   0.7010   0.9608         0.0662
500   (1, 1)          0.1800    0.4851   -0.4689    0.0899   0.1171   0.1657   0.9436         0.0572
500   (1, 0.8)        0.2906    0.3950   -0.4087    0.0971   0.1480   0.2067   0.9528         0.0482
500   (0.8, 1)        0.1894    0.1761   -0.0657    0.0928   0.1307   0.1848   0.9526         0.0508
500   (0.9, 0.9)      0.3246    0.4625   -0.5343    0.0994   0.1563   0.2196   0.9556         0.0472
500   (0.8, 0.8)      0.3481    0.6671   -0.630     0.1128   0.2011   0.2832   0.9622         0.0476
Table 3: Bias and standard deviation (SD) of parameter estimates, empirical coverage of simultaneous (Simult.) nominal 95% confidence intervals for $(\beta_1, \beta_1 + \beta_3)$, and empirical Type I error of the likelihood ratio test of the interaction term in the null scenario ($\beta_1 = \beta_3 = 0$).

The simultaneous confidence intervals for $(\beta_1, \beta_1 + \beta_3)$ have coverage close to the nominal 95% level in all cases, with a tendency to be slightly conservative.

5 Example: Pazopanib for renal-cell cancer

As an illustrative example of the impact of accounting for misclassification of biomarkers in a survival study, data from a Phase III trial of patients with metastatic renal-cell cancer are analysed. The trial involved 343 patients, 225 of whom were randomized to treatment with Pazopanib and the remaining 118 to placebo. In addition, patients were classified by level of interleukin 6 (IL-6) into 'low' or 'high' groups. Interest lies in determining whether Pazopanib is an effective treatment for either or both groups of patients. In the original analysis by Tran et al [25], it was assumed that the assay used to determine the level of IL-6 had 100% diagnostic sensitivity and specificity.

Here, the data are re-analysed considering the possibility of misclassification of IL-6 status. The individual level data were reconstructed from the Kaplan-Meier estimates provided in Tran et al [25] using the method of Guyot et al [26]. Following Liu et al [10], it is assumed that the assay has 95% sensitivity and 90% specificity to distinguish high IL-6 from low.

Table 4 compares the results of an analysis assuming no misclassification with estimates from the proposed method. The effect of adjusting for misclassification is to increase the estimated interaction effect from -0.53 to -0.72, which also leads to the interaction being considered significant at the 5% level ($p = 0.04$).

                       Original analysis                    Misclassification corrected
Parameter              Estimate   95% CI          p-value   Estimate   95% CI           p-value
Pazopanib (β1)         -0.15      (-0.58, 0.27)   0.48      -0.12      (-0.57, 0.33)    0.58
High IL-6 (β2)          1.18      (0.73, 1.62)    <0.001     1.50      (0.96, 2.10)     <0.001
Interaction (β3)       -0.53      (-1.08, 0.03)   0.06      -0.72      (-1.40, -0.05)   0.04
Prevalence (π)          -          -              -          0.47      (0.41, 0.53)     -
Table 4: Comparison of estimates from the original Cox model analysis assuming no biomarker misclassification and from the model assuming 95% sensitivity and 90% specificity to detect high IL-6.

Table 5 gives the estimates and simultaneous 95% confidence intervals for the concordance odds of Pazopanib for low and high IL-6 patients. For both the original and misclassification-corrected analyses, the confidence interval for low IL-6 includes 1, implying no demonstrable treatment effect, whilst the confidence interval for high IL-6 is entirely below 1, indicating a treatment benefit.

             Original analysis       Misclassification corrected
Group        CO     95% CI           CO     95% CI
Low IL-6     0.86   (0.52, 1.41)     0.88   (0.52, 1.50)
High IL-6    0.51   (0.33, 0.77)     0.43   (0.25, 0.75)
All          0.70   (0.51, 0.95)     0.67   (0.50, 0.92)
Table 5: Simultaneous 95% confidence intervals for the effect of Pazopanib on overall survival for low IL-6 patients, high IL-6 patients and all patients; CO = concordance odds.

6 Discussion

In this paper, we investigate subgroup analysis for time-to-event responses in biomarker-stratified subgroups with misclassified biomarkers using a proportional hazards model. Point estimation and the construction of (simultaneous) confidence intervals for the treatment effects in biomarker subgroups, in the form of log-hazard ratios, are provided. It is shown by simulation that the bias of the estimators and the coverage probabilities of the simultaneous confidence intervals are acceptable in all considered scenarios.

It is also apparent from the simulation results that the power to detect a subgroup effect of treatment is diminished in the presence of misclassification. Further work could develop sample size formulae that allow survival trials to be adequately powered for subgroup analysis in the presence of biomarker misclassification.

The interpretation of a hazard ratio as the concordance odds allows an overall treatment effect estimate to be computed in subgroup analyses of time-to-event data. While the focus of this paper has been cases with misclassification of the biomarker status, the use of concordance odds can also be applied in the simpler case where the biomarker status is perfectly observed.

7 Acknowledgements

This work is independent research arising in part from Prof Jaki's Senior Research Fellowship (NIHR-SRF-2015-08-001) supported by the National Institute for Health Research. Funding for this work was also provided by the Medical Research Council (MR/M005755/1). The views expressed in this publication are those of the authors and not necessarily those of the NHS, the National Institute for Health Research, or the Department of Health.

References

  • [1] Sacks FM, Pfeffer MA, Moye LA, Rouleau JL, Rutherford JD, Cole TG, Braunwald E. The effect of Pravastatin on coronary events after myocardial infarction in patients with average cholesterol levels. New England Journal of Medicine. 1996. 335: 1001-1009. DOI: 10.1056/NEJM199610033351401.
  • [2] Jackson RD, LaCroix AZ, Gass M, Wallace RB, Robbins J, Lewis CE, Barad D. Calcium plus vitamin D supplementation and the risk of fractures. New England Journal of Medicine. 2006. 354: 669-683. DOI: 10.1056/NEJMoa055218.
  • [3] Jimeno A, Messersmith WA, Hirsch FR, Franklin WA, Eckhardt SG. KRAS mutations and sensitivity to epidermal growth factor receptor inhibitors in colorectal cancer: practical application of patient selection. Journal of Clinical Oncology. 2009. 27(7): 1130-1136.
  • [4] Peeters M, Douillard JY, Van Cutsem E, Siena S, Zhang K, Williams R, Wiezorek J. Mutant KRAS codon 12 and 13 alleles in patients with metastatic colorectal cancer: assessment as prognostic and predictive biomarkers of response to panitumumab. Journal of Clinical Oncology. 2013. 31(6): 759-765.
  • [5] Freidlin B, Korn EL. Biomarker enrichment strategies: matching trial design to biomarker credentials. Nature Reviews Clinical Oncology. 2014. 11(2): 81-90. DOI: 10.1038/nrclinonc.2013.218.
  • [6] Magnusson BP, Turnbull BW. Group sequential enrichment design incorporating subgroup selection. Statistics in Medicine. 2013. 32(16): 2695-2714. DOI: 10.1002/sim.5738.
  • [7] Wason JMS, Abraham JE, Baird RD, Gournaris I, Vallier AL, Brenton JD, Earl HM, Mander AP. A Bayesian adaptive design for biomarker trials with linked treatments. British Journal of Cancer. 2015. 113: 699-705. DOI: 10.1038/bjc.2015.278.
  • [8] Mehta C, Schafer H, Daniel H, Irle S. Biomarker driven population enrichment for adaptive oncology trials with time to event endpoints. Statistics in Medicine. 2014. 33(26): 4515-4531. DOI: 10.1002/sim.6272.
  • [9] Kunz C, Jaki T, Stallard N. An alternative method to analyse the biomarker-strategy design. Submitted.
  • [10] Liu C, Liu A, Hu J, Yuan V, Halabi S. Adjusting for misclassification in a stratified biomarker clinical trial. Statistics in Medicine. 2014. 33: 3100-3113.
  • [11] Wan F, Kunz C, Jaki T. Confidence regions for treatment effects in subgroups in biomarker stratified designs. Submitted.
  • [12] Gosho M, Nagashima K, Sato Y. Study designs and statistical analyses for biomarker research. Sensors. 2012. 12: 8966-8986. DOI: 10.3390/s120708966.
  • [13] Wu RF, Zheng M, Yu W. Subgroup analysis with time-to-event data under a Logistic-Cox mixture model. Scandinavian Journal of Statistics. 2016. DOI: 10.1111/sjos.12213.
  • [14] Breslow NE. Discussion of the paper by D.R. Cox. Journal of the Royal Statistical Society, Series B. 1972. 34: 216-217.
  • [15] Zhong M, Sen PK, Cai J. Cox regression model with mismeasured covariates or missing covariate data. ASA Biometrics Section Proceedings. 1996. 323-328.
  • [16] Martinussen T. Cox regression with incomplete covariate measurements using the EM-algorithm. Scandinavian Journal of Statistics. 1999. 26: 479-491.
  • [17] Bilmes JA. A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models. International Computer Science Institute; Technical Report TR-97-021. 1998.
  • [18] Ding Y, Lin H-M, Hsu JC. Subgroup mixable inference on treatment efficacy in mixture populations, with an application to time-to-event outcomes. Statistics in Medicine. 2015. 35: 1580-1594. DOI: 10.1002/sim.6822.
  • [19] Schemper M, Wakounig S, Heinze G. The estimation of average hazard ratios by weighted Cox regression. Statistics in Medicine. 2009. 28: 2473-2489. DOI: 10.1002/sim.3623.
  • [20] Murphy SA, van der Vaart AW. Semiparametric likelihood ratio inference. Annals of Statistics. 1997. 25: 1471-1509.
  • [21] Murphy SA, van der Vaart AW. Observed information in semiparametric models. Bernoulli. 1999. 5: 381-412.
  • [22] Xu C, Baines PD, Wang JL. Standard error estimation using the EM algorithm for the joint modeling of survival and longitudinal data. Biostatistics. 2014. 15: 731-744.
  • [23] Genz A, Bretz F, Miwa T, Mi X, Leisch F, Scheipl F, Hothorn T. mvtnorm: Multivariate Normal and t Distributions. R package version 0.9-9997. 2014. http://CRAN.R-project.org/package=mvtnorm
  • [24] Genz A, Bretz F. Computation of Multivariate Normal and t Probabilities. Lecture Notes in Statistics 195. 2009. Springer-Verlag, Heidelberg.
  • [25] Tran HT, Liu Y, Zurita AJ, Lin Y, Baker-Neblett KL, Martin AM, Figlin RA, Hutson TE, Sternberg CN, Amado RG, Pandite LN. Prognostic or predictive plasma cytokines and angiogenic factors for patients treated with pazopanib for metastatic renal-cell cancer: a retrospective analysis of phase 2 and phase 3 trials. The Lancet Oncology. 2012. 13: 827-837.
  • [26] Guyot P, Ades A, Ouwens MJ, Welton NJ. Enhanced secondary analysis of survival data: reconstructing the data from published Kaplan-Meier survival curves. BMC Medical Research Methodology. 2012. 12: 9.