Valid and Approximately Valid Confidence Intervals for Current Status Data
SUPPLEMENTARY MATERIALS (Valid and Approximately Valid Confidence Intervals for Current Status Data)
Abstract
We introduce a new framework for creating pointwise confidence intervals for the distribution of event times for current status data. Existing methods are based on asymptotics. Our framework is based on binomial properties and motivates confidence intervals that are very simple to apply and are valid, i.e., guarantee nominal coverage. Although these confidence intervals are necessarily conservative for small sample sizes, asymptotically their coverage rate approaches the nominal one. This binomial framework also motivates approximately valid confidence intervals, and simulations show that these approximate intervals generally have coverage rates closer to the nominal level with shorter length than existing intervals, including the likelihood ratio-based confidence interval. Unlike previous asymptotic methods that require different asymptotic distributions for continuous or grid-based assessment, the binomial framework can be applied to either type of assessment distribution.
Supported by the National Institute of Allergy and Infectious Diseases.
MSC subject classifications: Primary 62G20; secondary 62N01.
Keywords: guaranteed coverage; Clopper-Pearson interval; mid P-value; binomial properties; asymptotic coverage; nonparametric maximum likelihood estimation (NPMLE); smoothed maximum likelihood estimation (SMLE).
1 INTRODUCTION
This paper is concerned with finding pointwise confidence intervals on the event time distribution, F, for current status data. In current status data, the event time is not observed, but we only know one assessment time for each individual and whether the event for that individual has occurred by that time or not. This type of data appears in many animal studies, cross-sectional studies and quantal bioassay studies. For example, consider a lung cancer study in mice. In order to determine if the cancer has developed, the mice must be sacrificed. So with each mouse, we only know if the cancer event has occurred by the time of sacrifice or not. Another example is a cross-sectional study of women to determine the distribution for age at onset of menopause. At her age at the survey time, each woman will have either reached menopause or not. A third example concerns quantal bioassay studies where we can assume a monotonic dose response and we expose n animals to respective doses d_{1},d_{2},\ldots,d_{n}, and see if they live or die at that dose. Here dose acts as the “time” variable. Throughout the paper we assume that the assessment time for each individual is independent of the event. So for example, this assumption would be violated in the first example if we partially based the time of mouse sacrifice on the apparent health of the mouse.
Many papers (see below) have developed pointwise confidence intervals (CI) for F, but as far as we are aware, no one has studied valid CIs, ones that guarantee nominal coverage. In this article, we introduce new pointwise CIs (valid CIs and approximately valid CIs) for F and study their asymptotic properties. The valid confidence interval has coverage rates greater than or equal to the nominal rate, and its coverage rates asymptotically approach the nominal rate if certain conditions are satisfied. The approximate confidence interval does not guarantee the nominal rate, but its coverage rate will generally be closer to the nominal rate.
The nonparametric maximum likelihood estimate (NPMLE) of F, say \hat{F}_{n}, is relatively straightforward to calculate. [10] show this and also introduce the limiting distribution of \hat{F}_{n}-F in the current status model, when G, the distribution of the observed assessment times, is continuous. When the assessments are independent of events, the limiting distribution at a fixed time point t is
n^{1/3}\left\{\hat{F}_{n}(t)-F(t)\right\}\overset{d}{\to}\left[\frac{4f(t)F(t)\{1-F(t)\}}{g(t)}\right]^{1/3}\mathbb{Z}\equiv\mathscr{C}\mathbb{Z}  (1.1) 
where f\geq 0 is the derivative of F; g\geq 0 is the derivative of G; \mathbb{Z}\equiv\text{argmin}(W(t)+t^{2}) and W is two-sided Brownian motion starting from 0. If \mathscr{C} were known then a 100(1-\alpha)\% Wald-based CI for F(t) would be given by
\left[\hat{F}_{n}(t)-n^{-1/3}\mathscr{C}\mathbb{Z}_{(1-\alpha/2)},\hat{F}_{n}(t)+n^{-1/3}\mathscr{C}\mathbb{Z}_{(1-\alpha/2)}\right]  (1.2) 
where \mathbb{Z}_{(1-\alpha/2)} is the 100(1-\alpha/2)th quantile of the limiting random variable \mathbb{Z}. [11] showed how to compute the quantiles of \mathbb{Z}, but \mathscr{C} contains the unknown parameters, F, f and g. The distribution F can be estimated with the NPMLE, but f and g are more difficult to estimate. They are usually estimated using kernel methods [see 2, 5], although parametric methods have also been proposed [see 2]. [5] show that this method can be improved by using transformations. Despite the improvement, the coverage can be very poor at the ends of the distribution and with smaller sample sizes. For example, [5] show situations with simulated coverage rates of the transformed Wald-based CIs of less than 80% for nominal 95% confidence intervals with sample sizes as large as 100.
A generally better method is to use a likelihood ratio-based test (LRT) for F(t). [1] introduced the LRT for F in current status data, derived its limiting distribution under continuous G, and then developed the CIs by inverting a series of point null hypothesis tests. Like the Wald-based CI (expression 1.2), the LRT CI has a nonstandard asymptotic distribution, except this distribution does not depend on unknown parameters. Thus, the 95% confidence interval only requires the 95th percentile of that distribution. The LRT method only needs the NPMLE and restricted NPMLEs of F(t). Unfortunately, just as with the Wald-based CI, the LRT CI can have lower coverage rates than the nominal rate at the edges of F when the sample size is small. [2] show through simulations that the LRT CIs perform better than the untransformed Wald-based CIs. Although [5] did not calculate LRT CIs for their simulations, they show that the transformed Wald-based CIs can have performance close to the LRT CIs in the case studied in [2]. Because of this we use the LRT CIs as a benchmark.
[3] introduced three score statistics for testing the hypothesis that H_{o}:F(t)=\theta_{o} assuming continuous failure and assessment distributions. They showed that the asymptotic distribution for all three score statistics is the same, but is different from those of the Wald statistics and LRT statistics. Simulations showed that the Wald tests were generally less powerful than the score tests and LRT statistics, and one version of the score test may have more power than the LRT in some situations. Despite these promising simulation results, as far as we are aware, the full development of the confidence intervals from the score tests and other systematic exploration of the properties of those scorebased confidence intervals have not been done. We will not discuss scorebased confidence intervals further.
[23] considered the case where the examination times lie on a grid and multiple subjects can share the same examination time. They discovered some interesting asymptotics based on defining the distance between grid points as \delta(n)=cn^{-\gamma}, which changes with sample size n. The asymptotic distribution of the NPMLE converges to one of two distributions depending on whether \gamma<1/3 or \gamma>1/3, and has different behavior at the boundary. Furthermore, they developed an adaptive inference for F(t) which does not require the information about \gamma. However, this method is restricted to the specific case of equally spaced grid points, so will not be discussed further.
The nonparametric bootstrap approach on the NPMLE has similar coverage problems as the transformed Wald-based method at the edges of the distribution [see 5, Table 1]. A subsampling approach to the problem has been explored, but it can have very poor coverage in certain situations [see 2, Table 3].
Finally, there has been some recent theoretical work on smoothed maximum likelihood estimation assuming continuous assessment. [25] suggested two alternative estimators of F for current status data: maximum smoothed likelihood estimation (MSLE) and smoothed maximum likelihood estimation (SMLE). [25] derived the asymptotically optimal bandwidth with respect to mean squared error (MSE), but that bandwidth depends on unknown nuisance parameters. [26, Section 9.5] showed how to construct an SMLE-based CI for F in current status data. They generated bootstrap samples with replacement from the original sample, then computed the bootstrap (1-\alpha) intervals. However, in this method, it is difficult to estimate the actual bias term sufficiently accurately. Without the actual asymptotic bias term, the coverage rate may be lower than the nominal rate. We explored using this method [see supplemental article 15, Figure S.2], but it is difficult to automatically choose the bandwidth, and the coverage was not good. [12] showed that with certain regularity conditions, an empirical likelihood-based method can be used on a smoothed survival estimate for current status data.
We propose a new framework for current status CIs based on binomial properties, and introduce both valid and approximately valid CIs within that framework. The valid CIs are applicable to both discrete and continuous distributions of G with no distributional assumptions on F. The valid CIs guarantee coverage, at the cost of greater CI length. In the continuous case, the valid 100(1-\alpha)\% CI for F(t) amounts to using the m assessment times just before and just after t (if they exist), counting the number of times the event occurs before each of those m assessments, and using those counts out of m from the valid lower or upper binomial confidence limits as the CI on F(t). We show that in the continuous case under some regularity conditions, those valid CIs are asymptotically accurate if and only if m\rightarrow\infty and m/n^{2/3}\rightarrow 0 as n\rightarrow\infty. Additionally, we show that we can get close to minimal widths when m=n^{2/3}. If F can be assumed smooth, then several approximate CIs are proposed that require estimates of the nuisance parameters (F(t), f(t) and g(t)). The best of the approximate CIs has generally better coverage with shorter length intervals than the likelihood ratio-based CIs.
The rest of this article is organized as follows. In Section 2, we introduce a class of valid CIs, and show a member of this class with asymptotically minimum length of the CI. Because those asymptotically minimum length CIs depend on unknown nuisance parameters, we perform calculations showing that a simple approximation depending only on sample size is close to the asymptotically minimum length CI in a variety of settings. In Section 3, we introduce approximate CIs based on the binomial framework (ABF CIs), and show conditions to asymptotically approach the nominal coverage. Since these ABF CIs may not be monotonic in t, we suggest adjustments for monotonicity. In Section 4 we perform simulations of three different scenarios, comparing three different types of CIs: the valid CI, ABF CIs, and the LRT CI. Additionally, we perform extensive and systematic simulations comparing the LRT CI and the mid-P ABF CI. In Section 5 we apply these methods to hepatitis A data from Bulgaria ([14]). The conclusions are in Section 6, and all proofs are in the appendix.
2 VALID CONFIDENCE INTERVALS
2.1 General Class of Intervals
We first define a class of valid confidence intervals, and later consider subsets within that class with additional desirable properties besides validity.
Suppose the n event times are independent identically distributed (iid) from distribution F, the assessment times are iid from distribution G, and the assessments are independent of the event times. We index the assessments so they are ordered, writing them as C_{1}\leq C_{2}\leq\cdots\leq C_{n}, and we let T_{1},\cdots,T_{n} be the associated unobserved event times. Let D_{i}=1 if T_{i}\leq C_{i} and 0 otherwise, and we only observe C_{i} and D_{i}. The problem is to find a confidence interval for F(t) for fixed t.
Our strategy is to use the monotonicity of F and the fact that given C_{i}, the D_{i} are independent Bernoulli with parameter F(C_{i}). For a<b, let N(a,b) be the number of assessment times in the interval [a,b], and Y(a,b) be the number of deaths occurring by those N(a,b) assessment times:
N(a,b)=\sum_{i:C_{i}\in[a,b]}1\ \text{and}\ Y(a,b)=\sum_{i:C_{i}\in[a,b]}D_{i}. 
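In code these counts are immediate; a minimal sketch (the function name is ours, not from the paper):

```python
import numpy as np

def counts(C, D, a, b):
    """Return N(a, b), the number of assessment times C_i in [a, b],
    and Y(a, b), the number of events observed by those assessments."""
    C, D = np.asarray(C), np.asarray(D)
    in_window = (C >= a) & (C <= b)
    return int(in_window.sum()), int(D[in_window].sum())
```

For instance, with assessment times (0.2, 0.5, 0.9) and indicators (0, 1, 1), `counts(..., 0.3, 1.0)` returns (2, 2): two assessments fall in [0.3, 1.0], and the event had occurred by both.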
We will use the assessment times in the interval [a,t] to find a lower confidence limit for F(t).
We relate Y(a,t) to B, a binomial random variable with parameters \{N(a,t),F(t)\}. Write the 100(1-\alpha)\% valid central confidence interval on F(t) for B given fixed N=N(a,t) as \left(L\{1-\alpha/2;B,N\},U\{1-\alpha/2;B,N\}\right). This is the usual valid (often called “exact”) binomial confidence interval developed in [6], and is the union of two one-sided 1-\alpha/2 intervals so that the CI is central, meaning it is two-sided and the error is bounded by \alpha/2 on each side. Exploiting the connection between the binomial and beta distributions, we can express these limits as follows. Let Be(q;v,w) be the qth quantile from a beta distribution with nonnegative shape parameters v and w, and set Be\{q;0,w\}=\text{point mass at 0}; Be\{q;v,0\}=\text{point mass at 1}. The lower and upper limits for one-sided 100q\% confidence intervals are given by
\begin{array}[]{l l}L\{q;B,N\}&=Be\{1-q;B,N-B+1\};\\ U\{q;B,N\}&=Be\{q;B+1,N-B\}.\end{array}  (2.1) 
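In software, (2.1) maps directly onto beta quantile functions; a sketch using scipy, where the branches encode the point-mass conventions for B=0 and B=N (function names are ours):

```python
from scipy.stats import beta

def cp_lower(q, B, N):
    """L{q; B, N} = Be(1-q; B, N-B+1); point mass at 0 when B = 0."""
    return 0.0 if B == 0 else float(beta.ppf(1 - q, B, N - B + 1))

def cp_upper(q, B, N):
    """U{q; B, N} = Be(q; B+1, N-B); point mass at 1 when B = N."""
    return 1.0 if B == N else float(beta.ppf(q, B + 1, N - B))
```

A central 95% interval takes q = 0.975 in both limits; e.g., for B = 5 successes out of N = 20, the Clopper-Pearson interval is roughly (0.087, 0.491).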
Although Y(a,t) is not binomial, it relates to B in the following manner. Let {\bf C}\equiv\left[C_{1},\ldots,C_{n}\right] be the ordered assessment vector. It is intuitively clear that for fixed a\leq t,
\text{Pr}[Y(a,t)\leq y\mid{\bf C}]\geq\text{Pr}[B\leq y\mid{\bf C}]~{}\text{for all $y$} 
with probability 1, since for C_{i}\in[a,t], F(a)\leq F(C_{i})\leq F(t). This implies that
\begin{array}[]{l}\text{Pr}[L\{q;Y(a,t),N(a,t)\}\leq F(t)\mid{\bf C}]\geq q\mbox{ a.s.\ \ and\ \ }\\ \text{Pr}[L\{q;Y(a,t),N(a,t)\}\leq F(t)]\geq q,\end{array}  (2.2) 
where 0\leq q\leq 1. The proof is given in Appendix A.1. Note that (2.2) shows that the coverage probability is at least q, whether we think conditionally given the assessment times, or unconditionally by averaging over those assessment times. Conditional coverage is important if, for example, one focuses on values of t for which there are multiple assessment times nearby. In that case it would no longer suffice to have the right coverage averaged over the assessment time distribution.
Analogously, for fixed b\geq t, we use the N(t,b) assessment times in [t,b] and the Y(t,b) deaths by those times to form an upper confidence limit for F(t):
\begin{array}[]{l}\text{Pr}[F(t)\leq U\{q;Y(t,b),N(t,b)\}\mid{\bf C}]\geq q\mbox{ a.s.\ \ and\ \ }\\ \text{Pr}[F(t)\leq U\{q;Y(t,b),N(t,b)\}]\geq q.\end{array}  (2.3) 
Then for a\leq t\leq b, a valid central 100(1-\alpha)\% confidence interval about F(t) can be formed by combining the one-sided limits from inequalities (2.2) and (2.3):
\left[L\{1-\alpha/2;Y(a,t),N(a,t)\},\ U\{1-\alpha/2;Y(t,b),N(t,b)\}\right]  (2.4) 
We plot an example of one of these intervals in Figure 1.
We mentioned earlier that one might focus on certain time points after observing the Cs. In other words, the a and b might not be constants fixed in advance, but functions of the Cs. The following theorem shows that this does not cause a problem.
Theorem 1a. Let a and b be known functions of only t, n, and \mathbb{C}=(C_{1},\ldots,C_{n}), such that a(t,n,\mathbb{C})\leq t\leq b(t,n,\mathbb{C}) with probability 1. Let Y_{a}^{t}=Y(a(t,n,\mathbb{C}),t), Y_{t}^{b}=Y(t,b(t,n,\mathbb{C})), and similarly for N^{t}_{a} and N^{b}_{t}. If L=L\left\{1-\alpha/2;Y^{t}_{a},N^{t}_{a}\right\} and U=U\left\{1-\alpha/2;Y^{b}_{t},N^{b}_{t}\right\}, then
\text{Pr}\left[L\leq F(t)\leq U\mid{\bf C}\right]\geq 1-\alpha{\rm\ a.s.\ and\ \ }\text{Pr}\left[L\leq F(t)\leq U\right]\geq 1-\alpha,  (2.5) 
for any n, and additionally (L,U) is central.
The proof is given in Appendix A.2.
Before discussing specific forms of the functions a and b, note that it is possible that L>U, where L and U are the lower and upper limits given in Theorem 1a. When L>U, we have the freedom to redefine the limits to whatever we want without violating validity. The redefined limits can even depend on Y_{a}^{t} and Y_{t}^{b}.
Theorem 1b. Let L\equiv L\left\{1-\alpha/2;Y^{t}_{a},N^{t}_{a}\right\} and U\equiv U\left\{1-\alpha/2;Y^{b}_{t},N^{b}_{t}\right\} as defined in Theorem 1a, with a(t,n,\mathbb{C})\leq t\leq b(t,n,\mathbb{C}). Let
\begin{array}[]{l}L^{*}=\left\{\begin{array}[]{l l}L&\quad\text{if}~{}L\leq U\\ L_{M}&\quad\text{if}~{}L>U;\end{array}\right.\\ U^{*}=\left\{\begin{array}[]{l l}U&\quad\text{if}~{}L\leq U\\ U_{M}&\quad\text{if}~{}L>U.\end{array}\right.\end{array} 
where L_{M}\leq U_{M} can be any statistics. Then
\text{Pr}\{L^{*}\leq F(t)\leq U^{*}\}\geq 1-\alpha  (2.6) 
for any n.
The result follows immediately from the fact that whenever the original interval [L,U] covers F(t), so does the modified interval [L^{*},U^{*}].
Although setting L_{M}=U_{M}=\hat{F}(t) will give the minimum length CI for L_{M} and U_{M}, given a(t,n,\mathbb{C}) and b(t,n,\mathbb{C}), these intervals are not practical since in the continuous case, \text{Pr}\left\{F(t)\in\left[L_{M},U_{M}\right]\mid L_{M}=U_{M}\right\}=0, and most users of the confidence interval would not accept L_{M}=U_{M} when L>U because of that zero conditional coverage.
In the next section we discuss choosing from among those intervals of Theorem 1a or 1b, some of which can have very wide expected length.
2.2 Asymptotic Properties for Nominal Coverage
In this section, we first introduce a specific form of the functions a and b, and then discuss conditions so that the asymptotic coverage goes to the nominal level.
For a fixed m given n, we consider two random points a and b defined by the C_{i}. Starting at point t, go backward in time to the mth closest C_{i} less than or equal to t (or backward to 0 if there are fewer than m points less than or equal to t). Denote that point as a=a(t,n,\mathbb{C}). Similarly, go forward in time from t to find the mth closest C_{i} greater than or equal to t (or \infty if fewer than m points are greater than or equal to t). More formally,
\begin{array}[]{l}a\equiv a(t,n,\mathbb{C})=\left\{\begin{array}[]{l l}0&\quad\text{if $C_{m}>t$ }\\ C_{l-m+1}&\quad\text{if $C_{m}\leq t$};\end{array}\right.\\ b\equiv b(t,n,\mathbb{C})=\left\{\begin{array}[]{l l}\infty&\quad\text{if $C_{n-m+1}<t$ }\\ C_{g+m-1}&\quad\text{if $C_{n-m+1}\geq t$,}\end{array}\right.\end{array}  (2.7) 
where m=m(n) is a positive integer function of n only, l=\text{max}\{i:C_{i}\leq t\}, and g=\text{min}\{i:C_{i}\geq t\}. For convenience, let C_{0}=0 and C_{n+1}=\infty. If there are ties at a(t,n,\mathbb{C}) or b(t,n,\mathbb{C}) then we include all ties. Therefore there are at least m observations within [a(t,n,\mathbb{C}),t] and within [t,b(t,n,\mathbb{C})] when C_{m}\leq t\leq C_{n-m+1}. If G is continuous and a(t,n,\mathbb{C})\neq 0, then N\{a(t,n,\mathbb{C}),t\}=m with probability 1. Analogously, if G is continuous and b(t,n,\mathbb{C})\neq\infty, then N\{t,b(t,n,\mathbb{C})\}=m with probability 1.
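Combining (2.7) with the limits in (2.1) gives the valid interval. The following is a minimal sketch under our reading of the construction (function names are ours; ties are ignored, and when fewer than m assessments lie on a side we fall back to all available assessments on that side, corresponding to a=0 or b=\infty):

```python
import numpy as np
from scipy.stats import beta

def valid_ci(C, D, t, m, alpha=0.05):
    """Valid central CI for F(t): Clopper-Pearson lower limit from the
    (up to) m assessments at or below t, upper limit from the (up to) m
    assessments at or above t."""
    C, D = np.asarray(C), np.asarray(D)
    order = np.argsort(C)
    C, D = C[order], D[order]
    q = 1 - alpha / 2
    below = np.where(C <= t)[0]
    above = np.where(C >= t)[0]
    # Lower limit from the m assessments just below t (all of them if a = 0).
    idx_lo = below[-m:]
    N_lo, Y_lo = idx_lo.size, int(D[idx_lo].sum())
    lower = 0.0 if Y_lo == 0 else float(beta.ppf(1 - q, Y_lo, N_lo - Y_lo + 1))
    # Upper limit from the m assessments just above t (all of them if b = infinity).
    idx_hi = above[:m]
    N_hi, Y_hi = idx_hi.size, int(D[idx_hi].sum())
    upper = 1.0 if Y_hi == N_hi else float(beta.ppf(q, Y_hi + 1, N_hi - Y_hi))
    return lower, upper
```

Theorem 1b's adjustment for the (possible) case L > U is not included; the sketch simply returns the raw one-sided limits.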
As was described in Section 2.1, it is possible that L>U. If L>U, then we use L_{M}\equiv L\{1-\alpha/2;Y_{a^{*}}^{b^{*}},N_{a^{*}}^{b^{*}}\} and U_{M}\equiv U\{1-\alpha/2;Y_{a^{*}}^{b^{*}},N_{a^{*}}^{b^{*}}\}, where a^{*}\neq b^{*} are specified in [15, Section S.1]. Essentially, instead of using separate proportions of m observations less than t and m observations greater than t to form the lower and upper confidence limits, we use a single proportion combining m/2 observations less than, and m/2 observations greater than, t.
With these specific forms of the functions a and b, the confidence interval constructed with any m is valid. An additional desirable property of the function m(n) is that the resulting confidence intervals are asymptotically accurate, meaning that the coverage probability converges to the desired level. This property holds on the support of G for discrete G if m(n)\equiv m_{n} satisfies the following two conditions:
Properties 1.
(i) \lim_{n\rightarrow\infty}(m_{n}/n)=0;
(ii) \lim_{n\rightarrow\infty}m_{n}=\infty.
Theorem 2.1. If Properties 1 are satisfied and G is discrete, then the coverage rates of both (2.5) and (2.6) converge to 1-\alpha as n\rightarrow\infty at each atom of G.
The proof is given in Appendix A.3.
Hereafter, we assume that G is continuous. If C_{m_{n}}\leq t then
N_{a}^{t}=m_{n}\mbox{ and }Y^{t}_{a}=\sum^{l_{n}}_{i=l_{n}-m_{n}+1}D_{i},  (2.8) 
and if a(t,n,\mathbb{C})\leq C_{i}\leq t then D_{i}\mid C_{i}\sim\text{Bernoulli}\left\{F(C_{i})\right\} where F\{a(t,n,\mathbb{C})\}\leq F(C_{i})\leq F(t) for i=l_{n}-m_{n}+1,\ldots,l_{n}. We have noted previously that the conditional distribution of Y_{a}^{t} given {\bf C} is stochastically between a binomial (m_{n},F(a_{n})) and a binomial (m_{n},F(t)). If a_{n}\rightarrow t fast enough, we should be able to approximate both of these binomial distributions with normals with means m_{n}F(t) and variances m_{n}F(t)\{1-F(t)\}, which would guarantee asymptotic accuracy of the lower confidence limit, and similarly for the upper limit. We seek conditions under which this holds.
Let W_{m_{n}} and W_{m_{n}}^{\prime} denote binomials with parameters (m_{n},F(t)) and (m_{n},F(a_{n})), respectively. By the central limit theorem, Z_{n}=\{W_{m_{n}}-m_{n}F(t)\}/[m_{n}F(t)\{1-F(t)\}]^{1/2} converges in distribution to a standard normal. Call Z_{n}^{\prime}=\{W_{m_{n}}^{\prime}-m_{n}F(t)\}/[m_{n}F(t)\{1-F(t)\}]^{1/2} the lower standardized deviate. Similarly, if W_{m_{n}}^{\prime\prime} denotes a binomial with parameters (m_{n},F(b_{n})), call Z_{n}^{\prime\prime}=\{W_{m_{n}}^{\prime\prime}-m_{n}F(t)\}/[m_{n}F(t)\{1-F(t)\}]^{1/2} the upper standardized deviate. We want to know when the lower and upper standardized deviates are both asymptotically standard normal, which would guarantee asymptotic accuracy.
Theorem 2.2. Assume that F and G are continuous and, at the point t, F^{\prime}(t)=f(t)>0 and G^{\prime}(t)=g(t)>0. Assume further that
m_{n}\rightarrow\infty and m_{n}/n\rightarrow 0. Then the lower and upper standardized deviates converge in distribution to standard normals (which guarantees that the conditional and unconditional coverage both tend to 1-\alpha as n\rightarrow\infty) if and only if m_{n}/n^{2/3}\rightarrow 0 as n\rightarrow\infty.
The proof is given in Appendix A.4.
2.3 Choice of m_{n}
Although Theorems 2.1 and 2.2 give conditions on m_{n} that lead to asymptotically accurate coverage, there is quite a range of functions m(n) that do so. Further, since Theorems 1a and 1b show guaranteed nominal coverage for a wider class of intervals, within this wider class the only possible coverage error is overcoverage (i.e., conservatively higher than nominal coverage). So practically speaking, for choosing m_{n} we focus in this section not on coverage, but on minimum expected length.
We motivate a simple m(n) function using three steps. First, we motivate more accurate binomial approximations for Y_{a}^{t} and Y_{t}^{b} than those used in Theorem 2.2. These approximations are based on n, F(t) and r(t)=f(t)/g(t) only. Second, through numerical search we find the m(n) that gives the lowest expected length 95% confidence interval for several n, F(t) and r(t) values. Third, we show that m(n)=n^{2/3} is close to that minimum when r(t)=1, and the expected length is close to the minimum expected length for 1/2<r(t)<2.
In Theorem 2.2, we approximated the distribution of Y_{a}^{t} by a binomial with parameters (m_{n},F(t)). The following heuristic argument gives a more accurate approximation to the distribution function for Y_{a}^{t}.
Assuming G is continuous, for \mathbb{C} fixed and C^{*}\sim G, m_{n} is approximately n\times\text{Pr}\{a(t,n,\mathbb{C})\leq C^{*}\leq t\}=n[G(t)-G\{a(t,n,\mathbb{C})\}]. Also, G\{a(t,n,\mathbb{C})\}=G(t)-\{t-a(t,n,\mathbb{C})\}g(t)+o\{t-a(t,n,\mathbb{C})\} as a(t,n,\mathbb{C})\rightarrow t, because G^{\prime}(t)=g(t). Using the approximation G(t)-G\{a(t,n,\mathbb{C})\}\approx g(t)\{t-a(t,n,\mathbb{C})\}, we can write m_{n}\approx ng(t)\{t-a(t,n,\mathbb{C})\}, implying that
\{t-a(t,n,\mathbb{C})\}\approx\frac{m_{n}}{ng(t)}.  (2.9) 
Likewise, F\{a(t,n,\mathbb{C})\}=F(t)-\{t-a(t,n,\mathbb{C})\}f(t)+o\{t-a(t,n,\mathbb{C})\} as a(t,n,\mathbb{C})\rightarrow t, so
F\{a(t,n,\mathbb{C})\}\approx F(t)-\{t-a(t,n,\mathbb{C})\}f(t)  (2.10) 
for large n. Using (2.9) and (2.10), we can approximate F\{a(t,n,\mathbb{C})\} for large n as follows:
F\{a(t,n,\mathbb{C})\}\approx F(t)-\left\{\frac{m_{n}}{ng(t)}\right\}f(t).  (2.11) 
Analogously, with approximations similar to those used for (2.11), F\{b(t,n,\mathbb{C})\} can be written as
F\{b(t,n,\mathbb{C})\}\approx F(t)+\left\{\frac{m_{n}}{ng(t)}\right\}f(t).  (2.12) 
Then we approximate the distribution of Y_{a}^{t} as
Y_{a}^{t}~{}\dot{\sim}~{}\text{Binomial}(m_{n},F^{-}_{t})  (2.13) 
where F^{-}_{t} is the midpoint of F\{a(t,n,\mathbb{C})\} and F(t):
F^{-}_{t}=\frac{F\{a(t,n,\mathbb{C})\}+F(t)}{2}.  (2.14) 
Using (2.11) and (2.14), we can express (2.13) as
Y_{a}^{t}~{}\dot{\sim}~{}\text{Binomial}\left[m_{n},F(t)-\left\{\frac{m_{n}}{2ng(t)}\right\}f(t)\right].  (2.15) 
Analogously,
Y_{t}^{b}~{}\dot{\sim}~{}\text{Binomial}\left[m_{n},F(t)+\left\{\frac{m_{n}}{2ng(t)}\right\}f(t)\right].  (2.16) 
In Table 1 we give m_{min}, the m_{n} that gives the minimum expected 95% confidence interval length using the Clopper-Pearson intervals associated with approximations (2.15) and (2.16) for different values of F(t), r(t) and n. Given m_{n}, we calculate the expected 95% confidence interval length by subtracting the weighted average of the m_{n}+1 possible values for L(0.975;Y_{a}^{t},m_{n}) from the weighted average of those for U(0.975;Y_{t}^{b},m_{n}), weighted by the appropriate binomial probabilities (see expressions 2.15 and 2.16). We find m_{min} by exhaustive computer search. We see that for r(t)=1, \lceil n^{2/3}\rceil appears to be a good estimator of m_{min}. For r(t)\neq 1, \lceil n^{2/3}\rceil is a much poorer estimator of m_{min}. However, even then, the expected 95% confidence interval length is not inflated much by using the suboptimal \lceil n^{2/3}\rceil for m_{n}. Table 1 gives the ratio of the expected 95% confidence interval length when m=\lceil n^{2/3}\rceil over the expected length at m_{n}=m_{min}, and we see that the expected inflation is 8% or less for all the situations explored (r(t)=1/2, r(t)=1 and r(t)=2). The same calculations for 90% confidence intervals have similar m_{min} and expected inflation of 12% or less for the same explored situations (not shown).
Table 1. m_{min}, with (in parentheses) the ratio of the expected 95% CI length at m=\lceil n^{2/3}\rceil to that at m=m_{min}.

                              r(t)=1                  r(t)=0.5                r(t)=2
n      \lceil n^{2/3}\rceil   F(t)=0.5    F(t)=0.75   F(t)=0.5    F(t)=0.75   F(t)=0.5    F(t)=0.75
100      22                   22 (1.00)   21 (1.00)    31 (1.04)   31 (1.03)   13 (1.06)   13 (1.05)
200      35                   35 (1.00)   33 (1.00)    53 (1.05)   52 (1.03)   22 (1.06)   21 (1.06)
500      63                   65 (1.00)   60 (1.00)   103 (1.06)   95 (1.04)   41 (1.05)   38 (1.07)
1000    100                  103 (1.00)   95 (1.00)   162 (1.06)  149 (1.04)   65 (1.05)   60 (1.07)
2000    159                  162 (1.00)  149 (1.00)   257 (1.05)  235 (1.04)  103 (1.05)   95 (1.07)
5000    293                  297 (1.00)  272 (1.00)   470 (1.05)  430 (1.04)  188 (1.05)  173 (1.08)
10000   465                  470 (1.00)  430 (1.00)   743 (1.05)  678 (1.03)  297 (1.05)  272 (1.08)
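The expected-length computation behind Table 1 can be reproduced approximately. Under (2.15) and (2.16), the expected length at a given m is a finite binomial-weighted sum of Clopper-Pearson limits, which can be minimized by brute force. A sketch (function names ours; the approximate success probabilities are clipped to [0,1]):

```python
import numpy as np
from scipy.stats import beta, binom

def expected_length(m, n, F, r, q=0.975):
    """Approximate expected 95% CI length under (2.15)-(2.16), with r = f(t)/g(t)."""
    shift = m * r / (2.0 * n)                 # {m_n / (2 n g(t))} f(t)
    p_lo = float(np.clip(F - shift, 0.0, 1.0))
    p_hi = float(np.clip(F + shift, 0.0, 1.0))
    y = np.arange(m + 1)
    lower = np.zeros(m + 1)                   # L(q; y, m), with L = 0 at y = 0
    lower[1:] = beta.ppf(1 - q, y[1:], m - y[1:] + 1)
    upper = np.ones(m + 1)                    # U(q; y, m), with U = 1 at y = m
    upper[:-1] = beta.ppf(q, y[:-1] + 1, m - y[:-1])
    return np.sum(binom.pmf(y, m, p_hi) * upper) - np.sum(binom.pmf(y, m, p_lo) * lower)

def m_min(n, F, r, m_grid):
    """Exhaustive search for the m minimizing the approximate expected length."""
    return min(m_grid, key=lambda m: expected_length(m, n, F, r))
```

For n=100, F(t)=0.5, r(t)=1, the search should land near the tabulated m_{min}=22, though the exact value depends on details of the approximation.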
To explore how well the approximation does in picking m_{min}, we simulated 10,000 confidence intervals at F(t)=.5 when F=G with n=1{,}000. For the simulations, F and G were both exponential with mean 1; because only the relationship between F and G matters in the continuous case, the same results would hold whenever F and G are the same continuous distribution. Figure 2 shows the average length of the 10,000 simulated confidence intervals for various values of m (m=1,\ldots,400). We see that the value of m_{n} giving the minimum simulated confidence interval length (m_{n}=95 for the 90% confidence level, and m_{n}=99 for the 95% level) is close to n^{2/3}=100, and that the expected length does not change much around that value.
3 CONFIDENCE INTERVALS WITH COVERAGE CLOSE TO NOMINAL
3.1 Notation and a Theorem
Up to now, we have considered a conservative valid method which, for continuous assessments away from the boundaries (see (2.7) and the discussion afterward), uses the m_{n} observations closest to t and less than or equal to t for the lower limit, and analogously uses the m_{n} observations closest to t and greater than or equal to t for the upper limit. In this section, we construct less conservative confidence intervals by relaxing the requirement of guaranteed coverage. Instead of using separate proportions to construct the lower and upper limits, we use a single proportion to construct both. We call the resulting intervals approximate binomial framework (ABF) CIs. The ABF CIs are shorter, but no longer guarantee coverage.
In this section, assume G is continuous. We now develop intervals with approximate coverage, using observations on both sides of t to create both confidence limits at once. Away from the boundaries, we let m_{n}=m(n) be a positive even number of observations used to calculate the limits, taking the closest m_{n}/2 observations to t on either side: m_{n}/2 observations less than or equal to t and m_{n}/2 greater than t. Close to the boundaries, we modify m_{n} to keep equal numbers on both sides of t, using m^{\dagger}_{n} observations, where
m^{\dagger}_{n}=2\left[\text{min}\{\lceil m_{n}/2\rceil,l_{n},(n-g_{n}+1)\}\right], 
where l_{n}=\text{max}\{i:C_{i}\leq t\}, g_{n}=\text{min}\{i:C_{i}\geq t\}, and m^{\dagger}_{n}\leq n. Define
a^{\dagger}\equiv a^{\dagger}(t,n,\mathbb{C})=C_{l_{n}-(m^{\dagger}_{n}/2)+1},~{}\text{and}~{}b^{\dagger}\equiv b^{\dagger}(t,n,\mathbb{C})=C_{g_{n}+(m^{\dagger}_{n}/2)-1}. 
Then if G is continuous, there are (m^{\dagger}_{n}/2) observations in [a^{\dagger},t] and [t,b^{\dagger}]. The value m^{\dagger}_{n} may be very small at t where F(t)\approx 0 or 1. We adjust the confidence interval for this in Section 3.3.
Analogously to before, we use the form of the Clopper-Pearson two-sided 100(1-\alpha)\% confidence interval functions (i.e., L and U as in (2.1)), except now we use Y_{a^{\dagger}}^{b^{\dagger}} and N_{a^{\dagger}}^{b^{\dagger}}=m^{\dagger}_{n}. Specifically, the 100(1-\alpha)\% interval is
\left[L\{1-\alpha/2;Y^{b^{\dagger}}_{a^{\dagger}},m^{\dagger}_{n}\},\ U\{1-\alpha/2;Y^{b^{\dagger}}_{a^{\dagger}},m^{\dagger}_{n}\}\right].  (3.1) 
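Away from the boundaries, the ABF interval pools the m/2 closest assessments on each side of t and applies the Clopper-Pearson limits to the single pooled count. A minimal sketch (function name ours; assumes no assessment falls exactly at t, which holds with probability 1 for continuous G):

```python
import numpy as np
from scipy.stats import beta

def abf_ci(C, D, t, m, alpha=0.05):
    """ABF CI: pool the m/2 closest assessments on each side of t and
    apply Clopper-Pearson limits to the single pooled proportion."""
    C, D = np.asarray(C), np.asarray(D)
    order = np.argsort(C)
    C, D = C[order], D[order]
    below = np.where(C <= t)[0]
    above = np.where(C >= t)[0]
    half = min(m // 2, below.size, above.size)   # shrink near the boundaries
    idx = np.concatenate([below[-half:], above[:half]])
    N, Y = idx.size, int(D[idx].sum())
    q = 1 - alpha / 2
    lower = 0.0 if Y == 0 else float(beta.ppf(1 - q, Y, N - Y + 1))
    upper = 1.0 if Y == N else float(beta.ppf(q, Y + 1, N - Y))
    return lower, upper
```

Compared with the valid CI of Section 2, both limits here come from the same pooled count Y out of N = m observations, which is what shortens the interval at the cost of guaranteed coverage.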
Theorem 3. Under the conditions of Theorem 2.2, the conditional (on {\bf C}) and unconditional coverage rates of (3.1) tend to 1-\alpha as n\rightarrow\infty.
The proof is not given since it is very similar to that of Theorem 2.2.
Another adjustment is the mid-P ABF CI, defined by replacing the functions L and U in equation (3.1) with the mid-P binomial confidence limit functions, L_{mid} and U_{mid}, defined in [15, Section S.2]. Since the mid-P ABF CIs and the usual ABF CIs are asymptotically equivalent, we work with L and U in the following sections.
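The precise limit functions L_{mid} and U_{mid} are defined in [15, Section S.2] and are not reproduced here. For orientation only, one common construction of mid-P binomial limits (not necessarily identical to the supplement's) solves for the p at which the mid-P tail probability equals \alpha/2:

```python
from scipy.optimize import brentq
from scipy.stats import binom

def midp_limits(y, n, alpha=0.05):
    """One standard construction of mid-P binomial limits: solve for p
    where the mid-P one-sided tail probability equals alpha/2."""
    def upper_tail(p):  # P(Y > y) + 0.5 P(Y = y) under Binomial(n, p)
        return 1 - binom.cdf(y, n, p) + 0.5 * binom.pmf(y, n, p)
    def lower_tail(p):  # P(Y < y) + 0.5 P(Y = y) under Binomial(n, p)
        return binom.cdf(y - 1, n, p) + 0.5 * binom.pmf(y, n, p)
    eps = 1e-10
    lower = 0.0 if y == 0 else brentq(lambda p: upper_tail(p) - alpha / 2, eps, 1 - eps)
    upper = 1.0 if y == n else brentq(lambda p: lower_tail(p) - alpha / 2, eps, 1 - eps)
    return lower, upper
```

Mid-P limits are slightly narrower than the corresponding Clopper-Pearson limits because half of the probability of the observed count is moved out of the tail.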
3.2 Optimal m^{\dagger}_{n} Observations Surrounding t for the Confidence Interval about F(t)
In Section 2.3 we found the m_{n} that minimized the expected length based on a linear approximation to the F(C_{i}) values close to F(t) (see (2.15) and (2.16)). Because of the validity requirement, the expected proportion of events was biased, since E(Y_{a}^{t}/m_{n})<F(t) and E(Y_{t}^{b}/m_{n})>F(t); for fixed n, increasing m decreases the variance but increases the bias, so we could solve for the minimum expected confidence interval length. For this section, there is no inherent bias, and we cannot minimize the expected confidence interval length based on the linear approximation, since that approximation would suggest using m=n to minimize the variance. Instead we solve for an m_{n} using two expected mean squared errors (MSEs).
Let the sum of the expected MSEs as a function of m^{\dagger}_{n} be
\displaystyle Q(m_{n}^{\dagger})  \displaystyle=  \displaystyle\text{E}\left[\left\{Y_{a^{\dagger}}^{t}/(m^{\dagger}_{n}/2)-F(t)\right\}^{2}+\left\{Y_{t}^{b^{\dagger}}/(m^{\dagger}_{n}/2)-F(t)\right\}^{2}\right] 
and let
\displaystyle m^{\dagger*}_{n}  \displaystyle=  \displaystyle\underset{m^{\dagger}_{n}}{\operatorname{argmin}}~{}Q(m_{n}^{% \dagger}). 
Using approximations similar to (2.15) and (2.16), we approximate the distributions of Y_{a^{\dagger}}^{t} and Y_{t}^{b^{\dagger}} as
Y_{a^{\dagger}}^{t}~{}\dot{\sim}~{}\text{Binomial}\left[\frac{m^{\dagger}_{n}}{2},F(t)-\left\{\frac{m^{\dagger}_{n}f(t)}{4ng(t)}\right\}\right];\quad Y_{t}^{b^{\dagger}}~{}\dot{\sim}~{}\text{Binomial}\left[\frac{m^{\dagger}_{n}}{2},F(t)+\left\{\frac{m^{\dagger}_{n}f(t)}{4ng(t)}\right\}\right].  (3.2) 
This gives the approximation,
\begin{array}[]{l l}Q(m_{n}^{\dagger})&\approx 2\left[\frac{2F(t)}{m_{n}^{\dagger}}-\frac{2\{F(t)\}^{2}}{m_{n}^{\dagger}}-2\left(\frac{f(t)}{4ng(t)}\right)^{2}m_{n}^{\dagger}+\left(\frac{f(t)}{4ng(t)}\right)^{2}\left(m_{n}^{\dagger}\right)^{2}\right].\end{array}  (3.3) 
After taking the derivative of (3.3) with respect to m_{n}^{\dagger} and then setting it to zero, we can find the numerical solution of m_{n}^{\dagger*} from
\frac{dQ(m_{n}^{\dagger})}{dm_{n}^{\dagger}}\approx\left\{\frac{f(t)}{4ng(t)}\right\}^{2}(m_{n}^{\dagger})^{3}-\left\{\frac{f(t)}{4ng(t)}\right\}^{2}(m_{n}^{\dagger})^{2}-F(t)+\left\{F(t)\right\}^{2}=0.  (3.4) 
From Cardano’s formula ([4]), we can compute the order of m_{n}^{\dagger*}:
\begin{array}[]{l l}m_{n}^{\dagger*}&\approx\sqrt[3]{\left(\frac{1}{27}+\frac{F(t)-\{F(t)\}^{2}}{2[\{f(t)\}^{2}/\{4ng(t)\}^{2}]}\right)+\sqrt{\left(\frac{1}{27}+\frac{F(t)-\{F(t)\}^{2}}{2[\{f(t)\}^{2}/\{4ng(t)\}^{2}]}\right)^{2}-\left(\frac{1}{9}\right)^{3}}}\\ &\quad+\sqrt[3]{\left(\frac{1}{27}+\frac{F(t)-\{F(t)\}^{2}}{2[\{f(t)\}^{2}/\{4ng(t)\}^{2}]}\right)-\sqrt{\left(\frac{1}{27}+\frac{F(t)-\{F(t)\}^{2}}{2[\{f(t)\}^{2}/\{4ng(t)\}^{2}]}\right)^{2}-\left(\frac{1}{9}\right)^{3}}}+\left(\frac{1}{3}\right)\\ &=O(n^{2/3}),\end{array} 
which is a real number. This method thus yields the same conclusion as before: m_{n} should be of order n^{2/3}.
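The closed-form root above is easy to evaluate and to cross-check. Dividing (3.4) by c^{2}=\{f(t)/(4ng(t))\}^{2} gives the rescaled cubic m^{3}-m^{2}+\{F(t)^{2}-F(t)\}/c^{2}=0, so the Cardano expression can be verified against a numerical root finder. A minimal Python sketch (the function name is ours):

```python
import numpy as np

def m_dagger_star(F_t, f_t, g_t, n):
    """Closed-form (Cardano) real root for m_n^{dagger*}; arguments are the
    (estimated) values of F(t), f(t), g(t) and the sample size n."""
    c2 = (f_t / (4.0 * n * g_t)) ** 2              # {f(t)/(4 n g(t))}^2 from (3.4)
    A = 1.0 / 27.0 + (F_t - F_t ** 2) / (2.0 * c2)
    s = np.sqrt(A ** 2 - (1.0 / 9.0) ** 3)         # positive in the large-n regime
    return np.cbrt(A + s) + np.cbrt(A - s) + 1.0 / 3.0
```

The O(n^{2/3}) order can be seen directly: multiplying n by 8 roughly multiplies the root by 8^{2/3}=4.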
To estimate m^{\dagger*}_{n}, we need estimates of F(t), f(t) and g(t). The value g(t) can be estimated by kernel density estimation applied to the assessment times C_{i}, i=1,\ldots,n. To estimate F(t) and f(t), we use a slightly modified version of the smoothed maximum likelihood estimation (SMLE) introduced by [25]. Details are in [15, Section S.3].
3.3 Confidence Interval with Monotonic Adjustments
Before describing adjustments for monotonicity, we introduce an additional practical adjustment. We set the lower confidence limit to 0 when the NPMLE \hat{F}_{n}(t)=0, and the upper confidence limit to 1 when the NPMLE \hat{F}_{n}(t)=1. This adjustment was motivated by preliminary simulations showing that the edges of the distribution had poor coverage. Besides leading to better coverage, it ensures that the confidence limits enclose the NPMLE when it reaches those extremes.
Note that we assume that F(t) is a monotonically increasing function of t. However the lower and upper limits of the confidence interval (3.1) are not necessarily monotonically increasing functions of t. In this section, we consider two adjustments to construct monotonically increasing lower and upper limits of F(t).
Suppose that the goal is to construct k^{\prime} pointwise confidence intervals about F(t_{i}), i=1,\ldots,k^{\prime}, with 0<t_{1}\leq t_{2}\leq\ldots\leq t_{k^{\prime}}<\infty, whose limits are monotonically increasing in t. Let the lower and upper limits of the confidence interval at t=t_{i} be L_{F(t_{i})} and U_{F(t_{i})}, respectively.
First we consider the edge adjustment. Let L_{F(t^{\text{min}}_{\text{L}})}=\text{min}\left\{L_{F(t)}\right\} for 0<t\leq t_{\lceil k^{\prime}/2\rceil}, and L_{F(t^{\text{max}}_{\text{L}})}=\text{max}\left\{L_{F(t)}\right\} for t_{\lceil k^{\prime}/2\rceil}\leq t<\infty. Then set L_{F(t)}=L_{F(t^{\text{min}}_{\text{L}})} for t\leq t^{\text{min}}_{\text{L}}, and L_{F(t)}=L_{F(t^{\text{max}}_{\text{L}})} for t^{\text{max}}_{\text{L}}\leq t. Analogously, let U_{F(t^{\text{min}}_{\text{U}})}=\text{min}\left\{U_{F(t)}\right\} for 0<t\leq t_{\lceil k^{\prime}/2\rceil}, and U_{F(t^{\text{max}}_{\text{U}})}=\text{max}\left\{U_{F(t)}\right\} for t_{\lceil k^{\prime}/2\rceil}\leq t<\infty. Then set U_{F(t)}=U_{F(t^{\text{min}}_{\text{U}})} for t\leq t^{\text{min}}_{\text{U}}, and U_{F(t)}=U_{F(t^{\text{max}}_{\text{U}})} for t^{\text{max}}_{\text{U}}\leq t [see the second panel in Figure S1 of 15].
Now consider an adjustment we call the lower-upper adjustment. For the lower limit, if t_{i}\leq t_{i+1} but L_{F(t_{i+1})}\leq L_{F(t_{i})}, then we replace L_{F(t_{i+1})} with L_{F(t_{i})}. We start this lower limit adjustment from the smallest t and proceed to the largest t. For the upper limit, if t_{i}\leq t_{i+1} but U_{F(t_{i+1})}\leq U_{F(t_{i})}, then we replace U_{F(t_{i})} with U_{F(t_{i+1})}. We start this upper limit adjustment from the largest t and proceed to the smallest t. Then the lower and upper limits of F(t) become monotonically increasing functions, and this adjustment shortens the length of the confidence interval [see the third panel in Figure S1 of 15].
Now consider the “middle value” adjustment. Let L^{\text{1}}_{F(t)} be the lower limit function from the lower-upper adjustment just described. Let L^{\text{2}}_{F(t)} be similar, except that we start at the opposite end: as before, if t_{i}\leq t_{i+1} but L_{F(t_{i+1})}\leq L_{F(t_{i})}, then L^{\text{2}}_{F(t_{i+1})} is defined as L_{F(t_{i})}, but we start this adjustment from the largest t and proceed to the smallest t. Then the lower limit with the middle value adjustment is L^{\text{M}}_{F(t)}=\{L^{\text{1}}_{F(t)}+L^{\text{2}}_{F(t)}\}/2. Analogously, we define U^{\text{1}}_{F(t)} as an upper limit ensuring monotonicity by proceeding from the smallest to the largest t, and U^{\text{2}}_{F(t)} as the upper limit ensuring monotonicity by proceeding from the largest to the smallest t. Then the upper limit with the middle value adjustment is U^{\text{M}}_{F(t)}=\{U^{\text{1}}_{F(t)}+U^{\text{2}}_{F(t)}\}/2 [see the fourth panel in Figure S1 of 15]. The lower and upper limits of F(t) with the middle value adjustment are then monotonically increasing functions.
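Both adjustments amount to running-maximum and running-minimum passes over the sequences of limits, which can be sketched with numpy accumulate operations (the function names are ours; limits are assumed stored in increasing order of t):

```python
import numpy as np

def lower_upper_adjust(lower, upper):
    """Lower-upper adjustment: carry the running maximum of the lower limits
    forward in t, and the running minimum of the upper limits backward in t."""
    L = np.maximum.accumulate(np.asarray(lower, dtype=float))
    U = np.minimum.accumulate(np.asarray(upper, dtype=float)[::-1])[::-1]
    return L, U

def middle_value_adjust(lower, upper):
    """Middle value adjustment: average the forward and backward monotone
    envelopes of each limit sequence."""
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    L1 = np.maximum.accumulate(lower)              # smallest t to largest t
    L2 = np.minimum.accumulate(lower[::-1])[::-1]  # largest t to smallest t
    U1 = np.maximum.accumulate(upper)
    U2 = np.minimum.accumulate(upper[::-1])[::-1]
    return (L1 + L2) / 2.0, (U1 + U2) / 2.0
```

Each returned sequence is monotonically increasing because running maxima and minima (and averages of monotone sequences) are monotone.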
4 Simulation Studies
4.1 Simulation 1
In this section, we perform simulation studies. We begin with a simulation described as Case 1 (f(t) is Exp(1), and g(t) is Exp(1) for 0\leq t<\infty) with n=1,000, using the confidence interval methods described in [26, Section 9.5]. Specifically, we set a=0 and b as the maximum of the assessment times (see their equation 9.75). We used the triweight kernel and set the bandwidth to h=F^{-1}(0.99)\,n^{-1/4}, where here F^{-1}(0.99)=4.605 comes from the true distribution, so h\approx\text{(range of the assessment times)}\times n^{-1/4}. We generated 1,000 bootstrap samples, and for the 95% confidence interval we used the 20th and 980th ordered bootstrap values [to adjust for undercoverage, see 26, p. 272]. Despite these adjustments, there was substantial undercoverage [see 15, Figure S2]. There could be other choices for how to implement those confidence intervals, but since this implementation did not perform well, we do not include these methods in the full simulation results.
For the full simulation, we consider three possible cases:

f(t) is Exp(1), f(t)=\exp(-t), and g(t) is Exp(1), g(t)=\exp(-t) for 0\leq t<\infty;

f(t) is Gamma(3,1), f(t)=[1/\{3\Gamma(1)\}]\exp\{-(t/3)\} for 0\leq t<\infty, and g(t) is Unif(0,5), g(t)=1/5 for 0\leq t\leq 5;

f(t) is the mixture of Gamma(3,1) and Weibull(8,10),
f(t)=.5[1/\{3\Gamma(1)\}]\exp\{-(t/3)\}+.5\{(8/10)(t/10)^{7}\}\exp\{-(t/10)^{8}\} for 0\leq t<\infty, and g(t) is Unif(0,15), g(t)=1/15 for 0\leq t\leq 15.
We then constructed the two-sided 95% CIs (3.1) for 0<F(t)<1. We use the likelihood ratio-based CI introduced by [1] as a benchmark for comparison with our new methods.
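To illustrate the mechanics behind these simulations, the following minimal Python sketch estimates the coverage of the unadjusted interval (3.1) at the median of F for Case 1. The function names, the replication count, and the choice of m^{\dagger}_{n} as n^{2/3} rounded to an even integer are illustrative assumptions, not the exact settings used in the full simulations.

```python
import numpy as np
from scipy.stats import beta

def cp_interval(y, m, alpha=0.05):
    # two-sided Clopper-Pearson limits based on y events out of m
    lo = 0.0 if y == 0 else beta.ppf(alpha / 2, y, m - y + 1)
    hi = 1.0 if y == m else beta.ppf(1 - alpha / 2, y + 1, m - y)
    return lo, hi

def abf_coverage(n=200, reps=2000, alpha=0.05, seed=0):
    """Estimated coverage of the unadjusted interval (3.1) at the median of F
    for Case 1: event times T ~ Exp(1), assessment times C ~ Exp(1)."""
    rng = np.random.default_rng(seed)
    t = np.log(2.0)                              # F(t) = 1/2 at the median
    F_t = 1.0 - np.exp(-t)
    m = 2 * (int(round(n ** (2.0 / 3.0))) // 2)  # about n^(2/3), forced even
    half = m // 2
    hits = 0
    for _ in range(reps):
        C = rng.exponential(size=n)              # assessment times
        D = rng.exponential(size=n) <= C         # current status indicators
        order = np.argsort(C)
        Ds = D[order]
        pos = np.searchsorted(C[order], t)       # number of C's at or below t
        pos = min(max(pos, half), n - half)      # keep the window in range
        y = int(Ds[pos - half:pos + half].sum()) # events among the m nearest times
        lo, hi = cp_interval(y, m, alpha)
        hits += (lo <= F_t <= hi)
    return hits / reps
```

Because the Clopper-Pearson limits are conservative, the estimated coverage should come out near, and typically slightly above, the nominal 95%.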
In Figure 3, we plot the simulated coverage and the average lengths of the six different CIs for 0<F(t)<1: the likelihood ratio-based CI, the ABF CI with edge & lower-upper adjustment, the mid-P ABF CI with edge & lower-upper adjustment, the ABF CI with edge & middle value adjustment, the mid-P ABF CI with edge & middle value adjustment, and the valid CI with m_{n}=n^{2/3}. Each simulation used 10,000 replications and had n=50. The cases n=200 and n=1,000 are plotted in Figures S3 and S4 of [15].
The figures show that the ABF CIs generally have shorter length than the likelihood ratio-based CIs, and generally better coverage. When the likelihood ratio-based CIs have adequate coverage, the ABF CIs have shorter length. The ABF CIs tend to be conservative for small n (e.g., n=50), but this conservativeness can be removed by using the mid-P approach.
Since the lower-upper adjustment shortens the length of the CI, the ABF CI with the lower-upper adjustment is relatively shorter than the ABF CI with edge and middle value adjustment. We do not recommend combining the mid-P approach with the lower-upper adjustment, since that combination shortens the CI twice over, to the point that the coverage is poor (see Case 3 in Figure 3). Therefore, we recommend using either the ABF CI with edge and lower-upper adjustment or the mid-P ABF CI with edge and middle value adjustment.
4.2 Simulation 2
In this section, we consider more extensive and systematic simulations. We assume nine scenarios with g(t)\sim\text{Unif}(0,1) and f(t)\sim\text{Beta}(\alpha,\beta), where \alpha=1,\beta=50; \alpha=1,\beta=7; \alpha=1,\beta=2; \alpha=1,\beta=1; \alpha=2,\beta=1; \alpha=7,\beta=1; \alpha=50,\beta=1; \alpha=100,\beta=100; and \alpha=.1,\beta=.1. Figure 4 shows simulated coverage and average lengths of 95% CIs based on the likelihood ratio-based CI and the mid-P ABF CI with edge and middle value adjustment when n=50. The ABF CIs generally have shorter length than the likelihood ratio-based CIs, and generally better coverage. When the function F(t) rises very steeply (Scenario 8), the ABF CI has poor coverage in regions where the slope changes sharply. However, the coverage approaches the nominal rate as n becomes larger. Figures S5 and S6 in [15] show simulated coverage and average lengths of 95% CIs when n=200 and n=1,000.
[2] considered a setting in which t lies in a region of steep ascent of the distribution function F(t). They assumed g\sim\text{Unif}(0,1) and
F(t)=\left\{\begin{array}[]{l l}t&\quad\text{for $t\leq.25$ };\\ .25+(20{,}000)(t-.25)^{2}&\quad\text{for $.25<t\leq\left(.25+\frac{1}{200}\right)$};\\ .75+\frac{.25}{\left(.75-\frac{1}{200}\right)}\left(t-.25-\frac{1}{200}\right)&\quad\text{for $\left(.25+\frac{1}{200}\right)<t\leq 1$}.\end{array}\right. 
This case is similar to Scenario 8. However, in this case, there are two points with discontinuous derivatives: where F(t)=.25 and where F(t)=.75. Figure S7 in [15] shows simulated coverage and average lengths of 95% CIs when n=50, 200, and 1,000. The ABF CI has very poor coverage around the points with discontinuous derivatives; even as n becomes larger, the coverage remains poor around those points. However, the ABF CI has good coverage at the edges and in the center. This simulation setting tells us that the ABF CI can perform poorly in areas where F(t) changes very dramatically, possibly with discontinuous derivatives.
5 Analyzing the hepatitis A data in Bulgaria
[14] analyzed data on anti-hepatitis A antibody responses in Bulgaria. For the purpose of this analysis, we assume that once a person has been infected with hepatitis A, that person will test positive for anti-hepatitis A antibodies throughout the remainder of his or her life. Further, we assume that the force of infection of hepatitis A does not change over time. Thus, a cross-sectional sample can be interpreted as current status data, where the time scale is age and the event is a positive test for anti-hepatitis A antibodies. Table 2 in [14] contains the data, consisting of 850 people whose ages range from 1-86 years. At each single-year age group we have the number of people tested and the number who tested positive (a few ages had no one tested). The main goal here is to construct pointwise confidence intervals for the distribution of the age at which people were first exposed to hepatitis A. Based on this current status data, we constructed the likelihood ratio-based CI, the mid-P ABF CI with edge and middle value adjustment, and the valid CI with m_{n}=n^{2/3}. In Figure 5, the mid-P ABF CIs are seen to be shorter than the likelihood ratio-based CIs in the middle of the age range. The mid-P ABF CIs are slightly wider than the likelihood ratio-based CIs at the right edge (especially when the NPMLE \hat{F}(t)=1). Some mid-P ABF CIs with edge and middle value adjustment do not contain the NPMLE values (see the middle panel in Figure 5). This may happen at some points t because the ABF CI is based on the local binomial-type responses, not on the NPMLE.
6 CONCLUSION
We introduced a new framework for CIs with current status data. We developed two new types of CIs: the valid CI and the ABF CI. The valid CI guarantees the nominal coverage rate, and its coverage approaches the nominal rate if m_{n} satisfies the asymptotic conditions. The valid method is simple and can be applied with continuous or discrete assessment distributions. The ABF CI does not guarantee the nominal rate, but its coverage rate asymptotically approaches the nominal rate if m^{\dagger}_{n} satisfies the asymptotic conditions. In a series of simulations, we compared our new CIs to the LRT CI; no single method outperforms the others in every situation. When guaranteed coverage is needed, the valid CIs are recommended. When an approximation with shorter confidence interval lengths is acceptable, either the LRT CI or the ABF CI is appropriate. When the failure time distribution changes rapidly in an area without a high density of assessments, the LRT CI often has coverage closer to nominal than the ABF CI; in most other cases, however, the ABF CI showed better coverage than the LRT CI, especially in areas away from the middle of the distribution.
Appendix A PROOFS
A.1 Proof of Equation 2.2
Given C_{i}=c_{i},
D_{i}\mid C_{i}=c_{i}~{}\sim~{}\text{Bernoulli}~{}\{F(c_{i})\}. 
Let D_{i}^{*} be a random variable such that D_{i}^{*}~{}\sim~{}\text{Bernoulli}~{}\{F(t)\}. The C_{i} used for the lower bound for F(t) are \leq t, so F(c_{i})\leq F(t). It is clear that a Bernoulli(p) random variable becomes stochastically larger as p increases, so D_{i}\mid C_{i}=c_{i} is less than or equal to a Bernoulli with parameter F(t) in the stochastic order ([24]), which we write D_{i}\mid C_{i}=c_{i}\preceq D_{i}^{*}.
Since D_{i} and D^{*}_{i} are independent sets of random variables with D_{i}\preceq D_{i}^{*} for each i, Y(a,t)\mid\mathbb{C}=\sum_{i:C_{i}\in[a,t]}(D_{i}\mid C_{i})\preceq B\mid\mathbb{C}\sim\text{Binomial}\{N(a,t),F(t)\} (see theorem 1.A.3, part b, of [20]).
Let
\begin{array}[]{l}g(y)=L(q;y,n)=\left\{\begin{array}[]{l l}0&\quad\text{if $y=0$}\\ Be\{1-q;y,n-y+1\}&\quad\text{if $y>0$}\end{array}\right.\end{array} 
be the lower qth one-sided Clopper-Pearson confidence limit. Since L(q;y,n) is a monotonically increasing function of y for fixed q and n, g\{Y(a,t)\}\mid\mathbb{c}\preceq g(B)\mid\mathbb{c} outside a null set of {\bf c} values. Therefore, for given q and N(a,t), outside a null set of {\bf c} values,
\text{Pr}[L\{q;Y(a,t),N(a,t)\}\leq F(t)\mid\mathbb{c}]\geq\text{Pr}[L\{q;B,N(a,t)\}\leq F(t)\mid\mathbb{c}]\geq q. 
Now take the expectation of each side over the distribution of {\bf C} to conclude that
\text{Pr}[L\{q;Y(a,t),N(a,t)\}\leq F(t)]\geq q.~{}~{}~{}\Box 
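The Clopper-Pearson guarantee underlying this proof, \text{Pr}[L\{q;Y,N\}\leq F(t)]\geq q, can be checked numerically in the simplified iid Bernoulli case by summing binomial probabilities exactly. The following Python sketch is illustrative, and the function names are ours:

```python
import numpy as np
from scipy.stats import beta, binom

def cp_lower(q, y, n):
    # one-sided lower Clopper-Pearson limit L(q; y, n); zero when y = 0
    return 0.0 if y == 0 else beta.ppf(1.0 - q, y, n - y + 1)

def exact_lower_coverage(q, n, p):
    # Pr[ L(q; Y, n) <= p ] computed exactly for Y ~ Binomial(n, p)
    ys = np.arange(n + 1)
    covered = np.array([cp_lower(q, y, n) <= p for y in ys])
    return float(binom.pmf(ys, n, p)[covered].sum())
```

Running this over a grid of p values shows the exact coverage never falls below q, in line with the inequality displayed above.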
A.2 Proof of Theorem 1a
This follows by conditioning on {\mathbb{C}} and noting that (2.2) and (2.3) include conditional probabilities given {\mathbb{C}}. With probability 1, the conditional probability that either L>F(t) or U<F(t) is no greater than the sum of these probabilities, which is no greater than \alpha/2+\alpha/2=\alpha. Because the conditional coverage probability is at least 1-\alpha (almost surely), the unconditional probability, namely the expectation of the conditional probability, is at least 1-\alpha as well. Centrality follows because each of the error probabilities is less than or equal to \alpha/2. \Box
A.3 Proof of Theorem 2.1
First consider the coverage rate of (2.5). Let C_{1}^{\prime},C_{2}^{\prime},\ldots be independent and identically distributed from G (i.e., they are the unordered assessment times). Let c^{\prime} be an atom of G, and let p_{c^{\prime}}>0 be its probability. By the strong law of large numbers, the sample proportion of C_{1}^{\prime},\ldots,C_{n}^{\prime} that equal c^{\prime} converges to p_{c^{\prime}} with probability 1. Consider an \omega for which this happens. Because m_{n}/n\rightarrow 0 by assumption, the number of C_{1}^{\prime},\ldots,C_{n}^{\prime} equaling c^{\prime} will exceed m_{n} for all but a finite number of n. Therefore, if t=c^{\prime} then a_{n} will equal t for all but finitely many n. Let D_{i}^{\prime} be the D_{j} associated with C_{i}^{\prime}. Then there are at least m_{n} D_{i}^{\prime} values with C_{i}^{\prime}=t corresponding to iid Bernoullis with probability F(t). The lower limit is the same as the one based on the empirical distribution function from a sample of at least m_{n} iid observations from distribution F, and similarly for the upper limit. It is known that the coverage probability for a confidence interval based on an empirical distribution function converges to 1-\alpha as the sample size tends to \infty.
We have shown that, with probability 1, the conditional coverage probability in (2.5) tends to 1-\alpha as n\rightarrow\infty at each atom of G. The unconditional coverage probability, namely the expectation of the conditional coverage probability, also tends to 1-\alpha at each atom of G by the bounded convergence theorem.
Now consider the coverage rate of (2.6). We have already shown that, except for finitely many n, the lower and upper intervals correspond to those using the empirical distribution function of a sample of m_{n} from F, in which case the lower limit cannot exceed the upper limit. Therefore, the coverage probability of the modified interval [L^{*},U^{*}] tends to 1-\alpha as n\rightarrow\infty as well. \Box
A.4 Proof of Theorem 2.2
In our notation, C_{1}\leq C_{2}\leq\ldots\leq C_{n} are ordered and T_{i} is the survival time associated with C_{i}. It is also convenient to think instead of the infinite set of iid pairs (C_{1}^{\prime},T_{1}^{\prime}),(C_{2}^{\prime},T_{2}^{\prime}),\ldots, where the C_{i}^{\prime} are unordered. Thus, C_{1},\ldots,C_{n} are the order statistics of C_{1}^{\prime},\ldots,C_{n}^{\prime}, and T_{i}^{\prime} is the survival time associated with the ith unordered assessment time C_{i}^{\prime}. Assume the following conditions:
F\mbox{ and }G\mbox{ are continuous on $\Re^{+}$ }\mbox{ and }F^{\prime}(t)=f(t)>0,\ G^{\prime}(t)=g(t)>0.  (A.1) 
Note that (A.1) concerns the derivative of F and G at the single point t.
Go backwards in time from t to find the 1st C^{\prime} (the one closest to t among those less than t), 2nd C^{\prime} (the one second closest to t among those less than t),\ldots,m_{n}th C^{\prime}. Let a_{n} denote the m_{n}th such point, or 0 if there are fewer than m_{n} C^{\prime}s less than t. Note that a_{n} is a function of the C_{i}^{\prime}s, hence is a random variable, but m_{n} is nonrandom. Assume the following:
m_{n}\rightarrow\infty,\ \ m_{n}/n\rightarrow 0.  (A.2) 
These conditions imply
a_{n}\rightarrow t\mbox{ almost surely.}  (A.3) 
We are interested in conditional distribution functions given assessment times. Let {\cal C}^{\prime}_{\infty}=\{C_{1}^{\prime},C_{2}^{\prime},\ldots\}, and let {\cal C}^{\prime}_{[a_{n},t]} be the collection of C_{1}^{\prime},C_{2}^{\prime},\ldots,C_{n}^{\prime} that are in the interval [a_{n},t]. Given {\cal C}^{\prime}_{\infty}, the summands of Y_{a_{n}}^{t}, the indicators of death by the times {\cal C}^{\prime}_{[a_{n},t]}, are independent Bernoullis with probability parameters F(c_{i}^{\prime}), where each c_{i}^{\prime} is the realized value of C_{i}^{\prime} among those in {\cal C}^{\prime}_{[a_{n},t]}. With probability 1, the number of those independent Bernoullis is m_{n} for all sufficiently large n. We try to approximate the conditional distribution of Y_{a_{n}}^{t} given {\cal C}^{\prime}_{\infty} by a binomial with parameters \{m_{n},F(t)\}.
Note that, given {\cal C}^{\prime}_{\infty}, Y_{a_{n}}^{t} is stochastically larger than or equal to a sum of m_{n} iid Bernoullis with probability parameter F(a_{n}), and stochastically smaller than or equal to a sum of m_{n} iid Bernoullis with parameter F(t). That is, given {\cal C}^{\prime}_{\infty}, Y_{a_{n}}^{t} is stochastically between a binomial random variable W_{m_{n}}^{\prime}={\rm bin}\{m_{n},F(a_{n})\} and a binomial random variable W_{m_{n}}={\rm bin}\{m_{n},F(t)\}. Therefore, we seek necessary and sufficient conditions for m_{n} such that the conditional distributions of
Z_{n}={W_{m_{n}}-m_{n}F(t)\over\sqrt{m_{n}F(t)\{1-F(t)\}}}\mbox{ and }Z_{n}^{\prime}={W_{m_{n}}^{\prime}-m_{n}F(t)\over\sqrt{m_{n}F(t)\{1-F(t)\}}}  (A.4) 
given {\cal C}^{\prime}_{\infty} both converge to standard normals as n\rightarrow\infty. The result for Z_{n} follows immediately from the ordinary CLT, so we need only find necessary and sufficient conditions under which the result holds for Z_{n}^{\prime}.
Write
\displaystyle Z_{n}^{\prime}  \displaystyle=  \displaystyle{W_{m_{n}}^{\prime}-m_{n}F(a_{n})\over\sqrt{m_{n}F(a_{n})\{1-F(a_{n})\}}}\sqrt{{F(a_{n})\{1-F(a_{n})\}\over F(t)\{1-F(t)\}}}+{m_{n}\{F(a_{n})-F(t)\}\over\sqrt{m_{n}F(t)\{1-F(t)\}}}  (A.7)  
\displaystyle=  \displaystyle{W_{m_{n}}^{\prime}-m_{n}F(a_{n})\over\sqrt{m_{n}F(a_{n})\{1-F(a_{n})\}}}\sqrt{{F(a_{n})\{1-F(a_{n})\}\over F(t)\{1-F(t)\}}}+{\sqrt{m_{n}}\{F(a_{n})-F(t)\}\over\sqrt{F(t)\{1-F(t)\}}}  (A.8) 
Conditioned on {\cal C}^{\prime}_{\infty}, the a_{n} are fixed constants. The conditional characteristic function of Z_{n}^{\prime} given C_{1}^{\prime}=c_{1}^{\prime},C_{2}^{\prime}=c_{2}^{\prime},\ldots is
\psi_{n}(t)\exp\left\{it{\sqrt{m_{n}}\{F(a_{n})-F(t)\}\over\sqrt{F(t)\{1-F(t)\}}}\right\}, 
where \psi_{n}(t) is the characteristic function of the left term of (A.8). By the initial assumptions (A.2), the set of \omega for which m_{n}F\{a_{n}(\omega)\}[1-F\{a_{n}(\omega)\}]\rightarrow\infty has probability 1. For each such \omega, the conditional distribution of
{W_{m_{n}}^{\prime}-m_{n}F(a_{n})\over\sqrt{m_{n}F(a_{n})\{1-F(a_{n})\}}} 
given C_{1}^{\prime}=c_{1}^{\prime},C_{2}^{\prime}=c_{2}^{\prime},\ldots satisfies the Lindeberg condition, and therefore converges in distribution to a standard normal [see Example 8.15 of 18]. This, coupled with (A.3) and Slutsky’s theorem, implies that the conditional distribution of the left term of (A.8), given C_{1}^{\prime}=c_{1}^{\prime},C_{2}^{\prime}=c_{2}^{\prime},\ldots converges to a standard normal. Accordingly, its conditional characteristic function \psi_{n}(t) converges to \exp(-t^{2}/2) as n\rightarrow\infty. The conditional characteristic function of Z_{n}^{\prime} given C_{1}^{\prime}=c_{1}^{\prime},C_{2}^{\prime}=c_{2}^{\prime},\ldots converges to \exp(-t^{2}/2) (namely that of a standard normal) if and only if
\exp\left\{it{\sqrt{m_{n}}\{F(a_{n})-F(t)\}\over\sqrt{F(t)\{1-F(t)\}}}\right\}\rightarrow 1, 
which occurs if and only if m_{n}^{1/2}\{F(a_{n})-F(t)\}\rightarrow 0. But
\sqrt{m_{n}}\{F(a_{n})-F(t)\}=\left({F(a_{n})-F(t)\over a_{n}-t}\right)\sqrt{m_{n}}(a_{n}-t), 
and \{F(a_{n})-F(t)\}/(a_{n}-t)\rightarrow f(t)>0 by assumption (A.1). Therefore, m_{n}^{1/2}\{F(a_{n})-F(t)\}\rightarrow 0 if and only if m_{n}^{1/2}(t-a_{n})\rightarrow 0.
We have shown that, under assumptions (A.1) and (A.2), the conditional distributions of Z_{n}^{\prime} and Z_{n} given C_{1}^{\prime}=c_{1}^{\prime},C_{2}^{\prime}=c_{2}^{\prime},\ldots both converge to standard normals as n\rightarrow\infty if and only if m_{n}^{1/2}(a_{n}-t) converges almost surely to 0.
We show next that m_{n}^{1/2}(a_{n}-t) converges almost surely to 0 if and only if m_{n}/n^{2/3}\rightarrow 0 as n\rightarrow\infty. Assume first that m_{n}/n^{2/3}\rightarrow 0. The same argument as above, but applied to G instead of F, shows that under conditions (A.1) and (A.2), m_{n}^{1/2}(a_{n}-t)\rightarrow 0 if and only if m_{n}^{1/2}\{G(a_{n})-G(t)\}\rightarrow 0. We will, therefore, demonstrate that m_{n}^{1/2}\{G(a_{n})-G(t)\}\rightarrow 0 almost surely.
It suffices to show that
P[m_{n}^{1/2}\{G(t)-G(a_{n})\}>\epsilon\mbox{ i.o.}]=0  (A.9) 
for each \epsilon>0 (where i.o. means infinitely often, i.e., for infinitely many n). By the Borel Cantelli lemma, we need only show that
\sum_{n=1}^{\infty}P(E_{n})<\infty,  (A.10) 
where
E_{n}\mbox{ is the event that }m_{n}^{1/2}\{G(t)-G(a_{n})\}>\epsilon.  (A.11) 
Note that E_{n} occurs if and only if fewer than m_{n} of G(C_{1}^{\prime}),G(C_{2}^{\prime}),\ldots,G(C_{n}^{\prime}) lie in the interval [G(t)-\epsilon/m_{n}^{1/2},G(t)]. Each G(C_{i}^{\prime}) follows a uniform distribution, so the number N_{n} of G(C_{1}^{\prime}),\ldots,G(C_{n}^{\prime}) in the interval [G(t)-\epsilon/m_{n}^{1/2},G(t)] has a binomial distribution with parameters (n,\epsilon/m_{n}^{1/2}).
[22] shows that for a binomial (n,p) random variable X, if x\leq np, then P(X\geq x)\geq 1-\Phi[(x-np)/\{np(1-p)\}^{1/2}]. Equivalently, P(X<x)\leq\Phi[(x-np)/\{np(1-p)\}^{1/2}]. We can apply this to conclude that when m_{n}<n\epsilon/m_{n}^{1/2} (which it will be for large n because m_{n}/n^{2/3}\rightarrow 0),
P(N_{n}<m_{n})\leq\Phi\left[{m_{n}-{n\epsilon\over\sqrt{m_{n}}}\over\sqrt{n\left({\epsilon\over\sqrt{m_{n}}}\right)\left(1-{\epsilon\over\sqrt{m_{n}}}\right)}}\right]. 
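As a sanity check, the inequality of [22] quoted above can be verified numerically for illustrative parameter values (a Python sketch; the function name and the parameters are ours):

```python
import numpy as np
from scipy.stats import binom, norm

def slud_lower_bound(x, n, p):
    # normal lower bound on P(X >= x) for X ~ Binomial(n, p), stated for x <= n*p
    return 1.0 - norm.cdf((x - n * p) / np.sqrt(n * p * (1.0 - p)))
```

Comparing this bound with the exact tail probability `binom.sf(x - 1, n, p)` over a grid of x below n*p shows the exact probability always dominates the bound.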
It is known that 1-\Phi(x)\leq\phi(x)/x for x\geq 0 [see section 11.11.2 of 18], so by symmetry, \Phi(x)\leq\phi(x)/(-x) for x\leq 0 as well. In our case, x=x_{n}, where
x_{n}={m_{n}-{n\epsilon\over\sqrt{m_{n}}}\over\sqrt{n\left({\epsilon\over\sqrt{m_{n}}}\right)\left(1-{\epsilon\over\sqrt{m_{n}}}\right)}}. 
Therefore, it suffices to show that \sum_{n=1}^{\infty}\phi(x_{n})/(-x_{n})<\infty. But if n\rightarrow\infty and m_{n}/n^{2/3}\rightarrow 0, then -x_{n}\rightarrow\infty. Accordingly, we can ignore the denominator of \phi(x_{n})/(-x_{n}) and show that \sum_{n=1}^{\infty}\phi(x_{n})<\infty.
To show that
\sum_{n=1}^{\infty}{\phi\left[{m_{n}-{n\epsilon\over\sqrt{m_{n}}}\over\sqrt{n\left({\epsilon\over\sqrt{m_{n}}}\right)\left(1-{\epsilon\over\sqrt{m_{n}}}\right)}}\right]}<\infty, 
write the nth term d_{n} of the sum as
\displaystyle d_{n}  \displaystyle=  \displaystyle(2\pi)^{-1/2}\exp\left\{-{{n^{2}\over 2m_{n}}\left(\epsilon-{m_{n}^{3/2}\over n}\right)^{2}\over{n\epsilon\over\sqrt{m_{n}}}\left(1-{\epsilon\over\sqrt{m_{n}}}\right)}\right\}=(2\pi)^{-1/2}\exp\left\{-{{n\over 2\sqrt{m_{n}}}\left(\epsilon-{m_{n}^{3/2}\over n}\right)^{2}\over\epsilon\left(1-{\epsilon\over\sqrt{m_{n}}}\right)}\right\}  (A.12)  
\displaystyle=(2\pi)^{-1/2}\exp\left\{-{(1/2)\left({n^{2/3}\over m_{n}}\right)\sqrt{m_{n}}\,n^{1/3}\left(\epsilon-{m_{n}^{3/2}\over n}\right)^{2}\over\epsilon\left(1-{\epsilon\over\sqrt{m_{n}}}\right)}\right\}.  (A.13) 
The denominator inside the exponent is no greater than \epsilon, while n^{2/3}/m_{n}\rightarrow\infty, m_{n}^{1/2}\rightarrow\infty, and (\epsilon-m_{n}^{3/2}/n)\rightarrow\epsilon as n\rightarrow\infty. It follows that d_{n} is at most
\exp(-\lambda n^{1/3}) 
for all n sufficiently large, where \lambda>0. Now apply the integral test for infinite sums to conclude that \sum_{n=1}^{\infty}\exp(-\lambda n^{1/3})<\infty because \int_{1}^{\infty}\exp(-\lambda x^{1/3})dx<\infty. We have demonstrated condition (A.10). By the Borel Cantelli lemma, (A.9) holds.
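The finiteness of this integral can be confirmed directly: substituting u=\lambda x^{1/3} gives \int_{1}^{\infty}\exp(-\lambda x^{1/3})dx=(3/\lambda^{3})e^{-\lambda}(\lambda^{2}+2\lambda+2), a computation of ours that the following sketch verifies numerically:

```python
import math
from scipy.integrate import quad

lam = 1.0
numeric, abserr = quad(lambda x: math.exp(-lam * x ** (1.0 / 3.0)), 1.0, math.inf)
# closed form from the substitution u = lam * x^(1/3)
closed = 3.0 * math.exp(-lam) * (lam ** 2 + 2.0 * lam + 2.0) / lam ** 3
```

The numerical and closed-form values agree, so the sum above is finite by the integral test.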
We have shown that
\displaystyle m_{n}/n^{2/3}\rightarrow 0  \displaystyle\Rightarrow  \displaystyle m_{n}^{1/2}\{G(a_{n})-G(t)\}\rightarrow 0\mbox{ a.s.}  (A.14)  
\displaystyle\Rightarrow  \displaystyle m_{n}^{1/2}(a_{n}-t)\rightarrow 0\mbox{ a.s.}  (A.15)  
\displaystyle\Rightarrow  \displaystyle\mbox{given }{\cal C}^{\prime}_{\infty},\mbox{ the conditional distribution functions of }Z_{n}\mbox{ and }Z_{n}^{\prime}  (A.17)  
\displaystyle\mbox{of }(\mbox{A.4})\mbox{ both converge to standard normals a.s.} 
For the reverse direction, suppose that m_{n}/n^{2/3} does not converge to 0. Then there is some subsequence \{K\}\subset\{1,2,\ldots\} such that m_{k}/k^{2/3} converges to either a positive number or +\infty as k\rightarrow\infty with k\in\{K\}. We shall show that in either case, (A.9) cannot hold. With E_{n} defined by (A.11), along the subsequence \{K\}, we have
\displaystyle P(E_{k}\mbox{ i.o.})  \displaystyle=  \displaystyle P\left[\cap_{j\in\{K\}}\cup_{r\geq j,\,r\in\{K\}}E_{r}\right]  (A.18)  
\displaystyle=  \displaystyle\lim_{j\rightarrow\infty,\,j\in\{K\}}P(\cup_{r\geq j,\,r\in\{K\}}% E_{r})  (A.19)  
\displaystyle\geq  \displaystyle\liminf_{j\rightarrow\infty,\,j\in\{K\}}P(E_{j})  (A.20) 
Also, m_{j}/j^{2/3} converges to a positive constant or +\infty as j\rightarrow\infty along the subsequence \{K\}, so m_{j}^{3/2}/j converges to B, where B is a positive constant or +\infty. This implies that for \epsilon<B, m_{j}-j\epsilon/m_{j}^{1/2}\geq 0 for all sufficiently large j. Consequently, for \epsilon<B, P(E_{j})\geq 1/2 for all sufficiently large j. By inequality (A.20), P(E_{k}\ \mbox{i.o.})\geq 1/2. This certainly precludes (A.9). This completes the proof that if m_{n}/n^{2/3} does not converge to 0, the conditional distributions of Z_{n} and Z_{n}^{\prime} given {\cal C}^{\prime}_{\infty} cannot both converge to standard normals almost surely as n\rightarrow\infty.
We have shown that the conditional distributions of Z_{n}^{\prime} and Z_{n} given {\cal C}^{\prime}_{\infty} both converge to standard normals a.s. as n\rightarrow\infty if and only if m_{n}/n^{2/3}\rightarrow 0.
Acknowledgements
We thank the reviewers in advance for their comments.
SUPPLEMENTARY MATERIALS (Valid and Approximately Valid Confidence Intervals for Current Status Data) \slink[url]DOI: .pdf \sdescriptionThe supplement contains mathematical details and figures.
References
 [1] Banerjee, Moulinath and Wellner, Jon A. (2001). Likelihood ratio tests for monotone functions. Ann. Statist.. 29 1699–1731. \MR1891743
 [2] Banerjee, Moulinath and Wellner, Jon A. (2005). Confidence intervals for current status data. Scand. J. Statist.. 32 405–424. \MR2204627
 [3] Banerjee, Moulinath and Wellner, Jon A. (2005). Score statistics for current status data: comparisons with likelihood ratio and Wald statistics. Int. J. Biostat.. 1 Art. 3, 29. \MR2232228
 [4] Cardano, Girolamo (1993). Ars magna or The rules of algebra. Dover Publications, Inc., New York. \MR1254210
 [5] Choi, Byeong Yeob and Fine, Jason P. and Brookhart, M. Alan (2013). Practicable confidence intervals for current status data. Stat. Med.. 32 1419–1428. \MR3045910
 [6] Clopper, CJ and Pearson, Egon S (1934). The use of confidence or fiducial limits illustrated in the case of the binomial. Biometrika. 26 404–413.
 [7] Ferguson, Thomas S. (1996). A course in large sample theory. Chapman & Hall, London. \MR1699953
 [8] Groeneboom, Piet and Jongbloed, Geurt and Witte, Birgit I. (2010). Maximum smoothed likelihood estimation and smoothed maximum likelihood estimation in the current status model. Ann. Statist.. 38 352–387. \MR2589325
 [9] Groeneboom, Piet and Jongbloed, Geurt (2014). Nonparametric estimation under shape constraints 38. Cambridge University Press, New York. \MR3445293
 [10] [Groeneboom (1992)] Groeneboom, Piet and Wellner, Jon A. (1992). Information bounds and nonparametric maximum likelihood estimation 19. Birkhäuser Verlag, Basel. \MR1180321
 [11] Groeneboom, Piet and Wellner, Jon A. (2001). Computing Chernoff’s distribution. J. Comput. Graph. Statist.. 10 388–400. \MR1939706
 [12] Hjort, Nils Lid and McKeague, Ian W. and Van Keilegom, Ingrid (2009). Extending the scope of empirical likelihood. Ann. Statist.. 37 1079–1111. \MR2509068
 [13] Johnson, Norman L. and Kotz, Samuel and Balakrishnan, N. (1995). Continuous univariate distributions. Vol. 2, 2nd ed. John Wiley & Sons, Inc., New York. \MR1326603
 [14] Keiding, Niels (1991). Agespecific incidence and prevalence: a statistical perspective. J. Roy. Statist. Soc. Ser. A. 154 371–412. \MR1144166
 [15] Kim, S. and Fay, Michael P. and Proschan, Michael A. (2018). Supplement to “Valid and Approximately Valid Confidence Intervals for Current Status Data.” DOI:to be added later.
 [16] Lancaster, H. O. (1952). Statistical control of counting experiments. Biometrika. 419–422.
 [17] Lancaster, H. O. (1961). Significance tests in discrete distributions. J. Amer. Statist. Assoc.. 56 223–234.
 [18] Proschan, Michael A. and Shaw, Pamela A. (2016). Essentials of probability theory for statisticians. CRC Press, Boca Raton, FL. \MR3468626
 [19] Sen, Bodhisattva and Xu, Gongjun (2015). Model based bootstrap methods for interval censored data. Comput. Statist. Data Anal.. 81 121–129. \MR3257405
 [20] Shaked, Moshe and Shanthikumar, George (2007). Stochastic orders. Springer Science & Business Media.
 [21] Silverman, B. W. (1986). Density estimation for statistics and data analysis 26. Chapman & Hall, London. \MR848134
 [22] Slud, Eric V. (1977). Distribution inequalities for the binomial law. Ann. Probability. 5 404–412. \MR0438420
 [23] Tang, Runlong and Banerjee, Moulinath and Kosorok, Michael R. (2012). Likelihood based inference for current status data on a grid: a boundary phenomenon and an adaptive inference procedure. Ann. Statist.. 40 45–72. \MR3013179
 [24] Whitt, Ward (2006). Stochastic ordering. Encyclopedia of statistical sciences, 2nd ed. 13 8260–8264.
Appendix S1 A Specific Form of a^{*} and b^{*}
When the lower confidence limit exceeds the upper confidence limit, we abandon using separate proportions for the lower and upper intervals. Instead, we use a single proportion based on the m/2 observations less than t and the m/2 observations greater than t. To be precise, we do the following. If G is discrete, then we define a^{*} and b^{*} to be
\begin{array}[]{l}a^{*}\equiv a^{*}(t,n,\mathbb{C})=\left\{\begin{array}[]{ll}0&\quad\text{if $C_{\lceil(m-J)/2\rceil}\geq C_{g}$}\\ C_{l-\lceil(m-J)/2\rceil+1-J}&\quad\text{if $C_{\lceil(m-J)/2\rceil}<C_{g}$ and $m>J$}\\ t&\quad\text{if $C_{\lceil(m-J)/2\rceil}<C_{g}$ and $J\geq m$};\end{array}\right.\\ $~{}$\\ b^{*}\equiv b^{*}(t,n,\mathbb{C})=\left\{\begin{array}[]{ll}\infty&\quad\text{if $C_{n-\lceil(m-J)/2\rceil+1}\leq C_{l}$}\\ C_{g+\lceil(m-J)/2\rceil-1+J}&\quad\text{if $C_{n-\lceil(m-J)/2\rceil+1}>C_{l}$ and $m>J$}\\ t&\quad\text{if $C_{n-\lceil(m-J)/2\rceil+1}>C_{l}$ and $J\geq m$}\end{array}\right.\end{array}  (S1.1) 
where J is the number of observations at t, and \lceil(mJ)/2\rceil is the smallest integer greater than or equal to (mJ)/2. If G is continuous, then we define a^{*} and b^{*} to be
\begin{array}[]{l}a^{*}\equiv a^{*}(t,n,\mathbb{C})=\left\{\begin{array}[]{ll}0&\quad\text{if $C_{\lceil m/2\rceil}>t$}\\ C_{l-\lceil m/2\rceil+1}&\quad\text{if $C_{\lceil m/2\rceil}\leq t$};\end{array}\right.\\ $~{}$\\ b^{*}\equiv b^{*}(t,n,\mathbb{C})=\left\{\begin{array}[]{ll}\infty&\quad\text{if $C_{n-\lceil m/2\rceil+1}<t$}\\ C_{g+\lceil m/2\rceil-1}&\quad\text{if $C_{n-\lceil m/2\rceil+1}\geq t$}\end{array}\right.\end{array}  (S1.2) 
where \lceil m/2\rceil is the smallest integer greater than or equal to m/2.
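For the continuous-G case of (S1.2), a^{*} and b^{*} amount to counting \lceil m/2\rceil order statistics on either side of t. The following is a minimal sketch, not the paper's own code; it assumes, as the formulas suggest, that l=\#\{C_{i}<t\} and g=l+1 index the order statistics just below and above t.

```python
import numpy as np

def ab_star_continuous(C, t, m):
    """Sketch of a* and b* from (S1.2), continuous assessment distribution G.
    Assumes l = #{C_i < t} and g = l + 1; the 1-based order statistic C_k
    is C[k-1] after sorting."""
    C = np.sort(np.asarray(C, dtype=float))
    n = len(C)
    half = int(np.ceil(m / 2))              # ceil(m/2)
    l = int(np.sum(C < t))                  # index of largest C_i below t
    g = l + 1                               # index of smallest C_i above t
    # a* = 0 when fewer than ceil(m/2) observations lie below t
    a = 0.0 if C[half - 1] > t else C[l - half]          # C_{l - ceil(m/2) + 1}
    # b* = infinity when fewer than ceil(m/2) observations lie above t
    b = np.inf if C[n - half] < t else C[g + half - 2]   # C_{g + ceil(m/2) - 1}
    return a, b
```

For example, with assessment times 1, 2, \ldots, 20, t=10.5 and m=6, the limits are the third order statistics below and above t, namely 8 and 13.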
Appendix S2 Mid-P Binomial Intervals
In general, the Clopper–Pearson interval for the binomial is conservative: the actual coverage exceeds the nominal confidence level (1-\alpha) for almost all values of the parameter, in order not to fall below (1-\alpha) for any value. One approach to eliminating this conservativeness is the mid-P method [27]. For discrete data, a valid p-value can be calculated as the probability (maximized under the null hypothesis model) of observing equally or more extreme data. The mid-P value adjusts this slightly: it is the probability of observing more extreme data plus half the probability of observing equally extreme data. The mid-P 100(1-\alpha)\% confidence limits (with 1-\alpha>.50) for a binomial parameter \theta, assuming Y\sim\text{Binomial}(n,\theta), can be found by solving the following equations for fixed y and n:
U_{mid}(1-\alpha/2;y,n)=\left\{\begin{array}[]{ll}\theta:\text{Pr}(Y<y;\theta)+(.5)\text{Pr}(Y=y;\theta)=\alpha/2&\mbox{if $y<n$};\\ 1&\mbox{if $y=n$},\end{array}\right. 
and
L_{mid}(1-\alpha/2;y,n)=\left\{\begin{array}[]{ll}\theta:\text{Pr}(Y>y;\theta)+(.5)\text{Pr}(Y=y;\theta)=\alpha/2&\mbox{if $y>0$};\\ 0&\mbox{if $y=0$}.\end{array}\right. 
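These limits have no closed form, but each defining equation is monotone in \theta, so it can be solved by one-dimensional root-finding. A minimal sketch (not the paper's own code) using scipy:

```python
from scipy.optimize import brentq
from scipy.stats import binom

def midp_ci(y, n, alpha=0.05):
    """Mid-P confidence limits for theta, with Y ~ Binomial(n, theta).
    Lower limit solves Pr(Y>y) + .5 Pr(Y=y) = alpha/2;
    upper limit solves Pr(Y<y) + .5 Pr(Y=y) = alpha/2."""
    def lower_eq(th):
        return binom.sf(y, n, th) + 0.5 * binom.pmf(y, n, th) - alpha / 2
    def upper_eq(th):
        return binom.cdf(y - 1, n, th) + 0.5 * binom.pmf(y, n, th) - alpha / 2
    lo = 0.0 if y == 0 else brentq(lower_eq, 1e-10, 1 - 1e-10)
    hi = 1.0 if y == n else brentq(upper_eq, 1e-10, 1 - 1e-10)
    return lo, hi
```

For y=5 and n=20, for example, the mid-P interval lies strictly inside the corresponding Clopper–Pearson interval, reflecting the removal of the conservativeness.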
Appendix S3 SMLE of F
[25] defined the SMLE \hat{F}^{SM}_{n}(t) of the true F(t) by
\hat{F}^{SM}_{n}(t)=\int K_{h}(t-u)d\hat{F}_{n}(u);  (S3.1) 
and the SMLE \hat{f}^{SM}_{n}(t) of the true f(t) by
\hat{f}^{SM}_{n}(t)=\int k_{h}(t-u)d\hat{F}_{n}(u),  (S3.2) 
where k is a triweight kernel that is symmetric and twice continuously differentiable on [-1,1], K(t)=\int^{t}_{-\infty}k(u)du, K_{h}(u)=K(u/h), k_{h}(u)=(1/h)k(u/h), \hat{F}_{n}(u) is the nonparametric maximum likelihood estimator (NPMLE), and h>0 is the bandwidth.
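To make (S3.1)–(S3.2) concrete: the NPMLE \hat{F}_{n} is the isotonic regression of the event indicators on the sorted assessment times, and the SMLE integrates the kernel against its jumps. The sketch below uses a Gaussian kernel (the choice made later in this supplement; [25] used the triweight) and a generic pool-adjacent-violators routine, not the paper's own code.

```python
import numpy as np
from scipy.stats import norm

def npmle_current_status(C, delta):
    """NPMLE of F: pool-adjacent-violators (isotonic regression) of the
    event indicators delta_i on the sorted assessment times C_i."""
    order = np.argsort(C)
    C = np.asarray(C, float)[order]
    delta = np.asarray(delta, float)[order]
    vals, wts = [], []
    for d in delta:
        vals.append(d); wts.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:   # pool violators
            w = wts[-2] + wts[-1]
            vals[-2:] = [(wts[-2] * vals[-2] + wts[-1] * vals[-1]) / w]
            wts[-2:] = [w]
    return C, np.repeat(vals, wts)

def smle_F(t, C, F, h):
    """SMLE (S3.1) with Gaussian kernel K_h(u) = Phi(u/h): sum of
    K_h(t - u_j) * dF_j over the jump points u_j of the NPMLE."""
    dF = np.diff(np.concatenate(([0.0], F)))
    keep = dF > 0
    return float(np.sum(norm.cdf((t - C[keep]) / h) * dF[keep]))
```

With indicators (0, 1, 0, 1) at times (1, 2, 3, 4), the NPMLE is (0, 1/2, 1/2, 1), and by symmetry smle_F(3, C, F, 1) = \Phi(1)/2+\Phi(-1)/2 = 1/2.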
Theorem 4.2 in [25] showed that for fixed t>0, the value of h minimizing the asymptotic mean squared error (aMSE) for estimating F(t) is given by h_{n,F}=c_{F}n^{-1/5}, where
c_{F}=\left[\frac{F(t)\{1-F(t)\}}{g(t)}\int k(u)^{2}du\right]^{1/5}\times\left[\left\{\int u^{2}k(u)du\right\}^{2}f^{\prime}(t)^{2}\right]^{-1/5},  (S3.3) 
and f^{\prime}(t) is the first derivative of f(t). However, the aMSE depends on the unknown distribution F, so c_{F} and h_{n,F} are unknown, and (S3.3) cannot be used directly to estimate F(t) in practice.
To overcome this problem, [25] introduced a smoothed bootstrap for F(t). They set an initial bandwidth h_{0}=c_{0}n^{-1/5} for F(t), then sampled m^{\prime} observations (m^{\prime}\leq n) from the SMLE \hat{F}_{n,h_{0}}^{SM}. They computed the estimator \hat{F}_{n,cm^{\prime\,-1/5}}^{SM}, repeated this B times (they set B=500), and estimated aMSE(c) by
\widehat{MSE}_{B}(c)=B^{-1}\sum^{B}_{i=1}\left(\hat{F}_{n,cm^{\prime\,-1/5}}^{SM,i}(t)-\hat{F}_{n,h_{0}}^{SM}(t)\right)^{2}. 
They defined \hat{c}_{F,SM} as the minimizer of \widehat{MSE}_{B}(c) and then estimated the optimal bandwidth by \hat{h}_{n,F,SM}=\hat{c}_{F,SM}n^{-1/5}.
In this paper, we instead estimate f^{\prime}(t) and then estimate F(t) and f(t) without bootstrap sampling or Monte Carlo simulation. [25] used the triweight kernel k(t)=(35/32)(1-t^{2})^{3}1_{[-1,1]}(t), but we use the Gaussian kernel for F(t) and f(t); other well-known kernels could also be used. We also estimate g(t) by kernel density estimation with the Gaussian kernel.
To estimate g(t), we use the bandwidth recommended by [28]:
\hat{h}_{n,g}=.9\,\text{min}(s,IQR/1.34)n^{-1/5} 
where s and IQR are the sample standard deviation and sample interquartile range of the C_{i} values. Then the initial estimate of F(t), \hat{F}^{\text{Initial}}(t), is obtained from (S3.1), and f^{\prime}(t) is estimated by
\hat{f}^{\prime}_{n}(t)=\int k^{\prime}_{h}(t-u)d\hat{F}_{n}(u),  (S3.4) 
with the initial bandwidth, say \hat{h}^{\text{Initial}}_{n,F}, set to \hat{h}_{n,g}, where k^{\prime}_{h}(u)=(1/h^{2})k^{\prime}(u/h) and k^{\prime}(u) is the first derivative of k(u). Then \hat{c}^{\text{New}}_{n,F} and \hat{h}^{\text{New}}_{n,F} are calculated by substituting \hat{F}^{\text{Initial}}(t), \hat{f}^{\prime}_{n}(t) and \hat{g}(t) into (S3.3). Note that if \hat{F}^{\text{Initial}}(t)=0 or 1, \hat{f}^{\prime}_{n}(t)=0, or \hat{g}(t)=0, then (S3.3) is zero or undefined. We therefore constrain these quantities so that \epsilon\leq\hat{F}^{\text{Initial}}(t)\leq(1-\epsilon), \{\hat{f}^{\prime}_{n}(t)\}^{2}\geq\epsilon, and \hat{g}(t)\geq\epsilon, where \epsilon is a small positive value. Specifically, if \hat{F}^{\text{Initial}}(t)\leq .01 we set \hat{F}^{\text{Initial}}(t)=.01; if \hat{F}^{\text{Initial}}(t)\geq .99 we set \hat{F}^{\text{Initial}}(t)=.99; if \{\hat{f}^{\prime}_{n}(t)\}^{2}\leq 10^{-3} we set \{\hat{f}^{\prime}_{n}(t)\}^{2}=10^{-3}; and if \hat{g}(t)\leq 10^{-4} we set \hat{g}(t)=10^{-4}. With \hat{h}^{\text{New}}_{n,F}, F(t) is estimated again by (S3.1), and f(t) is estimated by (S3.2).
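The plug-in step above reduces to a few lines once the kernel constants are fixed. A sketch under the Gaussian-kernel choice, for which \int k(u)^{2}du=1/(2\sqrt{\pi}) and \int u^{2}k(u)du=1, with the clipping rules applied to the inputs:

```python
import numpy as np

def plugin_bandwidth(F_hat, fprime_hat, g_hat, n):
    """Plug-in estimate of h_{n,F} = c_F * n^{-1/5} from (S3.3), with the
    clipping used to keep the constant finite and nonzero.
    Gaussian-kernel constants: int k^2 du = 1/(2 sqrt(pi)), int u^2 k du = 1."""
    F = min(max(F_hat, 0.01), 0.99)       # clip F(t) into [.01, .99]
    fp2 = max(fprime_hat ** 2, 1e-3)      # clip f'(t)^2 from below
    g = max(g_hat, 1e-4)                  # clip g(t) from below
    Rk = 1.0 / (2.0 * np.sqrt(np.pi))     # int k(u)^2 du
    mu2 = 1.0                             # int u^2 k(u) du
    c = (F * (1 - F) / g * Rk) ** 0.2 * (mu2 ** 2 * fp2) ** (-0.2)
    return c * n ** (-0.2)
```

As expected from (S3.3), the bandwidth is positive, shrinks at rate n^{-1/5}, and is unchanged by clipping when the inputs already sit at the clipping boundaries.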
We do not iterate this process until convergence, because the iteration is not guaranteed to converge to the true values. We also implemented the smoothed bootstrap of [25] with h_{0}=\hat{h}^{\text{Initial}}_{n,F} and compared it with our method. The two methods gave very similar estimates, but our method is much faster than smoothed bootstrapping, so we do not present the bootstrap results.
An additional practical adjustment was needed. Note that if \hat{F}^{SM}_{n}(t)=0 or 1, \hat{f}^{SM}_{n}(t)=0, or \hat{g}(t)=0, then m^{\dagger*}_{n} is zero or undefined. Therefore, if \hat{F}^{SM}_{n}(t)\leq .01 we set \hat{F}^{SM}_{n}(t)=.01; if \hat{F}^{SM}_{n}(t)\geq .99 we set \hat{F}^{SM}_{n}(t)=.99; if \hat{f}^{SM}_{n}(t)\leq 10^{-4} we set \hat{f}^{SM}_{n}(t)=10^{-4}; and if \hat{g}(t)\leq 10^{-4} we set \hat{g}(t)=10^{-4}.
Appendix S4 Figures
In the following pages are supplemental figures.
F(t)=\left\{\begin{array}[]{ll}t&\quad\text{for $t\leq .25$};\\ .25+(20,000)(t-.25)^{2}&\quad\text{for $.25<t\leq\left(.25+\frac{1}{200}\right)$};\\ .75+\frac{.25}{\left(.75-\frac{1}{200}\right)}\left(t-.25-\frac{1}{200}\right)&\quad\text{for $\left(.25+\frac{1}{200}\right)<t\leq 1$}.\end{array}\right. 
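As a quick arithmetic check on this piecewise distribution: at the second breakpoint .25+(20{,}000)(1/200)^{2}=.75, so the pieces meet continuously, and the linear piece reaches 1 at t=1. A direct transcription confirms this numerically:

```python
def F(t):
    """Direct transcription of the piecewise distribution function above."""
    if t <= 0.25:
        return t                                   # uniform piece
    if t <= 0.25 + 1 / 200:
        return 0.25 + 20000 * (t - 0.25) ** 2      # steep quadratic piece
    return 0.75 + (0.25 / (0.75 - 1 / 200)) * (t - 0.25 - 1 / 200)  # linear piece
```

The function is continuous at t=.25 and t=.25+1/200, and F(1)=1.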
References
 [25] Groeneboom, Piet and Jongbloed, Geurt and Witte, Birgit I. (2010). Maximum smoothed likelihood estimation and smoothed maximum likelihood estimation in the current status model. Ann. Statist.. 38 352–387. \MR2589325
 [26] Groeneboom, Piet and Jongbloed, Geurt (2014). Nonparametric estimation under shape constraints 38. Cambridge University Press, New York. \MR3445293
 [27] Lancaster, H. O. (1961). Significance tests in discrete distributions. J. Amer. Statist. Assoc.. 56 223–234.
 [28] Silverman, B. W. (1986). Density estimation for statistics and data analysis 26. Chapman & Hall, London. \MR848134