
Varying-coefficient modeling via regularized basis functions

Hidetoshi Matsui, Toshihiro Misumi and Shuichi Kawano

Nikon Systems Inc., 1-6-3, Nishiohi, Shinagawa-ku, Tokyo 140-0015, Japan
Astellas Pharma Inc., 3-17-1, Hasune, Itabashi-ku, Tokyo 174-8612, Japan
Department of Mathematical Sciences, Osaka Prefecture University, 1-1 Gakuen-cho, Sakai, Osaka 599-8531, Japan

h.matsui@nikon-sys.co.jp (H. Matsui)
toshihiro.misumi@jp.astellas.com (T. Misumi)
skawano@ms.osakafu-u.ac.jp (S. Kawano)

Abstract: We address the problem of constructing varying-coefficient models based on basis expansions together with the technique of regularization. A crucial issue in our modeling procedure is the selection of smoothing parameters in the regularization method. In order to choose the parameters objectively, we derive model selection criteria from information-theoretic and Bayesian viewpoints. We demonstrate the effectiveness of the proposed modeling strategy through Monte Carlo simulations and the analysis of a real data set.

Key Words and Phrases: Basis expansion, Model selection, Regularization, Varying-coefficient model.

## 1 Introduction

Longitudinal data are encountered in various fields, e.g., medical research and economics. In a longitudinal study, the outcome is measured repeatedly over time for each individual. Many modeling strategies, both parametric and nonparametric, have been studied for analyzing longitudinal data (see, for example, Diggle et al., 1994; Vonesh and Chinchilli, 1997).

For linear parametric modeling of longitudinal data, linear mixed-effects models are widely used (see, for example, Verbeke and Molenberghs, 2000). An advantage of the linear mixed-effects model is that it easily handles unbalanced data, which frequently arise in longitudinal studies. Meanwhile, nonparametric regression (Ruppert et al., 2003) and functional data analysis (Ramsay and Silverman, 2005) have recently come to the fore as nonparametric approaches to longitudinal data analysis.

These methods can effectively capture complex structure in longitudinal data. While both the parametric and nonparametric approaches are very useful, there are concerns about the adequacy of the model assumptions and the potential impact of model misspecification on the analysis, which arise especially in parametric models (Hoover et al., 1998). In addition, it is not unusual for covariates to change as time progresses. Nevertheless, these approaches do not necessarily examine the association between covariates and a response over time.

One of the most useful models for overcoming this problem is the varying-coefficient model (VCM). Hastie and Tibshirani (1993) proposed a smoothing spline method for estimating VCMs. The essential idea behind the VCM is that the coefficients of a regression model are represented as time-dependent functions, which enables us to study the association between time-varying covariates and the outcome. Hoover et al. (1998) presented two nonparametric estimation procedures for VCMs, smoothing splines and locally weighted polynomials, and used cross-validation to select the smoothing parameters in the smoothing spline method. However, cross-validation requires substantial computation and yields high variability, since the selector is applied repeatedly.

In this paper, we introduce a nonlinear varying-coefficient modeling strategy for continuous longitudinal data that uses a linear combination of basis functions together with a regularized likelihood estimation method. The tuning parameters included in our proposed model are the regularization parameters. In order to choose these parameters, we derive model selection criteria from information-theoretic and Bayesian viewpoints. The proposed nonlinear varying-coefficient modeling procedure is investigated through Monte Carlo simulations and the analysis of real data.

The article is organized as follows. In Sections 2 and 3, we present the varying-coefficient model based on basis expansions and its estimation by the maximum penalized likelihood method. Section 4 provides model selection criteria derived from information-theoretic and Bayesian approaches. In Section 5 we describe Monte Carlo simulations that examine the effectiveness of our modeling procedure, and in Section 6 we apply the proposed method to Multicenter AIDS Cohort Study data. Summary and discussion are given in Section 7.

## 2 Varying-coefficient models

Suppose we have $p$ predictors $x_1(t), \ldots, x_p(t)$ and a response $y(t)$ varying with time $t$, and denote the observations of the $i$-th subject at time point $t_{ij}$ ($i=1,\ldots,n$; $j=1,\ldots,n_i$) by $x_{ij1}, \ldots, x_{ijp}$ and $y_{ij}$, respectively. Then the varying-coefficient model has the form (Hoover et al., 1998)

$$
y_{ij} = \beta_0(t_{ij}) + x_{ij1}\beta_1(t_{ij}) + \cdots + x_{ijp}\beta_p(t_{ij}) + \varepsilon_{ij}, \tag{1}
$$

where $\beta_0(t), \ldots, \beta_p(t)$ are coefficient functions and $\varepsilon_{ij}$ are random noises whose vectors $\boldsymbol{\varepsilon}_i = (\varepsilon_{i1}, \ldots, \varepsilon_{i n_i})'$ are normally distributed with mean vector $\boldsymbol{0}$ and variance-covariance matrix $\Sigma_i$.

We assume that the coefficient functions are expressed by basis expansions as follows:

$$
\beta_k(t_{ij}) = \sum_{m=1}^{M_k} \gamma_{km}\, \phi_m^{(k)}(t_{ij}) = \boldsymbol{\gamma}_k' \boldsymbol{\phi}^{(k)}(t_{ij}),
$$

where $\gamma_{km}$ are parameters to be estimated and $\phi_m^{(k)}(t)$ are basis functions. There are various kinds of basis functions, such as radial basis functions (Bishop, 1995; Ando et al., 2008) and wavelets (Donoho and Johnstone, 1994; Fujii and Konishi, 2006). In this paper we apply B-spline bases. For details of B-splines, see de Boor (2001) and Imoto and Konishi (2003).
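Since linear B-splines are used later in this paper, the basis matrix can be illustrated concretely. The following minimal sketch (in Python with NumPy; the function name and knot handling are our own choices, not part of the paper) evaluates the linear B-spline ("hat" function) basis at given time points:

```python
import numpy as np

def linear_bspline_basis(t, knots):
    """Design matrix of linear B-splines ("hat" functions).

    Column m is the piecewise-linear function that equals 1 at knots[m]
    and 0 at every other knot; linear interpolation (np.interp) of an
    indicator vector over the knots gives exactly this hat function.
    """
    t = np.asarray(t, dtype=float)
    knots = np.asarray(knots, dtype=float)
    eye = np.eye(len(knots))
    return np.column_stack([np.interp(t, knots, eye[m])
                            for m in range(len(knots))])
```

Within the knot range, the rows of the resulting matrix sum to one, reflecting the partition-of-unity property of B-spline bases.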

Using the above assumption and writing $\boldsymbol{y}_i = (y_{i1}, \ldots, y_{i n_i})'$, $D_{ik} = \mathrm{diag}(x_{i1k}, \ldots, x_{i n_i k})$ (with $D_{i0} = I_{n_i}$) and $\Phi_{ik} = (\boldsymbol{\phi}^{(k)}(t_{i1}), \ldots, \boldsymbol{\phi}^{(k)}(t_{i n_i}))'$, the varying-coefficient model (1) is rewritten as

$$
\boldsymbol{y}_i = \sum_{k=0}^{p} D_{ik}\Phi_{ik}\boldsymbol{\gamma}_k + \boldsymbol{\varepsilon}_i, \qquad \boldsymbol{\varepsilon}_i \sim N_{n_i}(\boldsymbol{0}, \Sigma_i). \tag{2}
$$

Then the varying-coefficient model has the statistical model

$$
f(Y \mid \boldsymbol{\theta}) = \prod_{i=1}^{n} \frac{1}{(2\pi)^{n_i/2}\,|\Sigma_i|^{1/2}} \exp\left\{ -\frac{1}{2}\left(\boldsymbol{y}_i - \sum_{k=0}^{p} D_{ik}\Phi_{ik}\boldsymbol{\gamma}_k\right)' \Sigma_i^{-1} \left(\boldsymbol{y}_i - \sum_{k=0}^{p} D_{ik}\Phi_{ik}\boldsymbol{\gamma}_k\right) \right\}, \tag{3}
$$

where $\boldsymbol{\theta}$ is the vector of unknown parameters.

## 3 Estimation

The unknown parameters, namely the coefficient vectors $\boldsymbol{\gamma}_k$ and the variance-covariance matrices $\Sigma_i$, are estimated by the maximum penalized likelihood method, that is, by maximizing the penalized likelihood

$$
l_\lambda(\boldsymbol{\theta}) = l(\boldsymbol{\theta}) - \frac{n}{2} \sum_{k=1}^{p} \lambda_k\, \boldsymbol{\gamma}_k' \Omega_k \boldsymbol{\gamma}_k, \tag{4}
$$

where $l(\boldsymbol{\theta})$ is the log-likelihood function given by

$$
l(\boldsymbol{\theta}) = \log f(Y \mid \boldsymbol{\theta}) = -\sum_{i=1}^{n} \frac{n_i}{2}\log(2\pi) - \frac{1}{2}\sum_{i=1}^{n}\log|\Sigma_i| - \frac{1}{2}\sum_{i=1}^{n}\left(\boldsymbol{y}_i - \sum_{k=0}^{p} D_{ik}\Phi_{ik}\boldsymbol{\gamma}_k\right)' \Sigma_i^{-1} \left(\boldsymbol{y}_i - \sum_{k=0}^{p} D_{ik}\Phi_{ik}\boldsymbol{\gamma}_k\right) \tag{5}
$$

and $\Omega_k$ is a positive semi-definite matrix. Moreover, $\lambda_1, \ldots, \lambda_p$ are regularization parameters which control the strength of the regularization.

Since it is difficult to derive the parameter estimates analytically, we apply the backfitting algorithm (Hastie and Tibshirani, 1990; 1993) to maximize (4). The first derivative of $l_\lambda(\boldsymbol{\theta})$ with respect to $\boldsymbol{\gamma}_k$ is given by

$$
\begin{aligned}
\frac{\partial l_\lambda(\boldsymbol{\theta})}{\partial \boldsymbol{\gamma}_k}
&= \sum_{i=1}^{n} \left\{ \Phi_{ik}' D_{ik}' \Sigma_i^{-1} \left(\boldsymbol{y}_i - \sum_{l=0}^{p} D_{il}\Phi_{il}\boldsymbol{\gamma}_l\right) \right\} - n\lambda_k \Omega_k \boldsymbol{\gamma}_k \\
&= \sum_{i=1}^{n} \left\{ \Phi_{ik}' D_{ik}' \Sigma_i^{-1} \left(\boldsymbol{y}_i - \sum_{l \neq k} D_{il}\Phi_{il}\boldsymbol{\gamma}_l\right) \right\} - \sum_{i=1}^{n} \Phi_{ik}' D_{ik}' \Sigma_i^{-1} D_{ik}\Phi_{ik}\boldsymbol{\gamma}_k - n\lambda_k \Omega_k \boldsymbol{\gamma}_k. \tag{6}
\end{aligned}
$$

Given the coefficient parameters other than the $k$-th, the backfitting algorithm iteratively updates the $k$-th coefficient as

$$
\hat{\boldsymbol{\gamma}}_k = \left(\sum_{i=1}^{n} \Phi_{ik}' D_{ik}' \Sigma_i^{-1} D_{ik}\Phi_{ik} + n\lambda_k \Omega_k\right)^{-1} \left\{\sum_{i=1}^{n} \Phi_{ik}' D_{ik}' \Sigma_i^{-1} \left(\boldsymbol{y}_i - \sum_{l \neq k} D_{il}\Phi_{il}\hat{\boldsymbol{\gamma}}_l\right)\right\} \tag{7}
$$

for $k = 0, 1, \ldots, p$. When the variance-covariance matrices are given in the form $\Sigma_i = \sigma^2 S_i$ with an unknown parameter $\sigma^2$ and known matrices $S_i$, the parameter $\sigma^2$ is estimated by

$$
\hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^{n} \left(\boldsymbol{y}_i - \sum_{k=0}^{p} D_{ik}\Phi_{ik}\hat{\boldsymbol{\gamma}}_k\right)' S_i^{-1} \left(\boldsymbol{y}_i - \sum_{k=0}^{p} D_{ik}\Phi_{ik}\hat{\boldsymbol{\gamma}}_k\right). \tag{8}
$$
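For illustration, the backfitting update (7) and a residual-based variance estimate can be sketched in Python with NumPy. This is a minimal sketch under the assumptions $S_i = I$ and a shared basis across coefficients; the residual sum of squares is divided by the total number of observations (one common convention), and all names below are ours rather than the paper's:

```python
import numpy as np

def backfit_vcm(y_list, X_list, Phi_list, lam, Omega_list, n_iter=50):
    """Backfitting for the varying-coefficient model (sketch, S_i = I).

    y_list    : list of response vectors y_i (length n_i each)
    X_list    : list of (n_i, p+1) covariate matrices; first column all ones
    Phi_list  : list of (n_i, M) basis matrices evaluated at t_i
    lam       : length-(p+1) regularization parameters (lam[0] may be 0)
    Omega_list: length-(p+1) list of (M, M) penalty matrices
    Returns (gamma, sigma2).
    """
    n = len(y_list)
    p1 = X_list[0].shape[1]
    M = Phi_list[0].shape[1]
    gamma = [np.zeros(M) for _ in range(p1)]

    def design(i, k):  # D_ik Phi_ik = diag(x_ik) Phi_i
        return X_list[i][:, k:k + 1] * Phi_list[i]

    for _ in range(n_iter):
        for k in range(p1):
            # Ridge-type normal equations of update (7)
            A = sum(design(i, k).T @ design(i, k) for i in range(n))
            A = A + n * lam[k] * Omega_list[k]
            b = np.zeros(M)
            for i in range(n):
                partial = y_list[i] - sum(design(i, l) @ gamma[l]
                                          for l in range(p1) if l != k)
                b += design(i, k).T @ partial
            gamma[k] = np.linalg.solve(A, b)

    # Residual variance from the fully fitted model
    N = sum(len(y) for y in y_list)
    rss = sum(np.sum((y_list[i] - sum(design(i, k) @ gamma[k]
                                      for k in range(p1))) ** 2)
              for i in range(n))
    return gamma, rss / N
```

In practice the inner updates are cycled until the coefficient vectors stabilize; the ridge-type system above is the discrete analogue of (7) with $\Sigma_i^{-1}$ replaced by the identity.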

Then we have the statistical model

$$
f(Y \mid \hat{\boldsymbol{\theta}}) = \prod_{i=1}^{n} \frac{1}{(2\pi\hat{\sigma}^2)^{n_i/2}\,|S_i|^{1/2}} \exp\left\{ -\frac{1}{2\hat{\sigma}^2}\left(\boldsymbol{y}_i - \sum_{k=0}^{p} D_{ik}\Phi_{ik}\hat{\boldsymbol{\gamma}}_k\right)' S_i^{-1} \left(\boldsymbol{y}_i - \sum_{k=0}^{p} D_{ik}\Phi_{ik}\hat{\boldsymbol{\gamma}}_k\right) \right\}. \tag{9}
$$

## 4 Model selection criteria

The varying-coefficient model estimated by the maximum penalized likelihood method depends on the regularization parameters. Smaller values yield estimates that overfit the data, while larger values produce models that fail to capture the underlying structure. It is therefore important to select appropriate values. We use model selection criteria derived from information-theoretic and Bayesian approaches.

Konishi and Kitagawa (1996) derived an information criterion, the GIC, for evaluating models obtained by M-estimation, including the maximum penalized likelihood method. Using this result, the GIC for evaluating the varying-coefficient model estimated by the maximum penalized likelihood method described above is given by

$$
\mathrm{GIC} = -2\, l(\hat{\boldsymbol{\theta}}) + 2\,\mathrm{tr}\left\{ R(\hat{\boldsymbol{\theta}})^{-1} Q(\hat{\boldsymbol{\theta}}) \right\}, \tag{10}
$$

where $R(\hat{\boldsymbol{\theta}})$ and $Q(\hat{\boldsymbol{\theta}})$ are, respectively, given by

$$
R(\hat{\boldsymbol{\theta}}) = -\frac{1}{n}\sum_{i=1}^{n} \left.\frac{\partial^2 l_\lambda^{(i)}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}\,\partial\boldsymbol{\theta}'}\right|_{\boldsymbol{\theta}=\hat{\boldsymbol{\theta}}}, \qquad Q(\hat{\boldsymbol{\theta}}) = \frac{1}{n}\sum_{i=1}^{n} \left.\frac{\partial l_\lambda^{(i)}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}\, \frac{\partial l^{(i)}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}'}\right|_{\boldsymbol{\theta}=\hat{\boldsymbol{\theta}}}, \tag{11}
$$

where $l_\lambda^{(i)}(\boldsymbol{\theta}) = l^{(i)}(\boldsymbol{\theta}) - \frac{1}{2}\sum_{k=1}^{p}\lambda_k \boldsymbol{\gamma}_k'\Omega_k\boldsymbol{\gamma}_k$, with $l^{(i)}(\boldsymbol{\theta})$ the log-likelihood function of the $i$-th subject. The detailed elements of $R(\hat{\boldsymbol{\theta}})$ and $Q(\hat{\boldsymbol{\theta}})$ are given in the Appendix.

Konishi et al. (2004) extended Schwarz's BIC (Schwarz, 1978) and derived a model selection criterion, the GBIC, for evaluating models estimated by the maximum penalized likelihood method. The GBIC for evaluating the varying-coefficient model (9) is given by

$$
\begin{aligned}
\mathrm{GBIC} = {}& -2\log f(Y \mid \hat{\boldsymbol{\theta}}) + n\sum_{k=1}^{p}\lambda_k \hat{\boldsymbol{\gamma}}_k' \Omega_k \hat{\boldsymbol{\gamma}}_k - \left(\sum_{k=1}^{p} r_k + 1\right)\log(2\pi)\\
& + \left(\sum_{k=1}^{p} r_k + 1\right)\log n - \sum_{k=1}^{p}\log|\Omega_k| + \log|R(\hat{\boldsymbol{\theta}})|, \tag{12}
\end{aligned}
$$

where $r_k = M_k - \mathrm{rank}(\Omega_k)$. The derivation of (12) is given in the Appendix. We select the set of regularization parameters that minimizes either of these criteria and regard the corresponding model as the optimal model.
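The selection itself amounts to a grid search over the regularization parameters. As a simplified, hypothetical illustration (not the GIC or GBIC above, and all names are ours), the sketch below chooses $\lambda$ for a single penalized smoother $y \approx B\gamma$ by minimizing a BIC-style score in which the effective number of parameters is the trace of the hat matrix:

```python
import numpy as np

def select_lambda(y, B, Omega, lam_grid):
    """Grid search for the regularization parameter of a single penalized
    smoother (a one-term special case of the varying-coefficient setup).

    Score: n*log(RSS/n) + df*log(n), with df = tr(hat matrix), used here
    as a simplified stand-in for the GIC/GBIC of the paper.
    """
    n = len(y)
    best = None
    for lam in lam_grid:
        # Hat matrix of the ridge-type estimator (B'B + n*lam*Omega)^{-1}
        H = B @ np.linalg.solve(B.T @ B + n * lam * Omega, B.T)
        fitted = H @ y
        rss = np.sum((y - fitted) ** 2)
        df = np.trace(H)
        score = n * np.log(rss / n) + df * np.log(n)
        if best is None or score < best[0]:
            best = (score, lam)
    return best[1]
```

In the paper's setting this inner scoring step would be replaced by evaluating (10) or (12) after backfitting at each candidate value.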

## 5 Monte Carlo simulations

We conducted Monte Carlo simulations to examine the effectiveness of our modeling procedure. In the simulation study, we generated data sets $\{(y_{ij}, x_{ij1}, x_{ij2}, t_{ij});\ i=1,\ldots,n,\ j=1,\ldots,n_i\}$ as follows. First, the time points $t_{ij}$ were generated. Second, a response and two predictors were derived as follows:

$$
\begin{aligned}
y_{ij} &= f(t_{ij}) + \varepsilon_{ij}, \qquad f(t_{ij}) = x_{ij1}\beta_1(t_{ij}) + x_{ij2}\beta_2(t_{ij}),\\
x_{ij1} &= a_i\cos(\pi t_{ij}) + b_i, \qquad a_i \sim N(0, 4), \quad b_i \sim U(2, 3),\\
x_{ij2} &\in \{0, 1\},\\
\beta_1(t_{ij}) &= \sin(\pi t_{ij}), \qquad \beta_2(t_{ij}) = t_{ij},\\
\varepsilon_{ij} &\sim N(0, \sigma^2), \qquad \sigma = 0.05\left\{\max_{t\in[0,1]} f(t) - \min_{t\in[0,1]} f(t)\right\}.
\end{aligned}
$$

We considered four patterns of sample sizes $n$, and the number of repeated measurements $n_i$ was generated as an integer between 8 and 15, differing across subjects $i$.

Based on these data sets, we applied our varying-coefficient modeling procedure, where we used first-order (linear) B-splines as basis functions, took the positive semi-definite matrices $\Omega_k$ to be identity matrices, and fixed the number of basis functions. The regularization parameters in the penalized likelihood were selected by the GIC or the GBIC. In order to investigate the efficiency of the proposed modeling strategy, we compared the GIC and the GBIC with five-fold cross-validation (CV), one of the most widely used methods of smoothing parameter selection. We repeated the procedure 1000 times, and then computed the average of the mean squared errors (AMSE) between the predicted and true values over the 1000 trials.
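For concreteness, one way to generate a data set following this design is sketched below (Python/NumPy). The uniform time points and the per-subject binary $x_{ij2}$ are our assumptions for the sketch, since the paper does not fully specify them here, and the noise scale uses the observed range of $f$ as a stand-in for the range over $[0,1]$:

```python
import numpy as np

def generate_vcm_data(n, rng):
    """Generate one synthetic data set following the simulation design:
    y_ij = x_ij1 * sin(pi * t_ij) + x_ij2 * t_ij + eps_ij.
    """
    data = []
    for i in range(n):
        ni = int(rng.integers(8, 16))        # 8..15 repeated measurements
        t = np.sort(rng.uniform(0, 1, ni))   # time points (assumed uniform)
        a = rng.normal(0, 2)                 # a_i ~ N(0, 4), i.e. sd = 2
        b = rng.uniform(2, 3)                # b_i ~ U(2, 3)
        x1 = a * np.cos(np.pi * t) + b
        x2 = np.full(ni, rng.integers(0, 2)) # binary covariate (per subject)
        f = x1 * np.sin(np.pi * t) + x2 * t
        sigma = 0.05 * (f.max() - f.min())   # noise sd from the range of f
        y = f + rng.normal(0, sigma, ni)
        data.append((t, x1, x2, y))
    return data
```

Repeating this generator 1000 times and refitting the model on each replicate reproduces the structure of the Monte Carlo study described above.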

Table 1 shows the results of the simulation studies and the means of the selected regularization parameters over the 1000 trials. It may be seen from the table that the models evaluated by the GIC or the GBIC are competitive with or superior to those selected by CV with respect to minimizing the AMSE. In particular, when the sample size is small, the proposed methods appear to outperform CV.

## 6 Real data example

We applied the proposed modeling strategy to the analysis of the Multicenter AIDS Cohort Study data in order to capture the fluctuation of the percentage of CD4 cells in the blood of humans infected with the human immunodeficiency virus (HIV). The data set contains the cigarette smoking status, age at HIV infection, pre-infection CD4 cell percentage and post-infection CD4 cell percentage of each subject, observed at distinct time points after HIV infection. Fan and Zhang (2000) analyzed these data using a functional version of ANOVA models, and Huang et al. (2004) applied time-varying-coefficient models evaluated via cross-validation.

We represent the relationship among the variables described above by the time-varying-coefficient model

$$
Y_i(t) = \beta_0(t) + X_{i1}\beta_1(t) + X_{i2}\beta_2(t) + X_{i3}\beta_3(t) + \varepsilon_i(t),
$$

where $X_{i1}$, $X_{i2}$ and $X_{i3}$ represent the centered cigarette smoking status, age at HIV infection and pre-infection CD4 percentage of the $i$-th subject, respectively, $Y_i(t)$ denotes the CD4 percentage of the $i$-th subject observed at differing time points, $\beta_0(t), \ldots, \beta_3(t)$ are time-varying coefficients and $\varepsilon_i(t)$ is the error function. The model was fitted by the maximum penalized likelihood method with linear B-splines, and then evaluated by the GBIC derived in the previous section. We generated 100 bootstrap samples from the data and obtained estimates of the coefficient functions for each sample.

Figure 1 shows the result of applying the varying-coefficient modeling. Solid lines are the mean coefficient functions over the 100 bootstrap samples, and dashed lines are pointwise 95% confidence intervals. The results suggest the following. (1) The CD4 percentage has a decreasing trend over time, especially early on. (2) Pre-infection CD4 has a positive influence on the CD4 cell percentage, but this influence gradually weakens with time. (3) Age and smoking status have less influence on the CD4 percentage. These results are quite similar to those of Huang et al. (2004). In addition, we note that the linear splines enable us to understand the fluctuation of the coefficient functions more clearly.

## 7 Concluding Remarks

In this article, we have developed a varying-coefficient model based on a basis expansion approach estimated by the maximum penalized likelihood method. In order to choose the values of the regularization parameters, we have introduced model selection criteria derived from information-theoretic and Bayesian viewpoints. We have applied the proposed method to synthetic examples and to the Multicenter AIDS Cohort Study data, and the results demonstrate the effectiveness of our modeling strategy. Owing to the stability and predictive performance of the constructed models, our varying-coefficient modeling strategy has the potential to be useful in a variety of practical applications.

In future work, we will extend our model to discrete responses in longitudinal data by using generalized linear models.

## Appendix

### A.1 Derivation of the GBIC

We consider the prior density of $\boldsymbol{\theta}$ given by

$$
\pi(\boldsymbol{\theta} \mid \lambda_1, \ldots, \lambda_p) = \prod_{k=1}^{p} (2\pi)^{-\frac{M_k - r_k}{2}} (n\lambda_k)^{\frac{M_k - r_k}{2}} |\Omega_k|^{1/2} \exp\left\{-\frac{n}{2}\lambda_k\, \boldsymbol{\gamma}_k' \Omega_k \boldsymbol{\gamma}_k\right\}, \tag{13}
$$

Then the marginal likelihood function of $Y$, given the regularization parameters $\lambda_1, \ldots, \lambda_p$, is given by

$$
\begin{aligned}
p(Y \mid \lambda_1, \ldots, \lambda_p) &= \int f(Y \mid \boldsymbol{\theta})\, \pi(\boldsymbol{\theta} \mid \lambda_1, \ldots, \lambda_p)\, d\boldsymbol{\theta}\\
&= \int \exp\left[ n \times \frac{1}{n}\log\left\{ f(Y \mid \boldsymbol{\theta})\, \pi(\boldsymbol{\theta} \mid \lambda_1, \ldots, \lambda_p) \right\} \right] d\boldsymbol{\theta}\\
&\approx (2\pi)^{d/2}\, n^{-d/2}\, |R(\hat{\boldsymbol{\theta}})|^{-1/2} \exp\left[ n \times \frac{1}{n}\log\left\{ f(Y \mid \hat{\boldsymbol{\theta}})\, \pi(\hat{\boldsymbol{\theta}} \mid \lambda_1, \ldots, \lambda_p) \right\} \right],
\end{aligned}
$$

where $d = \dim(\boldsymbol{\theta})$, and the Laplace approximation is applied. Taking minus twice the logarithm of the marginal likelihood, we have

$$
\begin{aligned}
-2\log p(Y \mid \lambda_1, \ldots, \lambda_p) \approx {}& -2\log f(Y \mid \hat{\boldsymbol{\theta}}) + n\sum_{k=1}^{p}\lambda_k \hat{\boldsymbol{\gamma}}_k' \Omega_k \hat{\boldsymbol{\gamma}}_k - \left(\sum_{k=1}^{p} r_k + 1\right)\log(2\pi)\\
& + \left(\sum_{k=1}^{p} r_k + 1\right)\log n - \sum_{k=1}^{p}\log|\Omega_k| + \log|R(\hat{\boldsymbol{\theta}})|.
\end{aligned}
$$

### A.2 Elements of matrices of the GIC

The elements of the matrices in the GIC, defined in (11), are given as follows:

$$
\begin{aligned}
\sum_{i=1}^{n} \left.\frac{\partial^2 l_\lambda^{(i)}(\boldsymbol{\theta})}{\partial\boldsymbol{\gamma}_k\,\partial\boldsymbol{\gamma}_l'}\right|_{\hat{\boldsymbol{\theta}}} &= \begin{cases} -\displaystyle\sum_{i=1}^{n} \Phi_{ik}' D_{ik}'\, \hat{\sigma}^{-2} S_i^{-1} D_{ik}\Phi_{ik} - n\lambda_k\Omega_k, & (k = l)\\[1.5ex] -\displaystyle\sum_{i=1}^{n} \Phi_{ik}' D_{ik}'\, \hat{\sigma}^{-2} S_i^{-1} D_{il}\Phi_{il}, & (k \neq l) \end{cases}\\[1ex]
\sum_{i=1}^{n} \left.\frac{\partial^2 l_\lambda^{(i)}(\boldsymbol{\theta})}{\partial\boldsymbol{\gamma}_k\,\partial\sigma^2}\right|_{\hat{\boldsymbol{\theta}}} &= -\frac{1}{\hat{\sigma}^4}\sum_{i=1}^{n} \Phi_{ik}' D_{ik}' S_i^{-1} \left(\boldsymbol{y}_i - \sum_{r=0}^{p} D_{ir}\Phi_{ir}\hat{\boldsymbol{\gamma}}_r\right),\\[1ex]
\sum_{i=1}^{n} \left.\frac{\partial^2 l_\lambda^{(i)}(\boldsymbol{\theta})}{\partial\sigma^2\,\partial\boldsymbol{\gamma}_k'}\right|_{\hat{\boldsymbol{\theta}}} &= \left(\sum_{i=1}^{n} \left.\frac{\partial^2 l_\lambda^{(i)}(\boldsymbol{\theta})}{\partial\boldsymbol{\gamma}_k\,\partial\sigma^2}\right|_{\hat{\boldsymbol{\theta}}}\right)',\\[1ex]
\sum_{i=1}^{n} \left.\frac{\partial^2 l_\lambda^{(i)}(\boldsymbol{\theta})}{\partial\sigma^2\,\partial\sigma^2}\right|_{\hat{\boldsymbol{\theta}}} &= -\frac{n}{2\hat{\sigma}^4},
\end{aligned}
$$

$$
\begin{aligned}
\sum_{i=1}^{n} \left.\frac{\partial l_\lambda^{(i)}(\boldsymbol{\theta})}{\partial\boldsymbol{\gamma}_k}\, \frac{\partial l^{(i)}(\boldsymbol{\theta})}{\partial\boldsymbol{\gamma}_l'}\right|_{\hat{\boldsymbol{\theta}}} &= \frac{1}{\hat{\sigma}^4}\sum_{i=1}^{n} \Phi_{ik}' D_{ik}' S_i^{-1} \hat{\Lambda}_i^2 S_i^{-1} D_{il}\Phi_{il} - \frac{1}{\hat{\sigma}^2}\lambda_k\Omega_k\hat{\boldsymbol{\gamma}}_k \sum_{i=1}^{n} \boldsymbol{1}_{n_i}' \hat{\Lambda}_i S_i^{-1} D_{il}\Phi_{il},\\[1ex]
\sum_{i=1}^{n} \left.\frac{\partial l_\lambda^{(i)}(\boldsymbol{\theta})}{\partial\boldsymbol{\gamma}_k}\, \frac{\partial l^{(i)}(\boldsymbol{\theta})}{\partial\sigma^2}\right|_{\hat{\boldsymbol{\theta}}} &= -\frac{1}{2\hat{\sigma}^4}\sum_{i=1}^{n} \Phi_{ik}' D_{ik}' S_i^{-1} \hat{\Lambda}_i \boldsymbol{1}_{n_i} + \frac{1}{2\hat{\sigma}^6}\sum_{i=1}^{n} \Phi_{ik}' D_{ik}' S_i^{-1} \hat{\Lambda}_i^3 S_i^{-1} \boldsymbol{1}_{n_i},\\[1ex]
\sum_{i=1}^{n} \left.\frac{\partial l_\lambda^{(i)}(\boldsymbol{\theta})}{\partial\sigma^2}\, \frac{\partial l^{(i)}(\boldsymbol{\theta})}{\partial\boldsymbol{\gamma}_k'}\right|_{\hat{\boldsymbol{\theta}}} &= \left(\sum_{i=1}^{n} \left.\frac{\partial l_\lambda^{(i)}(\boldsymbol{\theta})}{\partial\boldsymbol{\gamma}_k}\, \frac{\partial l^{(i)}(\boldsymbol{\theta})}{\partial\sigma^2}\right|_{\hat{\boldsymbol{\theta}}}\right)',\\[1ex]
\sum_{i=1}^{n} \left.\frac{\partial l_\lambda^{(i)}(\boldsymbol{\theta})}{\partial\sigma^2}\, \frac{\partial l^{(i)}(\boldsymbol{\theta})}{\partial\sigma^2}\right|_{\hat{\boldsymbol{\theta}}} &= \frac{1}{4\hat{\sigma}^8}\sum_{i=1}^{n} \boldsymbol{1}_{n_i}' S_i^{-1} \hat{\Lambda}_i^4 S_i^{-1} \boldsymbol{1}_{n_i} - \frac{n}{4\hat{\sigma}^4},
\end{aligned}
$$

where $\boldsymbol{1}_{n_i}$ is an $n_i$-dimensional vector of ones and $\hat{\Lambda}_i = \mathrm{diag}\bigl(\boldsymbol{y}_i - \sum_{k=0}^{p} D_{ik}\Phi_{ik}\hat{\boldsymbol{\gamma}}_k\bigr)$.

## References

• [1] Ando, T., Konishi, S. and Imoto, S. (2008). Nonlinear regression modeling via regularized radial basis function networks. J. Statist. Plan. Infer., 3616–3633.
• [2] Bishop, C. M. (1995). Neural Networks for Pattern Recognition. Oxford University Press, Oxford.
• [3] de Boor, C. (2001). A Practical Guide to Splines (Rev. ed.). Springer, New York.
• [4] Diggle, P. J., Liang, K. Y. and Zeger, S. L. (1994). Analysis of Longitudinal Data. Oxford University Press, Oxford.
• [5] Donoho, D. L. and Johnstone, I. M. (1994). Ideal spatial adaptation by wavelet shrinkage. Biometrika, 425–455.
• [6] Fan, J. and Zhang, J. T. (2000). Two-step estimation of functional linear models with applications to longitudinal data. J. Roy. Statist. Soc. Ser. B, 303–322.
• [7] Fujii, T. and Konishi, S. (2006). Nonlinear regression modeling via regularized wavelets and smoothing parameter selection. J. Multivariate Anal., 2023–2033.
• [8] Hastie, T. and Tibshirani, R. (1990). Generalized Additive Models. Chapman & Hall, London.
• [9] Hastie, T. and Tibshirani, R. (1993). Varying-coefficient models. J. Roy. Statist. Soc. Ser. B, 757–796.
• [10] Hoover, D. R., Rice, J. A., Wu, C. O. and Yang, L. P. (1998). Nonparametric smoothing estimates of time-varying coefficient models for longitudinal data. Biometrika, 809–822.
• [11] Huang, J. Z., Wu, C. O. and Zhou, L. (2004). Polynomial spline estimation and inference for varying coefficient models with longitudinal data. Statist. Sinica, 763–788.
• [12] Imoto, S. and Konishi, S. (2003). Selection of smoothing parameters in B-spline nonparametric regression models using information criteria. Ann. Inst. Statist. Math. 55, 671–687.
• [13] Konishi, S., Ando, T. and Imoto, S. (2004). Bayesian information criteria and smoothing parameter selection in radial basis function networks. Biometrika 91, 27–43.
• [14] Konishi, S. and Kitagawa, G. (1996). Generalised information criteria in model selection. Biometrika 83, 875–890.
• [15] Konishi, S. and Kitagawa, G. (2008). Information Criteria and Statistical Modeling. Springer, New York.
• [16] Ramsay, J. O. and Silverman, B. W. (2005). Functional Data Analysis (2nd ed.). Springer, New York.
• [17] Ruppert, D., Wand, M. P. and Carroll, R. J. (2003). Semiparametric Regression. Cambridge University Press, Cambridge.
• [18] Schwarz, G. (1978). Estimating the dimension of a model. Ann. Statist., 461–464.
• [19] Verbeke, G. and Molenberghs, G. (2000). Linear Mixed Models for Longitudinal Data. Springer, New York.
• [20] Vonesh, E. F. and Chinchilli, V. M. (1997). Linear and Nonlinear Models for the Analysis of Repeated Measurements. Marcel Dekker, New York.