
Copula based generalized additive models with non-random sample selection

Abstract

Non-random sample selection is commonplace in many empirical studies and it appears when an output variable of interest is available only for a restricted, non-random sub-sample of data. We introduce an extension of the generalized additive model which accounts for non-random sample selection by using a selection equation. The proposed approach allows for different distributions of the outcome variable, various dependence structures between the (outcome and selection) equations through the use of copulae, and nonparametric effects on the responses. Parameter estimation is carried out within a penalized likelihood and simultaneous equation framework. We establish asymptotic theory for the proposed penalized spline estimators, which extends recent theoretical results for penalized splines in generalized additive models, such as those of Kauermann et al. (2009) and Yoshida & Naito (2014). The empirical effectiveness of the approach is demonstrated through a simulation study.

Key Words: copula, generalized additive model, non-random sample selection, penalized regression spline, selection bias, simultaneous equation estimation.

1 Introduction

Non-random sample selection arises when an output variable of interest is available only for a restricted, non-random sub-sample of the data. This often occurs in sociological, medical and economic studies where individuals systematically select themselves into (or out of) the sample (see, e.g., Guo & Fraser (2014), chapter 4, Lennox et al. (2012), Vella (1998), Collier & Mahoney (1996) and references therein). If the aim is to model an outcome of interest in the entire population, and the link between its availability in the sub-sample and its observed values is through factors which cannot be accounted for, then any analysis based on the available sub-sample alone will yield erroneous model structures and biased conclusions, as the resulting inference may not extend to the unobserved group of observations. Sample selection models allow us to use the entire sample whether or not observations on the output variable were generated. In its classical form, such a model consists of two equations which model the probability of inclusion in the sample and the outcome variable through a set of available predictors, and of a joint bivariate distribution linking the two equations.

The sample selection model was first introduced by Gronau (1974), Lewis (1974) and Heckman (1974) in the context of estimating the number of working hours and wage rates of married women, some of whom did not work. Heckman (1976) formulated a unified approach to estimating this model as a simultaneous equation system. In the classical version, the error terms of the two equations are assumed to follow a bivariate normal distribution in which non-zero correlation indicates the presence of non-random sample selection. Heckman (1979) then translated the issue of sample selection into an omitted variable problem and proposed a simple and easy-to-implement estimation method known as the two-step procedure. However, the method was shown to rely strongly on the assumption of normality and has thus been criticized for its lack of robustness to distributional misspecification and outliers (e.g., Paarsch, 1984; Little & Rubin, 1987; Zuehlke & Zeman, 1991; Zhelonkin, 2013).

Various modifications and generalizations of the classical sample selection model have been proposed in the literature and we mention some of them. A non-parametric approach, which lifts the normality assumption, can be found in Das et al. (2003). Here Heckman's two-step method is adopted and the linear regression with Heckman's selection correction is replaced with series expansions. Non-parametric methods are also considered in Lee (2008) and Chen & Zhou (2010). Semi-parametric techniques in a two-step scenario, similar to that in Das et al. (2003), are given in Ahn & Powell (1993), Newey (1999) and Powell (1994). Other semi-parametric approaches can be found in Gallant & Nychka (1987), Powell et al. (1989), Lee (1994a), Lee (1994b), Andrews & Schafgans (1998) and Newey (2009). In the Bayesian framework, Chib et al. (2009) deal with non-linear covariate effects using Markov chain Monte Carlo simulation techniques and simultaneous equation systems. Wiesenfarth & Kneib (2010) further extend this approach by introducing a Bayesian algorithm based on low rank penalized B-splines for non-linear and varying-coefficient effects and Markov random-field priors for spatial effects. A frequentist counterpart of these Bayesian methods is discussed in Marra & Radice (2013b) in the context of binary outcomes and in Marra & Radice (2013a) for the continuous outcome case. Zhelonkin et al. (2012) introduce a procedure for robustifying Heckman's two-stage estimator by using M-estimators of Mallows' type for both stages. Marchenko & Genton (2012) and Ding (2014) consider a bivariate Student-t distribution for the model's errors as a way of tackling heavy-tailed data. Several authors have proposed specific copula functions to model the joint distribution of the selection and outcome equations; see, e.g., Lee (1983) and Prieger (2002), who employs an FGM bivariate copula. In the context of non-random sample selection, a more general copula approach, with a focus on Archimedean copulae, can be found in Smith (2003). This stream of research is continued in Hasebe & Vijverberg (2012) and Schwiebert (2013). As emphasized by Genius & Strazzera (2008), copulae allow for the use of non-Gaussian distributions and have the additional benefit of making it possible to specify the marginal distributions independently of the dependence structure linking them. Importantly, while the copula approach is still fully parametric, it is typically computationally more feasible than non- and semi-parametric approaches and it still allows the user to assess the sensitivity of results to different modeling assumptions.

The aim of this paper is to introduce a generalized additive model (GAM) which accounts for non-random sample selection. Thus, the classical GAM is extended by introducing an extra equation which models the selection process. The selection and outcome equations are linked by a joint probability distribution which is expressed in terms of a copula. Using different copulae allows us to capture different types of dependence while keeping the marginal distributions of the responses fixed. This approach is flexible and numerically tractable at the same time. Moreover, we take a step towards greater flexibility in modeling the relationship between predictors and responses by using a semiparametric approach in place of commonly used parametric formulae, thus capturing possibly complex non-linear relationships. We use penalized regression splines to approximate these relationships and employ a penalized likelihood and simultaneous equation estimation framework as presented in Marra & Radice (2013a), for instance. Penalized splines were introduced by O'Sullivan (1986) and Eilers & Marx (1996), and practical aspects of their use have received much attention ever since; see Ruppert et al. (2003) and Wood (2006). At the same time, the asymptotic theory for penalized spline estimators is relatively new and mainly focuses on models with one continuous predictor (e.g., Hall & Opsomer, 2005; Claeskens et al., 2009; Wang et al., 2011). In the GAM context, the asymptotic normality of the penalized spline estimator has been proved by Kauermann et al. (2009) and recently extended by Yoshida & Naito (2014) to the case of any fixed number of predictors. This paper discusses the asymptotic properties of the penalized spline estimator of the proposed model. The asymptotic rate of the mean squared error of the linear predictor for the regression equation of interest is derived, which also allows us to obtain its asymptotic bias and variance as well as its approximate distribution. The theoretical results are obtained for the general case of any fixed number of predictors and when the spline basis dimension increases with the sample size. We show that even though the model structure is more complex than that of a classical GAM, the asymptotic properties of penalized spline estimators remain valid. Thus, the results extend the theoretical foundation of GAMs established so far.

The paper is organized as follows. Section 2 briefly describes the classical sample selection model as well as Heckman’s estimation procedure. Section 3 introduces a generalized sample selection model based on GAMs. Section 4 succinctly describes the estimation approach which is based on a penalized likelihood and simultaneous equation framework. In Section 5, the asymptotic properties of the proposed penalized spline estimator are established. The finite-sample performance of the approach is investigated in Section 6. Section 7 discusses some extensions.

2 Classical sample selection model

The classical sample selection model introduced by Heckman (1974) and Heckman (1979) is defined by the following system of equations

$$y_{1i}^{*} = \mathbf{x}_{1i}\boldsymbol{\beta}_{1} + \varepsilon_{1i}, \qquad (1)$$
$$y_{2i}^{*} = \mathbf{x}_{2i}\boldsymbol{\beta}_{2} + \varepsilon_{2i}, \qquad (2)$$
$$y_{1i} = \mathbb{1}\{y_{1i}^{*} > 0\}, \qquad (3)$$
$$y_{2i} = y_{2i}^{*}\, y_{1i}, \qquad (4)$$

for $i = 1, \dots, n$, where $y_{1i}^{*}$ and $y_{2i}^{*}$ are unobserved latent variables and only the values of $y_{1i}$ and $y_{2i}$ are observed. The symbol $\mathbb{1}\{\cdot\}$ denotes throughout an indicator function. Equation (1) is the so-called selection equation determining whether the observation of $y_{2i}^{*}$ is generated, and equation (2) is the output equation of primary interest. The row vectors $\mathbf{x}_{1i}$ and $\mathbf{x}_{2i}$ contain predictors' values and are observed for the entire sample, whereas $\boldsymbol{\beta}_{1}$ and $\boldsymbol{\beta}_{2}$ are unknown parameter vectors. The classical sample selection model assumes that the error terms follow the bivariate normal distribution

$$\begin{pmatrix} \varepsilon_{1i} \\ \varepsilon_{2i} \end{pmatrix} \sim N\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & \rho\sigma \\ \rho\sigma & \sigma^{2} \end{pmatrix} \right)$$

for $i = 1, \dots, n$. The variance of $\varepsilon_{1i}$ is assumed to be equal to 1 for the usual identification reason. The normality of $\varepsilon_{1i}$ implies that the selection equation represents a probit regression expressed in terms of a latent variable. The assumption of bivariate normality of error terms has been commonly used since. It implies that the log-likelihood of model (1)-(4) can be expressed as (cf. Amemiya (1985), eq. (10.7.3) therein)

$$\ell(\boldsymbol{\beta}_{1}, \boldsymbol{\beta}_{2}, \sigma, \rho) = \sum_{i:\, y_{1i} = 0} \log \Phi(-\mathbf{x}_{1i}\boldsymbol{\beta}_{1}) + \sum_{i:\, y_{1i} = 1} \left[ \log \Phi\!\left( \frac{\mathbf{x}_{1i}\boldsymbol{\beta}_{1} + \rho\,(y_{2i} - \mathbf{x}_{2i}\boldsymbol{\beta}_{2})/\sigma}{\sqrt{1 - \rho^{2}}} \right) + \log \phi\!\left( \frac{y_{2i} - \mathbf{x}_{2i}\boldsymbol{\beta}_{2}}{\sigma} \right) - \log \sigma \right],$$

where $\Phi$ and $\phi$ denote the standard normal distribution function and density, respectively. Under the model assumptions, maximization of the above function, referred to as full-information maximum likelihood, results in consistent estimators of the parameters $\boldsymbol{\beta}_{1}$, $\boldsymbol{\beta}_{2}$, $\sigma$ and $\rho$, but until recently this was computationally cumbersome.
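To fix ideas, the following is a minimal sketch of this full-information objective in Python. It is an illustration under the stated assumptions rather than a reference implementation: the function and argument names are ours, and the tanh/exp reparametrizations are just one convenient way to keep $\rho$ and $\sigma$ in their admissible ranges.

```python
import numpy as np
from scipy.stats import norm

def heckman_loglik(params, y1, y2, X1, X2):
    """Full-information log-likelihood of the classical selection model.

    y1: 0/1 selection indicator; y2: outcome (used where y1 == 1);
    X1, X2: design matrices of the selection and outcome equations.
    """
    k1 = X1.shape[1]
    b1, b2 = params[:k1], params[k1:-2]
    rho = np.tanh(params[-2])    # keeps rho inside (-1, 1)
    sigma = np.exp(params[-1])   # keeps sigma positive
    xb1, xb2 = X1 @ b1, X2 @ b2
    sel = y1 == 1
    # Non-selected observations contribute P(y1i* <= 0) = Phi(-x1i' b1).
    ll = norm.logcdf(-xb1[~sel]).sum()
    # Selected observations: conditional probit part plus outcome density.
    z = (y2[sel] - xb2[sel]) / sigma
    a = (xb1[sel] + rho * z) / np.sqrt(1.0 - rho**2)
    ll += (norm.logcdf(a) + norm.logpdf(z) - np.log(sigma)).sum()
    return ll
```

Maximizing this function with a numerical optimizer yields the full-information maximum likelihood estimates.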

As a consequence, Heckman (1979) proposed a two-step procedure, also known as the limited-information maximum likelihood method. It is based on the fact that, under the assumption of normality of the error terms, the following equality holds

$$E(\varepsilon_{2i} \mid \varepsilon_{1i} > -\mathbf{x}_{1i}\boldsymbol{\beta}_{1}) = \rho\sigma\,\lambda(\mathbf{x}_{1i}\boldsymbol{\beta}_{1}),$$

where $\lambda(u) = \phi(u)/\Phi(u)$ is the so-called inverse Mills ratio. Thus the conditional expectation of the outcome variable given that its value is observed equals

$$E(y_{2i} \mid \mathbf{x}_{2i},\, y_{1i} = 1) = \mathbf{x}_{2i}\boldsymbol{\beta}_{2} + \rho\sigma\,\lambda(\mathbf{x}_{1i}\boldsymbol{\beta}_{1}).$$

In the first step of Heckman's procedure, probit model (1) is fitted in order to obtain an estimator $\hat{\boldsymbol{\beta}}_{1}$ of $\boldsymbol{\beta}_{1}$ and hence an estimator $\hat{\lambda}_{i}$ of $\lambda(\mathbf{x}_{1i}\boldsymbol{\beta}_{1})$. In the second step the following regression equation is estimated through the ordinary least squares method

$$y_{2i} = \mathbf{x}_{2i}\boldsymbol{\beta}_{2} + \beta_{\lambda}\hat{\lambda}_{i} + \eta_{i},$$

using the selected sub-sample only, where $\hat{\lambda}_{i}$ is an added variable and $\beta_{\lambda} = \rho\sigma$ is the corresponding unknown parameter. After obtaining the estimators $\hat{\boldsymbol{\beta}}_{2}$ and $\hat{\beta}_{\lambda}$, the parameter $\sigma^{2}$ can be estimated as

$$\hat{\sigma}^{2} = \frac{1}{n_{1}} \sum_{i \in S} \left\{ \hat{\eta}_{i}^{2} + \hat{\beta}_{\lambda}^{2}\, \hat{\lambda}_{i} \big( \hat{\lambda}_{i} + \mathbf{x}_{1i}\hat{\boldsymbol{\beta}}_{1} \big) \right\},$$

where $n_{1}$ is the size of the selected sub-sample, i.e. the cardinality of $S = \{ i : y_{1i} = 1 \}$ (see, for instance, Toomet & Henningsen (2008)). Thus the estimator of the correlation coefficient is defined as $\hat{\rho} = \hat{\beta}_{\lambda} / \hat{\sigma}$.
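For illustration, a compact sketch of the two-step procedure is given below, assuming NumPy arrays as inputs and using statsmodels for the probit and least squares fits; all names are ours, not the authors' code.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def heckman_two_step(y1, y2, X1, X2):
    # Step 1: probit for the selection indicator.
    probit = sm.Probit(y1, X1).fit(disp=0)
    xb1 = X1 @ probit.params
    imr = norm.pdf(xb1) / norm.cdf(xb1)   # inverse Mills ratio
    # Step 2: OLS on the selected sub-sample with the IMR added.
    sel = y1 == 1
    Z = np.column_stack([X2[sel], imr[sel]])
    ols = sm.OLS(y2[sel], Z).fit()
    b2, b_lam = ols.params[:-1], ols.params[-1]
    # Variance and correlation estimates, following the formulae above.
    delta = imr[sel] * (imr[sel] + xb1[sel])
    sigma2 = (ols.resid ** 2).sum() / sel.sum() + b_lam ** 2 * delta.mean()
    rho = b_lam / np.sqrt(sigma2)
    return b2, b_lam, np.sqrt(sigma2), rho
```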

Although the inverse Mills ratio is known to be nonlinear, in practice it often turns out to be approximately linear over most of the relevant range of its argument (a quick numerical check is sketched below). This may lead to identification issues if both equations include the same set of predictors (Puhani, 2000, p. 57). Thus, an exclusion restriction, which requires the availability of at least one predictor that is related to the selection process but has no direct effect on the outcome, has to be used in empirical applications. As mentioned in the introduction, Heckman's model received a number of criticisms due to its lack of robustness to departures from the model assumptions (see, e.g., Paarsch, 1984; Little & Rubin, 1987; Zuehlke & Zeman, 1991; Zhelonkin, 2013, and references therein).
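The near-linearity can be checked numerically as follows; the chosen index range is illustrative.

```python
import numpy as np
from scipy.stats import norm

u = np.linspace(-1.5, 1.5, 200)           # illustrative probit index range
imr = norm.pdf(u) / norm.cdf(u)           # inverse Mills ratio lambda(u)
slope, intercept = np.polyfit(u, imr, 1)  # best linear approximation
max_dev = np.abs(imr - (slope * u + intercept)).max()
print(f"max deviation: {max_dev:.3f}; range of ratio: {np.ptp(imr):.3f}")
```

The deviation from the fitted line is small relative to the overall range of the ratio, which is precisely why an exclusion restriction is needed to separate the selection correction from the regression part.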

3 Generalized additive sample selection model

We structure the proposed sample selection model in the following way. We first assume that the outcome variable of interest can be described by a generalized additive model (Hastie & Tibshirani, 1990). Then, to take the selection process into account we extend the model by adding a selection equation, which is also in the form of a generalized additive model. Finally, the two model equations are linked using a bivariate copula.

3.1 Random component

Let $F_{1}$ and $F_{2}$ denote the cumulative distribution functions of the latent selection variable $Y_{1}^{*}$ and of the output variable of interest $Y_{2}$, respectively. Analogously, $f_{1}$ and $f_{2}$ denote the density functions of $Y_{1}^{*}$ and $Y_{2}$. We assume that $Y_{2}$ has a density that belongs to an exponential family of distributions, i.e. it is of the form

$$f_{2}(y_{2}) = \exp\left\{ \frac{y_{2}\theta - b(\theta)}{\psi} + c(y_{2}, \psi) \right\} \qquad (5)$$

for some specific functions $b(\cdot)$ and $c(\cdot)$, where $\theta$ is the natural parameter and $\psi$ is the scale parameter. For simplicity, we assume that $\psi = 1$, so that

$$f_{2}(y_{2}) = \exp\{ y_{2}\theta - b(\theta) + c(y_{2}) \}. \qquad (6)$$

It holds that $E(Y_{2}) = b'(\theta)$ and $\mathrm{Var}(Y_{2}) = b''(\theta)$, where $b'$ and $b''$ are the first and second derivatives of the function $b$, respectively (van der Vaart, 2000, p. 38). For example, for the Poisson distribution $b(\theta) = e^{\theta}$, so that $E(Y_{2}) = e^{\theta} = \mathrm{Var}(Y_{2})$.

Moreover, we assume that the latent selection variable $Y_{1}^{*}$ follows a normal distribution with mean $\mu_{1}$ and variance equal to 1,

$$Y_{1}^{*} \sim N(\mu_{1}, 1). \qquad (7)$$

The unit-variance assumption is needed for the usual identification purpose. Then the observables are defined as before, i.e. $Y_{1} = \mathbb{1}\{Y_{1}^{*} > 0\}$ and $Y_{2}$ is generated only if $Y_{1} = 1$,

implying the probit regression model for the selection variable $Y_{1}$.

We specify the dependence structure between the two variables by taking advantage of Sklar's theorem (Sklar, 1959). It states that for any two random variables there exists a two-place function, called a copula, which represents the joint cumulative distribution function of the pair in a manner that makes a clear distinction between the marginal distributions and the form of dependence between them. Thus a copula is a function that binds together the margins in order to form the joint cdf of the pair $(Y_{1}^{*}, Y_{2})$. An exhaustive introduction to copula theory can be found in Nelsen (2006), Joe (1997) and Schweizer (1991). As in our model we would like to be able to specify the marginal distributions of $Y_{1}^{*}$ and $Y_{2}$, the use of copulae is a convenient approach which allows us to achieve modeling flexibility. Copulae are often parametrized by a so-called association parameter which, as it varies, leads to families of copulae with different strengths of dependence. Thus, we use the symbol $C_{\alpha}$ throughout to denote a copula parametrized by $\alpha$. Let $C_{\alpha}$ be the copula such that the joint cdf of $(Y_{1}^{*}, Y_{2})$ is equal to

$$F(y_{1}^{*}, y_{2}) = C_{\alpha}\big( F_{1}(y_{1}^{*}), F_{2}(y_{2}) \big). \qquad (8)$$

The function $C_{\alpha}$ always exists and is unique for every point in the support of the joint distribution $F$. Then the joint density of $(Y_{1}^{*}, Y_{2})$ takes the form:

$$f(y_{1}^{*}, y_{2}) = c_{\alpha}\big( F_{1}(y_{1}^{*}), F_{2}(y_{2}) \big)\, f_{1}(y_{1}^{*})\, f_{2}(y_{2}),$$

where $c_{\alpha}(u, v) = \partial^{2} C_{\alpha}(u, v) / \partial u \, \partial v$ denotes the copula density.

The log-likelihood function for the sample selection model defined in this way can be obtained by conditioning with respect to the value of the selection variable (cf. Smith (2003), p. 108). If $y_{1} = 0$ then the likelihood takes the simple form of $P(Y_{1} = 0)$, which is equivalent to $F_{1}(0)$. Otherwise, the likelihood can be expressed, using the multiplication rule, as $f(y_{2} \mid Y_{1} = 1)\, P(Y_{1} = 1)$, where $f(\cdot \mid Y_{1} = 1)$ denotes the conditional probability density function of $Y_{2}$ given $Y_{1} = 1$. After substituting the conditional density by $P(Y_{1} = 1 \mid Y_{2} = y_{2})\, f_{2}(y_{2}) / P(Y_{1} = 1)$, we obtain the following log-likelihood

$$\ell = (1 - y_{1}) \log F_{1}(0) + y_{1} \big[ \log P(Y_{1} = 1 \mid Y_{2} = y_{2}) + \log f_{2}(y_{2}) \big],$$

from which, using (8), we obtain

$$\ell = (1 - y_{1}) \log F_{1}(0) + y_{1} \big[ \log\big( 1 - h(y_{2}) \big) + \log f_{2}(y_{2}) \big],$$

where

$$h(y_{2}) = \frac{\partial C_{\alpha}(u, v)}{\partial v} \bigg|_{u = F_{1}(0),\, v = F_{2}(y_{2})}.$$

The function $h$ can also be expressed as

$$h(y_{2}) = P(Y_{1}^{*} \le 0 \mid Y_{2} = y_{2}).$$

Thus it is directly related to the conditional distribution of the selection variable for the unobserved data.

Using (6), the log-likelihood can be written as

$$\ell = (1 - y_{1}) \log F_{1}(0) + y_{1} \big[ \log\big( 1 - h(y_{2}) \big) + y_{2}\theta - b(\theta) + c(y_{2}) \big]. \qquad (9)$$

Differentiating (9) with respect to the natural parameter $\theta$ implies

$$\frac{\partial \ell}{\partial \theta} = y_{1} \big( y_{2} - b'(\theta) \big) - y_{1}\, \frac{h'_{\theta}(y_{2})}{1 - h(y_{2})},$$

where $h'_{\theta}(y_{2}) = \partial h(y_{2}) / \partial \theta$. Note that the first term in the expression above is equal to the score for a classical generalized linear model, i.e. when the sample selection does not appear and thus $y_{1}$ always equals 1 and $h \equiv 0$. The second term corrects the score for sample selection bias. Interestingly, using the fact that the expected value of the score equals zero, we obtain that the expected value of the second term of the score equals minus the covariance between $Y_{1}$ and $Y_{2}$, i.e.

$$E\left[ -\, Y_{1}\, \frac{h'_{\theta}(Y_{2})}{1 - h(Y_{2})} \right] = -\, \mathrm{cov}(Y_{1}, Y_{2}),$$

which provides another interesting interpretation of the function $h$.
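As an illustration of the likelihood just derived, the sketch below evaluates an objective of the form (9) for one concrete specification: a Clayton copula combined with a gamma outcome margin (echoing Figure 1). It uses the generic log-density of the margin rather than the canonical form (6), and all names are ours.

```python
import numpy as np
from scipy.stats import norm, gamma

def clayton_dC_dv(u, v, theta):
    # h(y2): partial derivative of the Clayton copula C(u, v) w.r.t. v,
    # i.e. the conditional probability P(U <= u | V = v).
    return v ** (-theta - 1) * (u ** -theta + v ** -theta - 1) ** (-1 / theta - 1)

def sample_selection_loglik(y1, y2, mu1, shape, rate, theta):
    """y1: selection indicator; y2: outcome (used where y1 == 1);
    mu1: vector of probit indices of the selection equation."""
    sel = y1 == 1
    u = norm.cdf(-mu1)                   # F1(0) = P(Y1* <= 0)
    ll = np.log(u[~sel]).sum()           # non-selected contributions
    v = gamma.cdf(y2[sel], a=shape, scale=1 / rate)
    h = clayton_dC_dv(u[sel], v, theta)  # selection correction term
    ll += (np.log1p(-h)
           + gamma.logpdf(y2[sel], a=shape, scale=1 / rate)).sum()
    return ll
```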

Archimedean copulae

Although any copula can be used to link the two model equations (6) and (7), the class of Archimedean copulae is particularly attractive for fitting the model in practice. This is because it provides many useful distributions that possess the advantages of analytical simplicity and dimension reduction, as they are generated by a function $\varphi$ of only one argument such that

$$C(u, v) = \varphi^{-1}\big( \varphi(u) + \varphi(v) \big).$$

The function $\varphi$ is called a generator function and is assumed to be continuous, strictly decreasing and convex, and to satisfy the condition $\varphi(1) = 0$. Table 1 lists several popular bivariate Archimedean copulae, whereas Figure 1 shows the contour plots of bivariate densities for the normal and three Archimedean copulae (Clayton, Joe and Frank).
For Archimedean copulae we have

$$\frac{\partial C(u, v)}{\partial v} = \frac{\varphi'(v)}{\varphi'\big( C(u, v) \big)},$$

where $u = F_{1}(0)$ and $v = F_{2}(y_{2})$. Thus

$$h(y_{2}) = \frac{\varphi'\big( F_{2}(y_{2}) \big)}{\varphi'\Big( \varphi^{-1}\big( \varphi(F_{1}(0)) + \varphi(F_{2}(y_{2})) \big) \Big)}.$$

Hence the selection bias correction term in the log-likelihood is equal to

$$\log\big( 1 - h(y_{2}) \big) = \log\left( 1 - \frac{\varphi'\big( F_{2}(y_{2}) \big)}{\varphi'\big( C(F_{1}(0), F_{2}(y_{2})) \big)} \right).$$

Figure 1: Contour plots of bivariate densities for different copulae. The margins follow a standard normal distribution and a gamma distribution with shape 2 and rate 1. The value of the dependence parameter is set to provide the same Kendall's $\tau$ for all four distributions.
Name | Parameter space | Generator $\varphi(t)$
Clayton | $\theta \in (0, \infty)$ | $\frac{1}{\theta}\big( t^{-\theta} - 1 \big)$
Joe | $\theta \in [1, \infty)$ | $-\log\big( 1 - (1 - t)^{\theta} \big)$
Frank | $\theta \in \mathbb{R} \setminus \{0\}$ | $-\log\big( (e^{-\theta t} - 1) / (e^{-\theta} - 1) \big)$
Gumbel | $\theta \in [1, \infty)$ | $(-\log t)^{\theta}$
AMH | $\theta \in [-1, 1)$ | $\log\big( (1 - \theta(1 - t)) / t \big)$

Table 1: Families of bivariate Archimedean copulae, with the corresponding range of the association parameter $\theta$ and generator $\varphi$.
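The next small sketch shows how, for the Clayton generator of Table 1, the copula $C(u, v) = \varphi^{-1}(\varphi(u) + \varphi(v))$ and its $v$-derivative $\varphi'(v)/\varphi'(C(u, v))$ can be computed directly from $\varphi$; the names and parameter value are illustrative.

```python
import numpy as np

theta = 2.0
phi = lambda t: (t ** -theta - 1) / theta            # Clayton generator
phi_inv = lambda s: (1 + theta * s) ** (-1 / theta)  # its inverse
dphi = lambda t: -t ** (-theta - 1)                  # phi'(t)

def C(u, v):
    # Archimedean construction: C(u, v) = phi^{-1}(phi(u) + phi(v)).
    return phi_inv(phi(u) + phi(v))

def dC_dv(u, v):
    # Conditional cdf used in the h(y2) term of the likelihood.
    return dphi(v) / dphi(C(u, v))

print(C(0.3, 0.7), dC_dv(0.3, 0.7))
```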

3.2 Systematic component

The terms $\mu_{1}$ and $\theta$ are assumed to depend on sets of predictors $\mathbf{x}_{1}$ and $\mathbf{x}_{2}$, respectively, so that $\mu_{1} = \mu_{1}(\mathbf{x}_{1})$ and $\theta = \theta(\mathbf{x}_{2})$, where $\mathbf{x}_{1} = (x_{11}, \dots, x_{1 d_{1}})$ and $\mathbf{x}_{2} = (x_{21}, \dots, x_{2 d_{2}})$. Moreover, we assume the following additive form of the functions $\mu_{1}$ and $\theta$:

$$\mu_{1}(\mathbf{x}_{1}) = \sum_{j=1}^{d_{1}} s_{1j}(x_{1j})$$

and

$$\theta(\mathbf{x}_{2}) = \sum_{j=1}^{d_{2}} s_{2j}(x_{2j}).$$

The functions $s_{1j}$ and $s_{2j}$ are unknown. We use spline basis functions to represent the unknown smooth functions. Specifically, we consider B-splines (De Boor, 2001), which have attractive numerical and theoretical properties.

B-spline approximation

On the interval $[0, 1]$ define a sequence of knots $0 = \kappa_{0} < \kappa_{1} < \dots < \kappa_{K+1} = 1$ and another $2q$ knots $\kappa_{-q} = \dots = \kappa_{-1} = 0$ and $1 = \kappa_{K+2} = \dots = \kappa_{K+q+1}$. Then B-spline basis functions of degree $q$ are defined recursively as

$$B_{j}^{[q]}(x) = \frac{x - \kappa_{j}}{\kappa_{j+q} - \kappa_{j}}\, B_{j}^{[q-1]}(x) + \frac{\kappa_{j+q+1} - x}{\kappa_{j+q+1} - \kappa_{j+1}}\, B_{j+1}^{[q-1]}(x)$$

for $j = -q, \dots, K$, with

$$B_{j}^{[0]}(x) = \mathbb{1}\{ \kappa_{j} \le x < \kappa_{j+1} \}.$$

This gives $K + q + 1$ basis functions $B_{-q}^{[q]}, \dots, B_{K}^{[q]}$.
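For illustration, such a basis with equidistant interior knots on $[0, 1]$ can be evaluated with SciPy as follows; the helper name and knot choices are ours.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_design(x, n_interior, q=3):
    """Evaluate the n_interior + q + 1 B-spline basis functions at x."""
    interior = np.linspace(0, 1, n_interior + 2)[1:-1]
    # Repeat each boundary knot q + 1 times, as in the construction above.
    knots = np.concatenate([np.zeros(q + 1), interior, np.ones(q + 1)])
    n_basis = len(knots) - q - 1
    B = np.empty((len(x), n_basis))
    for j in range(n_basis):
        coef = np.zeros(n_basis)
        coef[j] = 1.0
        B[:, j] = BSpline(knots, coef, q, extrapolate=False)(x)
    return np.nan_to_num(B)   # zeros outside [0, 1]

print(bspline_design(np.linspace(0, 1, 5), n_interior=8).shape)  # (5, 12)
```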

We approximate $s_{1j}$ by a linear combination of basis functions:

$$s_{1j}(x_{1j}) \approx \sum_{k=-q}^{K} \beta_{1jk}\, B_{k}^{[q]}(x_{1j})$$

and, similarly, $s_{2j}$ by

$$s_{2j}(x_{2j}) \approx \sum_{k=-q}^{K} \beta_{2jk}\, B_{k}^{[q]}(x_{2j}).$$

In the remainder of the paper we omit the degree superscript of the basis functions and we use the symbols $B_{1}, \dots, B_{K+q+1}$ to denote the consecutive B-spline basis functions. We define vectors and as

and

and vectors

and

Moreover, we use the following notation throughout. For a given , assume that are independent random variables related to predictors’ values , …, and , …, for such that and , where has density (7) with and is distributed according to (6) with .

Let , denote the distribution functions of and and let be the joint cdf of the pair . Moreover, for let be a matrix defined through

and for let be a matrix defined as

Then let and equal

and

With the B-spline approximation, we postulate the parametric model

(10)

where and denote the -th rows of the matrices and , respectively.

4 Some estimation details

In order to estimate the parameters $\boldsymbol{\beta}_{1}$, $\boldsymbol{\beta}_{2}$ and $\alpha$, we employ a penalized likelihood approach which is common for regression spline models. Based on (9) and (10), the log-likelihood given a random sample equals

$$\ell(\boldsymbol{\delta}) = \sum_{i=1}^{n} \Big\{ (1 - y_{1i}) \log \Phi(-\mu_{1i}) + y_{1i} \big[ \log\big( 1 - h(y_{2i}) \big) + y_{2i}\theta_{i} - b(\theta_{i}) + c(y_{2i}) \big] \Big\}, \qquad (11)$$

where $\mu_{1i}$ and $\theta_{i}$ are parametrized as in (10) and $\boldsymbol{\delta}$ collects all model coefficients. The normality of $Y_{1i}^{*}$ implies that $F_{1}(0) = \Phi(-\mu_{1i})$ for the $i$-th observation, where $\Phi$ denotes the standard normal distribution function. The penalized log-likelihood equals

$$\ell_{p}(\boldsymbol{\delta}) = \ell(\boldsymbol{\delta}) - \frac{1}{2}\lambda_{1}\, \boldsymbol{\beta}_{1}^{\top} D_{1}^{\top} D_{1}\, \boldsymbol{\beta}_{1} - \frac{1}{2}\lambda_{2}\, \boldsymbol{\beta}_{2}^{\top} D_{2}^{\top} D_{2}\, \boldsymbol{\beta}_{2}, \qquad (12)$$

where $D_{v}$ is the $r$-th difference matrix (Marx and Eilers, 1998), $v = 1, 2$, and $\lambda_{1}$ and $\lambda_{2}$ are smoothing parameters controlling the trade-off between smoothness and goodness of fit.

For given smoothing parameters, we seek to maximize (12). In practice, an iterative procedure based on a trust region approach can be used to achieve this. This method is generally more stable and faster than its line-search counterparts (such as Newton-Raphson), particularly for functions that are, for example, non-concave and/or exhibit regions that are close to flat; see Nocedal & Wright (2006, Chapter 4) for full details. Such functions occur relatively frequently in bivariate models, often leading to convergence failures (Andrews, 1999; Butler, 1996; Chiburis et al., 2012). Some details related to the iterative maximization of the penalized likelihood via a trust region algorithm are presented in subsection A.1 of Appendix A.
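A minimal sketch of this step is given below: it assembles a block-diagonal difference penalty and hands the penalized objective to an off-the-shelf trust region method. It assumes generic `loglik` and `grad` callables and is an illustration, not the algorithm of subsection A.1.

```python
import numpy as np
from scipy.optimize import minimize, BFGS

def difference_penalty(K, order=2):
    # S = D'D for the order-th difference matrix D (Marx & Eilers style).
    D = np.diff(np.eye(K), n=order, axis=0)
    return D.T @ D

def fit(delta0, loglik, grad, lam1, lam2, K1, K2):
    # One penalty block per equation's spline coefficients.
    P = np.zeros((K1 + K2, K1 + K2))
    P[:K1, :K1] = lam1 * difference_penalty(K1)
    P[K1:, K1:] = lam2 * difference_penalty(K2)
    neg_pen = lambda d: -(loglik(d) - 0.5 * d @ P @ d)
    neg_grad = lambda d: -(grad(d) - P @ d)
    # 'trust-constr' is one of SciPy's trust region implementations.
    return minimize(neg_pen, delta0, jac=neg_grad, hess=BFGS(),
                    method='trust-constr')
```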

Data-driven and automatic smoothing parameter estimation is pivotal for practical modeling, especially when the data are partly censored, as in our case, and each model equation contains more than one smooth component. An automatic approach allows us to determine the shape of the smooth functions from the data, hence avoiding arbitrary decisions by the researcher as to the relevant functional form for continuous variables. For single-equation spline models, there are a number of methods for automatically estimating smoothing parameters within a penalized likelihood framework; see Ruppert et al. (2003) and Wood (2006) for excellent detailed overviews. In our context, we propose to use the smoothing approach based on the Un-Biased Risk Estimator (UBRE), as detailed in subsection A.2 of Appendix A.
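To make the role of the criterion concrete, here is a deliberately simple grid-search sketch built around a UBRE-type score; the approach of subsection A.2 operates within the iterative estimation scheme rather than by brute force, and `ubre` as well as `fit` (from the previous sketch) are placeholders.

```python
import itertools
import numpy as np

def select_smoothing(delta0, loglik, grad, ubre, K1, K2, grid=None):
    # Pick (lam1, lam2) minimizing a UBRE-type criterion over a grid;
    # `fit` is the penalized-likelihood routine from the previous sketch.
    if grid is None:
        grid = np.logspace(-4, 4, 9)
    best_score, best_lams = np.inf, None
    for lam1, lam2 in itertools.product(grid, grid):
        res = fit(delta0, loglik, grad, lam1, lam2, K1, K2)
        score = ubre(res.x, lam1, lam2)
        if score < best_score:
            best_score, best_lams = score, (lam1, lam2)
    return best_lams
```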

5 Asymptotic theory

In this section, the asymptotic consistency of the linear predictor based on the penalized maximum likelihood estimators for the regression equation of interest is shown. In particular, the asymptotic rate of its mean squared error is derived. The theoretical considerations also allow us to derive its asymptotic bias and variance and its approximate distribution.

First, we introduce the notation that we use throughout. Denote and let

where

and

Analogously, let

Proceeding to the Hessian matrix, let us denote

Moreover, let

Analogously, let

and so on, and .

Let

Let $\boldsymbol{\delta}^{*}$ denote a parameter vector that satisfies the condition . Then $\boldsymbol{\delta}^{*}$ is the maximizer of the expected unpenalized log-likelihood and provides the best approximation of the true model in terms of the Kullback-Leibler measure, as it minimizes the Kullback-Leibler distance. We adopt the following assumptions:

A1.

All partial derivatives up to order 3 of the copula function w.r.t. $u$, $v$ and $\alpha$ exist and are bounded.

A2.

The function $h$ is bounded away from 1.

A3.

where .

A4.

The explanatory variables $\mathbf{x}_{1}$ and $\mathbf{x}_{2}$ are distributed on the unit cubes $[0, 1]^{d_{1}}$ and $[0, 1]^{d_{2}}$, respectively.

A5.

The knots of the B-spline basis are equidistantly located, so that $\kappa_{j+1} - \kappa_{j} = (K + 1)^{-1}$ for $j = 0, \dots, K$, and the dimension of the spline basis satisfies .

A6.

is such that .

Theorem. Under assumptions (A1)-(A6) the estimate has asymptotic expansion

which implies

Before we proceed to the proof of the theorem, we derive analytic formulae for the gradient and Hessian of the penalized log-likelihood, as their properties will play a central role in the asymptotic considerations.

Gradient of the penalized likelihood

Straightforward calculations yield