Linearized Binary Regression

Andrew S. Lan, Mung Chiang, and Christoph Studer
AL and MC were supported in part by the US National Science Foundation (NSF) under grant CNS-1347234. CS was supported in part by Xilinx Inc. and by the US NSF under grants ECCS-1408006, CCF-1535897, CAREER CCF-1652065, and CNS-1717559.
A. S. Lan is with Princeton University, Princeton, NJ; andrew.lan@princeton.edu. M. Chiang is with Purdue University, West Lafayette, IN; chiang@purdue.edu. C. Studer is with Cornell University, Ithaca, NY; studer@cornell.edu.
Abstract

Probit regression was first proposed by Bliss in 1934 to study mortality rates of insects. Since then, an extensive body of work has analyzed and used probit or related binary regression methods (such as logistic regression) in numerous applications and fields. This paper provides a fresh perspective on such well-established binary regression methods. Concretely, we demonstrate that linearizing the probit model in combination with linear estimators performs on par with state-of-the-art nonlinear regression methods, such as posterior mean or maximum a-posteriori estimation, for a broad range of real-world regression problems. We derive exact, closed-form, and nonasymptotic expressions for the mean-squared error of our linearized estimators, which clearly separates them from nonlinear regression methods that are typically difficult to analyze. We showcase the efficacy of our methods and results for a number of synthetic and real-world datasets, which demonstrates that linearized binary regression finds potential use in a variety of inference, estimation, signal processing, and machine learning applications that deal with binary-valued observations or measurements.

I Introduction

This paper deals with the estimation of the $N$-dimensional vector $\boldsymbol\beta \in \mathbb{R}^N$ from the following measurement model:

  $\mathbf{y} = \mathrm{sign}(\mathbf{X}\boldsymbol\beta + \mathbf{n})$.   (1)

Here, the vector $\mathbf{y} \in \{-1,+1\}^M$ contains the binary-valued measurements, the function $\mathrm{sign}(\cdot)$ operates element-wise on its argument and outputs $+1$ for nonnegative arguments and $-1$ otherwise, and $\mathbf{X} \in \mathbb{R}^{M \times N}$ is a given design matrix (or matrix of covariates). The noise vector $\mathbf{n} \in \mathbb{R}^M$ has i.i.d. random entries. Estimation of the vector $\boldsymbol\beta$ from the observation model in (1) is known as binary regression. The two most common types of binary regression are (i) probit regression [1], for which the noise vector follows a standard normal distribution, and (ii) logistic regression [2], for which the noise vector follows a logistic distribution with unit scale parameter.
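To make the model concrete, the following minimal NumPy sketch generates synthetic data according to (1); the problem size and the standard normal design are illustrative choices, not requirements of the model.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 500, 10                       # illustrative problem size
X = rng.standard_normal((M, N))      # design matrix (matrix of covariates)
beta = rng.standard_normal(N)        # unknown parameter vector
n = rng.standard_normal(M)           # probit noise: i.i.d. standard normal
y = np.sign(X @ beta + n)            # binary-valued measurements in {-1, +1}
y[y == 0] = 1                        # treat sign(0) as +1
```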

Binary regression finds widespread use in a broad range of applications and fields, including (but not limited to) image classification [3], biomedical data analysis [4, 5], economics [6], and signal processing [7, 8]. In most real-world applications, one can use either probit or logistic regression, since the noise distribution is unknown; in this paper, we focus on probit regression for reasons that we will detail in Section II-A. In what follows, we will assume that the noise vector has i.i.d. standard normal entries, and refer to (1) as the standard probit model.

I-A Relevant Prior Art

I-A1 Estimators

The two most common estimation techniques for the standard probit model in (1) are the posterior mean (PM) and maximum a-posteriori (MAP) estimators. The PM estimator computes the following conditional expectation [9]:

  $\hat{\boldsymbol\beta}^{\text{PM}} = \mathbb{E}[\boldsymbol\beta \mid \mathbf{y}] = \int_{\mathbb{R}^N} \boldsymbol\beta\, p(\boldsymbol\beta \mid \mathbf{y})\, \mathrm{d}\boldsymbol\beta$,   (2)

where $p(\boldsymbol\beta \mid \mathbf{y})$ is the posterior probability of the vector $\boldsymbol\beta$ given the observations $\mathbf{y}$ under the model (1). The PM estimator is optimal in the sense that it minimizes the mean-squared error (MSE), defined as

  $\mathsf{MSE}(\hat{\boldsymbol\beta}) = \mathbb{E}\big[\|\boldsymbol\beta - \hat{\boldsymbol\beta}\|_2^2\big]$,   (3)

and is hence also known as the nonlinear minimum mean-squared error (MMSE) estimator. Evaluating the integral in (2) for the probit model is difficult, and one therefore typically resorts to rather slow Monte-Carlo methods [10]. By assuming that the vector $\boldsymbol\beta$ is multivariate Gaussian, an alternative regression technique is the MAP estimator, which solves the following convex optimization problem [11]:

  $\hat{\boldsymbol\beta}^{\text{MAP}} = \arg\min_{\boldsymbol\beta \in \mathbb{R}^N} \; -\sum_{i=1}^{M} \log \Phi\big(y_i\,\mathbf{x}_i^T\boldsymbol\beta\big) + \tfrac{1}{2}\,\boldsymbol\beta^T \mathbf{C}_{\boldsymbol\beta}^{-1}\boldsymbol\beta$.   (4)

Here, $\Phi(\cdot)$ is the cumulative distribution function of a standard normal random variable, $\mathbf{x}_i^T$ is the $i$th row of the covariate matrix $\mathbf{X}$, and $\mathbf{C}_{\boldsymbol\beta}$ is the covariance matrix of the zero-mean multivariate Gaussian prior on the vector $\boldsymbol\beta$. By ignoring the prior on $\boldsymbol\beta$, one arrives at the well-known maximum-likelihood (ML) estimator. Compared to the PM estimator, MAP and ML estimation can be implemented efficiently, either by solving a series of re-weighted least-squares problems [12] or by using standard numerical methods for convex problems that scale favorably to large problem sizes [13, 14]. In contrast to such well-established nonlinear estimators, we will investigate linear estimators that are computationally efficient and whose performance is on par with that of the PM, MAP, and ML estimators.
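As an illustration of (4), the following sketch computes a MAP estimate with SciPy under the standard probit likelihood and a zero-mean Gaussian prior; the function name and the choice of the L-BFGS solver are ours, not necessarily the algorithms used elsewhere in this paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def map_probit(X, y, C_beta):
    """MAP estimate for y = sign(X beta + n) with an N(0, C_beta) prior, cf. (4)."""
    C_inv = np.linalg.inv(C_beta)

    def neg_log_posterior(beta):
        z = y * (X @ beta)
        # norm.logcdf(z) = log Phi(y_i x_i^T beta); numerically stable for large |z|
        return -norm.logcdf(z).sum() + 0.5 * beta @ C_inv @ beta

    res = minimize(neg_log_posterior, np.zeros(X.shape[1]), method="L-BFGS-B")
    return res.x
```

For instance, `beta_map = map_probit(X, y, np.eye(N))` regularizes with an identity prior covariance.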

I-A2 Analytical Results

Analytical results that characterize the performance of estimation under the probit model are almost exclusively for the asymptotic setting, i.e., when $M$ and/or $N$ tend to infinity. More specifically, Brillinger [15] showed in 1982 that the conventional least-squares (LS) estimator, for scenarios in which the design matrix $\mathbf{X}$ has i.i.d. Gaussian entries, delivers an estimate that is the same as that of the PM estimator up to a constant. More recently, Brillinger's result has been generalized by Thrampoulidis et al. [16] to the sparse setting, i.e., where the vector $\boldsymbol\beta$ has only a few nonzero entries. Other related results analyze the consistency of the ML estimator for sparse logistic regression. These results are either asymptotic [17, 8, 18] or of probabilistic nature [19]; the latter type of result bounds the MSE with high probability. In contrast to all such existing analytical results, we will provide nonasymptotic and exact expressions for the MSE that are valid for arbitrary and deterministic design matrices $\mathbf{X}$.
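A version of Brillinger's observation is easy to reproduce numerically. The sketch below (our illustration, with arbitrary sizes) fits ordinary least squares to sign measurements taken with an i.i.d. Gaussian design and shows that, for large $M$, the LS estimate is close to a scaled version of the true parameter vector.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 20000, 5                            # large M mimics the asymptotic regime
beta = rng.standard_normal(N)
X = rng.standard_normal((M, N))            # i.i.d. Gaussian design matrix
y = np.sign(X @ beta + rng.standard_normal(M))

beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]
scale = (beta_ls @ beta) / (beta @ beta)   # best scalar aligning the LS estimate with beta
print(np.round(beta_ls / scale - beta, 2)) # entries are close to zero for large M
```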

I-B Contributions

We propose novel linear estimators of the form $\hat{\boldsymbol\beta} = \mathbf{W}\mathbf{y}$ for the probit model in (1), where $\mathbf{W} \in \mathbb{R}^{N \times M}$ is a suitably chosen estimation matrix, and provide exact, closed-form, and nonasymptotic expressions for the MSE of these estimators. Specifically, we will develop two estimators: a linear minimum mean-squared error (L-MMSE) estimator that aims at minimizing the MSE in (3), and a more efficient but less accurate least-squares (LS) estimator. Our MSE results are in stark contrast to existing performance guarantees for the MAP or PM estimators, for which a nonasymptotic performance analysis is, in general, difficult. We provide inference results on synthetic data, which suggest that the inference quality of the proposed linear estimators is on par with that of state-of-the-art nonlinear estimators, especially at low signal-to-noise ratio (SNR), i.e., when the quantization error is lower than the noise level. Moreover, we show using six different real-world binary regression datasets that the proposed linear estimators achieve predictive performance competitive with PM and MAP estimation at comparable or even lower complexity.

II Linearized Probit Regression

To develop and analyze linearized inference methods for the standard probit model in (1), we will first consider the following smoothed version of the probit model:

  $\mathbf{y} = \mathrm{sig}_\lambda(\mathbf{X}\boldsymbol\beta + \mathbf{n})$.   (5)

We will then use these results to study the binary model (1). Here, the noise vector $\mathbf{n}$ is zero-mean Gaussian with known covariance $\mathbf{C}_{\mathbf{n}}$, the sigmoid function $\mathrm{sig}_\lambda(\cdot)$ operates element-wise on its argument, $\lambda > 0$ is a smoothing parameter, and the vector $\boldsymbol\beta$ is assumed to be zero-mean Gaussian with known covariance $\mathbf{C}_{\boldsymbol\beta}$ and independent of $\mathbf{n}$. (We emphasize that these are standard model assumptions in Bayesian data analysis (see, e.g., [20]) and in numerous real-world applications, such as modeling user responses to test items [21].) As $\lambda \to 0$, the sigmoid function converges to the sign function and hence, the model in (5) includes the probit model in (1) as a special case. In what follows, we assume nondegenerate covariance matrices for $\boldsymbol\beta$ and $\mathbf{n}$, i.e., we assume that $\mathbf{C}_{\boldsymbol\beta}$ and $\mathbf{C}_{\mathbf{n}}$ are both invertible. We next introduce two new linear estimators for this model and then provide exact, closed-form, and nonasymptotic expressions for the associated MSEs.
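As one concrete choice of such a smoothing family (an assumption made purely for illustration; the precise definition of $\mathrm{sig}_\lambda$ may differ), a scaled hyperbolic tangent maps its argument into $(-1,1)$ and approaches the sign function as $\lambda \to 0$:

```python
import numpy as np

def sig(z, lam):
    # Assumed smoothing family for illustration only: tanh(z / lambda)
    # converges element-wise to sign(z) as lambda -> 0.
    return np.tanh(z / lam)

z = np.array([-2.0, -0.1, 0.05, 1.5])
for lam in (1.0, 0.1, 0.01):
    print(lam, np.round(sig(z, lam), 3))   # for lam = 0.01 the output is essentially sign(z)
```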

II-A Linear Minimum Mean-Squared Error Estimator

Our main result is as follows.

Theorem 1.

The linear minimum mean-squared error (L-MMSE) estimate for the generalized probit model in (5) is

  $\hat{\boldsymbol\beta}^{\text{L-MMSE}} = \mathbf{C}_{\boldsymbol\beta}\mathbf{X}^T\mathbf{A}^T\mathbf{C}_{\mathbf{y}}^{-1}\mathbf{y}$,   (6)

where the linearization matrix $\mathbf{A}$ and the covariance matrix $\mathbf{C}_{\mathbf{y}}$ of the measurements $\mathbf{y}$ are given by the closed-form expressions in (7) and (8), respectively, which are derived in the proof below.

Remark 1.

The reason that we focus on probit regression is that under the standard probit model, the matrices $\mathbf{A}$ and $\mathbf{C}_{\mathbf{y}}$ admit closed-form expressions; for logistic regression, such closed-form expressions do not exist.

Proof.

The proof consists of two steps. First, we linearize the model in (5). Then, we derive the L-MMSE estimate in (6) for the linearized model. The two steps are as follows.

Step 1 (Linearization): Let $\mathbf{z} = \mathbf{X}\boldsymbol\beta + \mathbf{n}$ and

  $\mathbf{y} = \mathbf{A}\mathbf{z} + \mathbf{e}$   (9)

be a linearization of the generalized probit model in (5), where $\mathbf{A} \in \mathbb{R}^{M \times M}$ is a linearization matrix and $\mathbf{e}$ is a residual error vector that contains noise and linearization artifacts. Our goal is to perform a Bussgang-like decomposition [22], which uses the linearization matrix $\mathbf{A}$ that minimizes the $\ell_2$-norm of the residual error vector $\mathbf{e}$ averaged over the signal and noise. Concretely, let $\mathbf{C}_{\mathbf{z}} = \mathbf{X}\mathbf{C}_{\boldsymbol\beta}\mathbf{X}^T + \mathbf{C}_{\mathbf{n}}$ be the covariance matrix of the vector $\mathbf{z}$ and consider the optimization problem

  $\mathbf{A} = \arg\min_{\tilde{\mathbf{A}} \in \mathbb{R}^{M \times M}} \mathbb{E}\big[\|\mathbf{y} - \tilde{\mathbf{A}}\mathbf{z}\|_2^2\big]$,

which has the closed-form solution $\mathbf{A} = \mathbf{C}_{\mathbf{y}\mathbf{z}}\mathbf{C}_{\mathbf{z}}^{-1}$ with $\mathbf{C}_{\mathbf{y}\mathbf{z}} = \mathbb{E}[\mathbf{y}\mathbf{z}^T]$. It can easily be verified that for this particular choice of the linearization matrix $\mathbf{A}$, the residual error vector $\mathbf{e}$ and the signal of interest $\mathbf{z}$ are uncorrelated, i.e., we have $\mathbb{E}[\mathbf{z}\mathbf{e}^T] = \mathbf{0}_{M \times M}$.
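The decorrelation property is straightforward to check numerically. The following sketch (our illustration, not part of the proof) forms the Bussgang-optimal linearization matrix for the hard-threshold case $\mathrm{sig}_\lambda \to \mathrm{sign}$ from an empirical estimate of $\mathbb{E}[\mathbf{y}\mathbf{z}^T]$ and verifies that the residual is uncorrelated with $\mathbf{z}$.

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, T = 4, 3, 200000                   # small illustrative sizes, T Monte-Carlo samples
X = rng.standard_normal((M, N))
C_beta, C_n = np.eye(N), 0.5 * np.eye(M)

beta = rng.multivariate_normal(np.zeros(N), C_beta, size=T)
n = rng.multivariate_normal(np.zeros(M), C_n, size=T)
z = beta @ X.T + n                       # one realization of z = X beta + n per row
y = np.sign(z)                           # hard-threshold (lambda -> 0) measurements

C_z = X @ C_beta @ X.T + C_n
C_yz = (y.T @ z) / T                     # empirical E[y z^T]
A = C_yz @ np.linalg.inv(C_z)            # Bussgang-optimal linearization matrix
e = y - z @ A.T                          # residual error vector
print(np.round((z.T @ e) / T, 3))        # empirical E[z e^T] is (close to) all zeros
```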

We now derive a closed-form expression for the entries of the matrix $\mathbf{C}_{\mathbf{y}\mathbf{z}}$. Since both $\boldsymbol\beta$ and $\mathbf{n}$ are independent and zero-mean Gaussian, the pair $(z_i, z_j)$ is jointly Gaussian for each index pair $(i,j)$. Moreover, we have $\mathbb{E}[\mathrm{sig}_\lambda(w)] = 0$ and $\mathbb{E}[w] = 0$ if $w$ is a zero-mean Gaussian random variable. Hence, we can use the following result, which is due to Brillinger [23, Lem. 1]:

(10)

where $[\mathbf{C}_{\mathbf{z}}]_{ij}$ denotes the $(i,j)$th entry of the covariance matrix $\mathbf{C}_{\mathbf{z}}$. Since for $\lambda > 0$ the function $\mathrm{sig}_\lambda(\cdot)$ is absolutely continuous (the special case $\lambda \to 0$ can either be derived by directly evaluating the expectation in (10) or by first using Stein's Lemma and then letting $\lambda \to 0$; both approaches yield the same result) and $z_i$ is zero-mean Gaussian, we can invoke Stein's Lemma [24], which states that

(11)

with $\mathrm{sig}_\lambda'(\cdot)$ denoting the derivative of the sigmoid function. We can evaluate the right-hand side in (11) as

(12)

where $\mathcal{N}(x;\mu,\sigma^2)$ denotes the probability density function of a Gaussian distribution with mean $\mu$ and variance $\sigma^2$ evaluated at $x$. Combining (10) with (11) and (12) yields a closed-form expression for every entry of $\mathbf{C}_{\mathbf{y}\mathbf{z}}$;

the expression in (7) represents the resulting linearization matrix $\mathbf{A} = \mathbf{C}_{\mathbf{y}\mathbf{z}}\mathbf{C}_{\mathbf{z}}^{-1}$ in compact notation.

Step 2 (L-MMSE Estimator): We have linearized the probit model as in (9) with $\mathbf{A} = \mathbf{C}_{\mathbf{y}\mathbf{z}}\mathbf{C}_{\mathbf{z}}^{-1}$. We now estimate $\boldsymbol\beta$ from this linearization using the L-MMSE estimator. Since the residual distortion vector $\mathbf{e}$ is uncorrelated with the vector $\boldsymbol\beta$, the L-MMSE estimator is given by

  $\hat{\boldsymbol\beta}^{\text{L-MMSE}} = \mathbf{C}_{\boldsymbol\beta\mathbf{y}}\mathbf{C}_{\mathbf{y}}^{-1}\mathbf{y} = \mathbf{C}_{\boldsymbol\beta}\mathbf{X}^T\mathbf{A}^T\mathbf{C}_{\mathbf{y}}^{-1}\mathbf{y}$,

where $\mathbf{C}_{\mathbf{y}}$ is the covariance matrix of the generalized probit measurements $\mathbf{y}$ in (5). The remaining piece is to calculate the individual entries of this matrix.

With a slight abuse of notation, we start by deriving the necessary expressions for a general pair of correlated, zero-mean Gaussian random variables $(z_1, z_2)$ with covariance matrix $\mathbf{C}$. More specifically, we are interested in computing the quantity $\mathbb{E}[\mathrm{sig}_\lambda(z_1)\,\mathrm{sig}_\lambda(z_2)]$.

Since

we have

(13)

Hence, we only need a closed-form expression for , which we derive using direct integration. We rewrite this expression as follows:

where the last equality follows from [25, Sec. 3.9] with . We now further simplify the above expression with the following steps:

Using the definitions and , we can rewrite the above expression as

To evaluate this expression, it is key to observe that it corresponds to the cumulative probability of a 3-dimensional normal random variable with zero mean and an identity covariance matrix on a region cut by two planes. Imagine a cuboid with edge lengths . Assume without loss of generality. The first plane has the normal vector , while the second plane has the normal vector . To find a convenient way to evaluate this integral, we need an appropriate change of coordinates. Define the first new coordinate as the intersection of the two planes, along the direction of . With proper normalization, this implies . Then, we let the second coordinate be orthogonal to the first coordinate and lie in the first plane, i.e., orthogonal to the normal vector of the first plane. This gives . The third coordinate is simply , taken as the normal vector to the first plane. The unit vector in the second plane that is orthogonal to and is given by . Since the new coordinates form a Cartesian system and are properly normalized, the determinant of the Jacobian is one, and the covariance matrix of the 3-dimensional normal random variable remains an identity matrix. We first integrate over  to obtain

where we have used to denote the space to integrate over for the variables and . Since is the area between the directions of and in the 2-dimensional plane, we use polar coordinates and to get

Consequently, we have

which allows us, in combination with (13), to express the desired covariance matrix as in (8). ∎
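For the hard-threshold special case ($\lambda \to 0$), the covariance of two sign measurements of jointly Gaussian, zero-mean, unit-variance inputs reduces to the classical arcsine law, which is easy to confirm by Monte-Carlo simulation (our illustration, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(3)
rho, T = 0.6, 1_000_000
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=T)
y = np.sign(z)

empirical = np.mean(y[:, 0] * y[:, 1])          # E[sign(z1) sign(z2)]
closed_form = (2.0 / np.pi) * np.arcsin(rho)    # arcsine law for unit-variance inputs
print(round(empirical, 3), round(closed_form, 3))
```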

For the L-MMSE estimator in Theorem 1, we can extract the MSE in closed form:

Lemma 2.

The MSE of the L-MMSE estimator in Theorem 1 is given by

Proof.

The proof follows from the MSE definition in (3) and the facts that and the two vectors and are uncorrelated for the L-MMSE estimator in (6). ∎

By letting the parameter $\lambda \to 0$ in (5), we can use Theorem 1 and Lemma 2 to obtain the following corollary for the standard probit model in (1). This result agrees with a recent result in wireless communications [26].

Corollary 3.

The L-MMSE estimate for the standard probit model in (1) is , where

and . The associated MSE is given by .
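For reference, the following sketch implements an L-MMSE estimator for the standard probit model using the Bussgang decomposition together with the arcsine law for one-bit-quantized Gaussian vectors (cf. [26]); it is one possible reading of the closed form, and the exact expressions in Corollary 3 may be arranged differently.

```python
import numpy as np

def lmmse_probit(X, y, C_beta, C_n):
    """L-MMSE estimate for y = sign(X beta + n) via the Bussgang decomposition (sketch)."""
    C_z = X @ C_beta @ X.T + C_n                      # covariance of z = X beta + n
    d = 1.0 / np.sqrt(np.diag(C_z))                   # diag(C_z)^{-1/2}
    A = np.sqrt(2.0 / np.pi) * np.diag(d)             # Bussgang linearization matrix
    C_y = (2.0 / np.pi) * np.arcsin(np.outer(d, d) * C_z)   # arcsine law for C_y
    W = C_beta @ X.T @ A.T @ np.linalg.inv(C_y)       # L-MMSE estimation matrix
    return W @ y
```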

II-B Least Squares (LS) Estimator

The L-MMSE estimator in (6) requires the computation of $\mathbf{C}_{\mathbf{y}}$ followed by a matrix inversion. For large-scale problems, one can avoid the matrix inversion by first solving the linear system $\mathbf{C}_{\mathbf{y}}\mathbf{b} = \mathbf{y}$ for $\mathbf{b}$ using conjugate gradients [13], followed by calculating $\hat{\boldsymbol\beta}^{\text{L-MMSE}} = \mathbf{C}_{\boldsymbol\beta}\mathbf{X}^T\mathbf{A}^T\mathbf{b}$. Hence, the complexity of L-MMSE estimation is comparable to that of MAP estimation. Computation of $\mathbf{C}_{\mathbf{y}}$, however, cannot be avoided entirely.

Fortunately, there exists a simpler linear estimator that avoids computation of $\mathbf{C}_{\mathbf{y}}$ altogether, which we call the least-squares (LS) estimator. Concretely, consider the linearization in (9), which reads $\mathbf{y} = \mathbf{A}\mathbf{X}\boldsymbol\beta + \mathbf{A}\mathbf{n} + \mathbf{e}$. By ignoring the residual error vector $\mathbf{e}$ and by assuming that the columns of $\mathbf{A}\mathbf{X}$ are linearly independent, we can simply invert the linearized system, which yields the LS estimate

(14)

where $(\cdot)^\dagger$ denotes the left pseudo-inverse. Again, one can use conjugate gradients to implement (14); a sketch of such an implementation is given after Lemma 4 below. In contrast to the L-MMSE estimator, the LS estimator does not require knowledge of $\mathbf{C}_{\mathbf{y}}$, which makes it more efficient yet slightly less accurate (see the experimental results section for a comparison). As for the L-MMSE estimator, we have a closed-form expression for the MSE of the LS estimator.

Lemma 4.

Assume that the left pseudo-inverse in (14) exists. Then, the MSE of the LS estimator in (14) is given by

Proof.

The proof follows from the MSE definition (3), and the facts that and . ∎
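One possible implementation of the LS estimate in (14), assuming it applies the left pseudo-inverse of the linearized system matrix $\mathbf{A}\mathbf{X}$ (our reading of (14)), solves the normal equations with conjugate gradients as suggested above; the interface is illustrative.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def ls_probit(X, y, A):
    """LS estimate from the linearized model y ~ (A X) beta, cf. (14) (sketch)."""
    B = A @ X                                  # assumed linearized system matrix
    N = B.shape[1]
    normal_op = LinearOperator((N, N), matvec=lambda v: B.T @ (B @ v))
    beta_ls, info = cg(normal_op, B.T @ y)     # solve B^T B beta = B^T y
    return beta_ls
```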

III Numerical Results

We now experimentally demonstrate the efficacy of the proposed linear estimators.

III-A Experiments with Synthetic Data

We first compare the MSE of our estimators to that of the nonlinear MAP and PM estimators using synthetic data.

Fig. 1: Mean squared error (MSE) versus signal-to-noise ratio (SNR) for various numbers of measurements $M$ and dimensions $N$ (panels (a)-(f) correspond to different combinations of $M$ and $N$). For most parameter settings, the MSE of the proposed L-MMSE estimator is comparable to that of the optimal PM estimator; MAP estimation and LS estimation do not work as well at high and low SNR, respectively.

III-A1 Experimental Setup

We consider various dimensions $N$ of the vector $\boldsymbol\beta$ and numbers of measurements $M$ (cf. Fig. 1). We first generate a single random matrix $\mathbf{X}$ of size $M \times N$ with i.i.d. standard normal entries, and normalize each row to have unit $\ell_2$-norm. Then, we generate the entries of $\boldsymbol\beta$ from a multivariate normal distribution with zero mean and covariance matrix $\mathbf{C}_{\boldsymbol\beta}$. The entries of the noise vector $\mathbf{n}$ are i.i.d. zero-mean Gaussian with variance $\sigma^2$. The vector $\mathbf{y}$ is generated using the standard probit model in (1). We sweep the SNR, defined as the ratio of the signal power to the noise power, by changing the noise variance $\sigma^2$. For the PM estimator, we use a standard Gibbs sampling procedure [10]; we use the mean of the generated samples (after a burn-in phase) as the PM estimate. For the MAP estimator, we use an accelerated gradient-descent procedure [14, 27] to solve (4) up to machine precision. We repeat all experiments over multiple independent trials and report the empirical MSE.
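For completeness, the sketch below shows one standard way to implement the Gibbs sampler of Albert and Chib [10] for the probit model; the iteration counts, the unit-noise default, and the interface are illustrative and need not match the settings used in our experiments.

```python
import numpy as np
from scipy.stats import truncnorm

def pm_probit_gibbs(X, y, C_beta, sigma=1.0, iters=2000, burn_in=500):
    """PM estimate via the Albert-Chib Gibbs sampler [10] (sketch; illustrative settings)."""
    M, N = X.shape
    # Posterior covariance of beta given the latent variables z
    P = np.linalg.inv(X.T @ X / sigma**2 + np.linalg.inv(C_beta))
    beta = np.zeros(N)
    samples = []
    for t in range(iters):
        mu = X @ beta
        # z_i | beta, y_i ~ N(mu_i, sigma^2), truncated to the half-line indicated by y_i
        lo = np.where(y > 0, -mu / sigma, -np.inf)
        hi = np.where(y > 0, np.inf, -mu / sigma)
        z = truncnorm.rvs(lo, hi, loc=mu, scale=sigma)
        # beta | z ~ N(P X^T z / sigma^2, P)
        beta = np.random.multivariate_normal(P @ (X.T @ z) / sigma**2, P)
        if t >= burn_in:
            samples.append(beta)
    return np.mean(samples, axis=0)
```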

III-A2 Results and Discussion

Fig. 1 shows the MSE of the L-MMSE, LS, MAP, and PM estimators. We do not show the LS estimator for the settings with fewer measurements than unknowns, as it does not exist when the columns of the linearized system matrix are not linearly independent. We see that at low SNR, the L-MMSE, MAP, and PM estimators achieve a similar MSE. Hence, linearizing the probit model does not entail a noticeable performance degradation if the measurements are noisy. At higher SNR, the performance of the different estimators varies. For a small number of measurements, the performance of L-MMSE estimation is superior to MAP estimation. For a large number of measurements, the MSE of L-MMSE estimation is slightly higher than that of MAP estimation for some SNR values. We note that the MSE performance of MAP estimation degrades with increasing SNR, i.e., there is an optimal SNR level for MAP estimation; this observation is in line with those reported in [28] for 1-bit matrix completion using ML-type estimators. Per design, PM estimation achieves the lowest MSE for all configurations, but is notoriously slow. We conclude that linearized probit regression entails a negligible MSE performance loss compared to PM estimation for a wide range of parameter settings.

TABLE I: Mean and standard deviation of prediction quality in terms of prediction accuracy (ACC) for the L-MMSE, LS, MAP, PM, and Logit-MAP estimators on the Admissions, Lowbwt, Polypharm, Myopia, Uis, and SAheart datasets.
TABLE II: Prediction quality in terms of the area under the receiver operating characteristic curve (AUC) for the L-MMSE, LS, MAP, PM, and Logit-MAP estimators on the Admissions, Lowbwt, Polypharm, Myopia, Uis, and SAheart datasets.

III-B Experiments with Real-World Data

We now validate the performance of the proposed linearized estimators using a variety of real-world datasets. Since the noise model in real-world datasets is generally unknown, we also consider the performance of MAP estimation using the logistic noise model (indicated by “Logit-MAP”).

III-B1 Datasets

We use a range of standard binary regression datasets in this experiment: (i) "Admissions", which consists of binary-valued graduate school admission outcomes and features of the applicants, (ii) "Lowbwt", which consists of low child birthweight indicators and features of their parents, (iii) "Polypharm", which consists of indicators of whether an adult takes more than one type of prescription and their features, (iv) "Myopia", which consists of myopia test outcomes for adults and their personal features, (v) "Uis", which consists of treatment outcomes for AIDS patients and their personal features, and (vi) "SAheart", which consists of indicators of whether a person has heart disease and their features. The first five datasets are taken from [29] and the last one is from [12].

III-B2 Experimental Setup

We evaluate the prediction quality of the L-MMSE, LS, MAP, PM, and Logit-MAP estimators using five-fold cross-validation. We randomly divide each dataset into five nonoverlapping subsets, use four folds of the data as the training set, and use the remaining fold as the test set. We use the training set to estimate $\boldsymbol\beta$, and then use this estimate to predict the binary-valued outcomes on the test set. For all experiments, we fix $\mathbf{C}_{\boldsymbol\beta}$ to a scaled identity matrix. Since its variance serves as a regularization parameter for $\boldsymbol\beta$, we select an optimal value using a grid search on a separate validation set. To assess the performance of these estimators, we deploy the two most common metrics that characterize prediction quality: prediction accuracy (ACC) and the area under the receiver operating characteristic curve (AUC) [30]. Both metrics take values in $[0,1]$, and larger values indicate better prediction performance.
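The cross-validation loop and the two metrics can be put together as in the following sketch; the `estimator` interface (a function returning a parameter estimate from training data) is our own convention for illustration.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score, roc_auc_score

def evaluate(estimator, X, y, n_splits=5):
    """Five-fold cross-validation of a binary-regression estimator (sketch).

    estimator(X_train, y_train) is assumed to return a parameter vector beta;
    the sign of X_test @ beta is used as the predicted label."""
    accs, aucs = [], []
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
        beta = estimator(X[train], y[train])
        scores = X[test] @ beta                  # real-valued scores for AUC
        preds = np.sign(scores)
        accs.append(accuracy_score(y[test], preds))
        aucs.append(roc_auc_score(y[test], scores))
    return np.mean(accs), np.mean(aucs)
```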

III-B3 Results and Discussion

Tables I and II show the mean and standard deviation of the performance of each estimator on both metrics across 20 random training/test partitions of the datasets. We observe that the performance of L-MMSE, MAP, PM, and Logit-MAP are virtually indistinguishable on most datasets. LS estimation is, with a few exceptions, slightly worse than all the other estimators.

We find it surprising that linearized probit regression performs as well as significantly more sophisticated nonlinear estimators on a broad range of real-world datasets. We also note that the proposed linearized estimators can be implemented efficiently and scale well to large datasets, which is in stark contrast to the PM estimator.

IV Conclusions

We have shown that linearizing the well-known probit regression model and applying linear estimators achieves estimation performance comparable to that of nonlinear methods, such as the MAP and PM estimators, for binary regression problems. Our linear estimators enable an exact, closed-form, and nonasymptotic MSE analysis, which is in stark contrast to existing analytical results for the MAP and PM estimators. We hence believe that the proposed linear estimators have the potential to be used in a variety of machine learning or statistics applications that deal with binary-valued observations.

References

  • [1] C. Bliss, “The method of probits,” Science, vol. 79, no. 2037, pp. 38–39, Jan. 1934.
  • [2] D. Cox, “The regression analysis of binary sequences,” J. R. Stat. Soc. B Met., vol. 20, no. 2, pp. 215–242, Mar. 1958.
  • [3] Y. Qian, M. Ye, and J. Zhou, “Hyperspectral image classification based on structured sparse logistic regression and three-dimensional wavelet texture features,” IEEE Trans. Geosci. Remote Sens., vol. 51, no. 4, pp. 2276–2291, Sep. 2013.
  • [4] G. Cawley and N. Talbot, “Gene selection in cancer classification using sparse logistic regression with Bayesian regularization,” Bioinformatics, vol. 22, no. 19, pp. 2348–2355, July 2006.
  • [5] J. Zhu and T. Hastie, “Classification of gene microarrays by penalized logistic regression,” Biostatistics, vol. 5, no. 3, pp. 427–443, July 2004.
  • [6] T. Amemiya, “Qualitative response models: A survey,” J. Econ. Lit., vol. 19, no. 4, pp. 1483–1536, Dec. 1981.
  • [7] B. Krishnapuram, L. Carin, M. Figueiredo, and A. Hartemink, “Sparse multinomial logistic regression: Fast algorithms and generalization bounds,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 6, pp. 957–968, July 2005.
  • [8] Y. Plan and R. Vershynin, “Robust 1-bit compressed sensing and sparse logistic regression: A convex programming approach,” IEEE Trans. Inf. Th., vol. 59, no. 1, pp. 482–494, Sep. 2013.
  • [9] H. V. Poor, An Introduction to Signal Detection and Estimation.   Springer Science & Business Media, 2013.
  • [10] J. H. Albert and S. Chib, “Bayesian analysis of binary and polychotomous response data,” J. Am. Stat. Assoc., vol. 88, no. 422, pp. 669–679, June 1993.
  • [11] C. Bliss, “The calculation of the dosage-mortality curve,” Ann. Appl. Biol., vol. 22, no. 1, pp. 134–167, Feb. 1935.
  • [12] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning.   Springer, 2010.
  • [13] J. Nocedal and S. Wright, Numerical Optimization.   Springer, 2006.
  • [14] T. Goldstein, C. Studer, and R. G. Baraniuk, “A field guide to forward-backward splitting with a FASTA implementation,” arXiv preprint: 1411.3406, Nov. 2014.
  • [15] D. Brillinger, A Festschrift for Erich L. Lehmann, ser. Wadsworth Statistics/Probability Series.   Chapman & Hall/CRC, 1982, ch. “A generalized linear model with “Gaussian” regressor variables”, pp. 97–114.
  • [16] C. Thrampoulidis, E. Abbasi, and B. Hassibi, “Lasso with non-linear measurements is equivalent to one with linear measurements,” in Proc. Adv. Neural Info. Proc. Syst., Dec. 2015, pp. 3420–3428.
  • [17] F. Bunea, “Honest variable selection in linear and logistic regression models via $\ell_1$ and $\ell_1+\ell_2$ penalization,” Electron. J. Stat., vol. 2, pp. 1153–1194, 2008.
  • [18] P. Ravikumar, M. Wainwright, and J. Lafferty, “High-dimensional Ising model selection using $\ell_1$-regularized logistic regression,” Ann. Stat., vol. 38, no. 3, pp. 1287–1319, June 2010.
  • [19] F. Bach, “Self-concordant analysis for logistic regression,” Electron. J. Stat., vol. 4, pp. 384–414, 2010.
  • [20] P. D. Hoff, A First Course in Bayesian Statistical Methods.   Springer, 2009.
  • [21] F. Lord, Applications of Item Response Theory to Practical Testing Problems.   Erlbaum Associates, 1980.
  • [22] J. Bussgang, “Cross-correlation function of amplitude-distorted Gaussian signals,” MIT, Res. Lab. Elec. Tech. Rep. 216, Mar. 1952.
  • [23] D. Brillinger, “The identification of a particular nonlinear time series system,” Biometrika, vol. 64, no. 3, pp. 509–515, Dec. 1977.
  • [24] C. Stein, “Estimation of the mean of a multivariate normal distribution,” Ann. Stat., vol. 9, no. 6, pp. 1135–1151, Nov. 1981.
  • [25] C. Rasmussen and C. Williams, Gaussian Processes for Machine Learning.   MIT Press, 2006.
  • [26] Y. Li, C. Tao, G. Seco-Granados, A. Mezghani, A. Swindlehurst, and L. Liu, “Channel estimation and performance analysis of one-bit massive MIMO systems,” IEEE Trans. Sig. Proc., vol. 65, no. 15, pp. 4075–4089, Aug. 2017.
  • [27] A. S. Lan, A. E. Waters, C. Studer, and R. G. Baraniuk, “Sparse factor analysis for learning and content analytics,” J. Mach. Learn. Res., vol. 15, pp. 1959–2008, June 2014.
  • [28] M. A. Davenport, Y. Plan, E. van den Berg, and M. Wootters, “1-bit matrix completion,” Inf. Infer., vol. 3, no. 3, pp. 189–223, Sep. 2014.
  • [29] D. Hosmer Jr, S. Lemeshow, and R. Sturdivant, Applied Logistic Regression.   John Wiley & Sons, 2013.
  • [30] H. Jin and C. Ling, “Using AUC and accuracy in evaluating learning algorithms,” IEEE Trans. Knowl. Data Eng., vol. 17, no. 3, pp. 299–310, Mar. 2005.