Tools for Conditional Density Estimation

Conditional Density Estimation Tools in Python and R
with Applications to Photometric Redshifts and Likelihood-Free Cosmological Inference

Niccolò Dalmasso (ndalmass@stat.cmu.edu), Department of Statistics & Data Science, Carnegie Mellon University, USA
Taylor Pospisil, Google LLC, Mountain View, USA
Ann B. Lee, Department of Statistics & Data Science, Carnegie Mellon University, USA
Rafael Izbicki, Department of Statistics, Federal University of São Carlos, Brazil
Peter E. Freeman, Department of Statistics & Data Science, Carnegie Mellon University, USA
Alex I. Malz, German Centre of Cosmological Lensing, Ruhr-Universität Bochum, Germany; Center for Cosmology and Particle Physics, New York University, USA
Abstract

It is well known in astronomy that propagating non-Gaussian prediction uncertainty in photometric redshift estimates is key to reducing bias in downstream cosmological analyses. Similarly, likelihood-free inference approaches, which are beginning to emerge as a tool for cosmological analysis, require the full uncertainty landscape of the parameters of interest given observed data. However, most machine learning (ML) based methods with open-source software target point prediction or classification, and hence fall short in quantifying uncertainty in complex regression and parameter inference settings such as the applications mentioned above. As an alternative to methods that focus on predicting the response $y$ (or parameters $\theta$) from features $x$, we provide nonparametric conditional density estimation (CDE) tools for approximating and validating the entire probability density $f(y \mid x)$ given training data for $x$ and $y$. This density approach offers a more nuanced accounting of uncertainty in situations with, e.g., nonstandard error distributions and multimodal or heteroskedastic response variables that are often present in astronomical data sets. As there is no one-size-fits-all CDE method, and the ultimate choice of model depends on the application and the training sample size, the goal of this work is to provide a comprehensive range of statistical tools and open-source software for nonparametric CDE and method assessment which can accommodate different types of settings — involving, e.g., mixed-type input from multiple sources, functional data, and image covariates — and which in addition can easily be fit to the problem at hand. Specifically, we introduce CDE software packages in Python and R based on four ML prediction methods adapted and optimized for CDE: NNKCDE, RFCDE, FlexCode, and DeepCDE. Furthermore, we present the cdetools package, which includes functions for computing a CDE loss function for model selection and tuning of parameters, together with diagnostic functions for computing posterior quantiles and coverage probabilities. We provide sample code in Python and R as well as examples of applications to photometric redshift estimation and likelihood-free cosmology via CDE.

Nonparametric statistics, Statistical software, Statistical computing, methods: data analysis, galaxies: distances and redshifts, cosmology: cosmological parameters

1 Introduction

Machine learning (ML) has seen a surge in popularity in almost all fields of astronomy that involve massive amounts of complex data (Ball & Brunner, 2010; Way et al., 2012; Ivezić et al., 2014; Ntampaka et al., 2019). Most ML methods target prediction and classification, whose primary goal is to return a point estimate of an unknown response variable $y$ (e.g., a supernova's type, or a galaxy's mass or age) given observed features or covariates $x$ (e.g., a supernova's light curve, or a galaxy's spectrum, respectively), often falling short in quantifying nontrivial uncertainty in $y$ (e.g., degeneracies in the mapping from $x$ to $y$). Neglecting to propagate these uncertainties through complex analyses may lead to imprecise or even inaccurate inferences of physical parameters.

Consider the following two examples of problems where uncertainty quantification can be impactful:

  • Photometric redshift estimation. In photometric redshift (photo-$z$) estimation, one attempts to constrain a galaxy's Doppler shift or redshift ($z$) after observing the shifted spectrum using a handful of broadband filters, and sometimes additional variables such as morphology and environment. In a prediction setting, the response could be the galaxy's redshift $z$ but could also include other galaxy properties, such as the galaxy's mass or age; the covariates $x$ would represent the collection of directly observable inputs, such as photometric magnitudes and colors, used to predict $z$ or, more generically, a multivariate response $y$. The use of photo-$z$ posterior estimates — that is, estimates of the probability density functions (PDFs) $f(z \mid x)$ for individual galaxies — is crucial for cosmological inference from photometric galaxy surveys, as the vast majority of currently cataloged galaxies are observed solely via photometry, a fraction that will only grow in the coming decade as the Large Synoptic Survey Telescope (LSST) begins gathering data (Ivezić et al., 2019). Though the benefits of using full photo-$z$ posteriors have yet to be fully exploited (Viironen et al., 2018), it is thoroughly established that downstream cosmological analysis can be much improved by properly propagating photo-$z$ estimate uncertainties via probability density functions rather than simple point predictions of redshift (Mandelbaum et al., 2008; Wittman, 2009; Sheldon et al., 2012; Carrasco Kind & Brunner, 2013; Graff et al., 2014; Schmidt et al., 2019).

  • Forward-modeled observables in cosmology. Some cosmological probes, such as the type Ia supernova (SN Ia) distance-redshift relationship, have observables that are straightforward to simulate in spite of an intractable likelihood. Likelihood-Free Inference (LFI) methods allow for parameter inference in settings where the likelihood function, which relates the observable data $x$ to the parameters of interest $\theta$, is too complex to work with directly, but one is able to simulate $x$ from a stochastic forward model at fixed parameter settings $\theta$. The most common approach to LFI or simulation-based parameter inference is Approximate Bayesian Computation (ABC), whose many variations (see Beaumont et al. 2002 and Sisson et al. 2018 for a review) repeatedly draw from the simulator and compare the output with observed data $x_{\rm obs}$ in real time to arrive at a set of plausible parameter values consistent with $x_{\rm obs}$. With computationally intensive simulations, however, a classical ABC approach may not be practical. Furthermore, there is the additional challenge in ABC of telling which ABC estimate is closest to the "true posterior," and similarly of choosing "approximately sufficient" summary statistics in the analysis, if the underlying distribution of data and parameters is unknown.

From a statistical perspective, the right tool for quantifying the uncertainty about $y$ once $x$ is observed is the conditional density $f(y \mid x)$. In a prediction context such as the photo-$z$ problem, conditional density estimation (CDE) of the probability density functions $f(z \mid x)$ for the redshift of individual galaxies given photometric data provides a more nuanced accounting of uncertainty than a point estimate or prediction interval, especially in the presence of heteroskedastic or multimodal responses. Similarly, CDE and LFI/ABC can be seamlessly combined into a training-based approach that has gained recent attention in machine learning and astronomy with the growing popularity of deep learning; examples include neural density estimation in LFI via mixture density networks and masked autoregressive flows (Papamakarios & Murray, 2016; Lueckmann et al., 2017, 2019; Papamakarios et al., 2019; Greenberg et al., 2019; Alsing et al., 2018, 2019), as well as LFI via CDE using Gaussian copulas (Li et al., 2017; Chen & Gutmann, 2019) and random forests (Marin et al., 2016). More concretely, in LFI settings with a cosmological forward model, CDE can be used to approximate the posterior probability distribution $f(\theta \mid x_{\rm obs})$ of cosmological parameters $\theta$ given observed data $x_{\rm obs}$; see Section 4.2 for an example. In this work, we follow Izbicki et al. (2014, 2019) and combine ABC and CDE to estimate the uncertainty about the unknown parameters $\theta$ — with our notation the "response" — given observed data $x_{\rm obs}$, by directly applying CDE training-based techniques (Section 2) and loss functions (Section 3.1) to simulated data $\{(\theta_i, x_i)\}$.

Data in astronomy present a challenge to estimating conditional densities, due to both the complexity of the underlying physical processes and the complicated observational noise. Precision cosmology, for example, requires combining data from different scientific probes, producing complicated likelihood functions in a high-dimensional parameter space. Recent examples of such multiple-probe studies include Krause et al. (2017), Joudaki et al. (2017), Aghanim et al. (2018), van Uitert et al. (2018), and Abbott et al. (2019). These measurements are all subject to different forms of systematic uncertainties, already difficult to quantify, complicating the problem of constructing probability models that jointly describe these disparate data and their inter-relationships. In such situations, CDE methods that target a variety of settings and non-standard data (images, correlation functions, mixed data types) become especially valuable. However, for any given data type, there is no one-size-fits-all CDE method. For example, deep neural networks often perform well in settings with large amounts of representative training data, but in applications with smaller training samples one may need a different tool. There is also additional value in models that are interpretable and easy to fit to the data at hand.

The goal of this paper is to provide statistical tools and open-source software for nonparametric CDE and method assessment appropriate for challenging data in a variety of inference scenarios. To our knowledge, there are no other publicly available implementations of non-trivial machine learning algorithms for uncertainty quantification via CDE that apply to the same range of data settings and that use the same syntax as existing ML code for prediction. More specifically, in our paper and associated GitHub repositories we provide documentation and software in Python and R for four flexible machine-learning methods for CDE, optimized with respect to the CDE loss in Section 3.1. All of these methods, except for DeepCDE, have appeared in previously published or archived papers; hence, we only briefly review the methods in this paper and instead focus on software usage, algorithmic aspects, and the settings under which each method can be applied. The four methods are:

  1. NNKCDE — a simple and easily interpretable nearest-neighbor kernel CDE method (Section 2.1); it computes a kernel density estimate of $y$ using the $k$ nearest neighbors of the evaluation point $x$. The model has only two tuning parameters (the number of neighbors $k$ and the kernel bandwidth $h$), which we choose by minimizing the CDE loss on validation data.

  2. RFCDE — a random-forest CDE method (Section 2.2) that partitions the covariate space so as to minimize the CDE loss; like NNKCDE, it computes a kernel density estimate of $y$, but with nearest-neighbor weightings defined by the location of the evaluation point $x$ relative to the leaves in the random forest. RFCDE is well suited for sparse and mixed-type features and does not require the user to specify distances or similarities between data points. Additionally, we provide fRFCDE, a random forest implementation that can accommodate functional features by partitioning the continuous domain of such features.

  3. FlexCode — a CDE method that uses a basis expansion of the univariate response $y$ and poses CDE as a series of univariate regression problems (Section 2.3). The main advantage of this method is its flexibility, as any regression method can be applied toward CDE, enabling us to tailor the choice of regression method to the intrinsic structure and type of data at hand. Here we focus on FlexCode based on XGBoost (Chen & Guestrin, 2016), due to its good performance in, e.g., photo-$z$ estimation (Schmidt et al., 2019) and its scalability to large data sets; we refer to this version of FlexCode as FlexZBoost.

  4. DeepCDE — a fully nonparametric neural-network-based CDE method (Section 2.4). DeepCDE combines the advantages of basis expansions of the univariate response $y$ with the flexibility of neural network architectures for, e.g., image covariates and time-series data, by adding a linear output layer of basis expansion coefficients that are computed by minimizing the CDE loss.

Our software makes uncertainty quantification straightforward for users of standard open-source machine learning Python packages; NNKCDE, RFCDE, and FlexCode share the sklearn (Pedregosa et al., 2011) API (with fit and predict methods), which also makes our methods compatible with sklearn wrapper functions for, e.g., cross-validation and model ensembling. DeepCDE has implementations for both TensorFlow (Abadi et al., 2015) and PyTorch (Paszke et al., 2017), two of the most widely used deep learning frameworks. Finally, FlexCode is essentially a plug-in method where the user can add a regression function of their choice.

Performing method assessment for estimated probability densities is challenging, as we never observe the true conditional probability density, merely samples (observations) drawn from it. Whereas loss functions such as the root-mean-squared error (RMSE) are typically used in regression problems, they are not appropriate for the task of uncertainty quantification of estimated probability densities. In addition to the CDE methods above, we provide cdetools for method assessment and model selection. The cdetools package provides functions for computing the CDE loss (defined by Equation 4 in Section 3.1) for parameter tuning and relative comparison of probability density estimates; cdetools also includes diagnostic tools (Section 3.2), such as Highest Probability Density and Probability Integral Transform values, for checking how well the final density estimates fit the data in terms of coverage probabilities.

As mentioned, our goal is not to compare the general performance of the CDE methods above. Indeed, each CDE approach has particular usage for different settings of response dimensionality, feature types, and computational requirements. Table 1 lists some properties of each method.

Table 1: Comparison of CDE methods in terms of training capacity and compatibility with multivariate responses and different types of covariates (functional and image), with capacities estimated for input with around 100 features on a standard i5/i7 quad-core processor with a typical amount of RAM. Note that less complex methods (such as NNKCDE) tend to be easier to use, easier to interpret, and often perform better in settings with smaller training sets, whereas more complex methods (such as DeepCDE) perform better in settings with larger (representative) training sets.

The organization of the paper is as follows: In Section 2, we introduce tools for conditional density estimation (NNKCDE, RFCDE/fRFCDE, FlexCode, DeepCDE). In Section 3, we describe tools for model selection and diagnostics. Then, in Section 4, we illustrate our CDE and method assessment tools in three different applications: photo-$z$ estimation, likelihood-free cosmological inference, and spec-$z$ estimation. Python and R code usage examples can be found in Appendix A.

2 Tools for Conditional Density Estimation

We start by reviewing the conditional density estimators. In all cases, we leverage or modify powerful ML-based regression or classification methods so as to implement CDE. Tuning parameters are chosen by minimizing the CDE empirical loss in Equation 5 via either cross-validation or the use of validation data.

2.1 NNKCDE

Nearest-Neighbors Kernel CDE (NNKCDE; Izbicki et al. 2017, Freeman et al. 2017) consists of a kernel density estimate of $y$ computed using only the $k$ points closest to the evaluation point $x$ in covariate space:

$$\widehat{f}(y \mid x) \;=\; \frac{1}{k} \sum_{i=1}^{k} K_h\!\left(y - y_{\sigma_i(x)}\right), \qquad (1)$$

where $K_h$ is a normalized kernel (e.g., a Gaussian function) with bandwidth $h$, $d$ is a distance metric, and $\sigma_i(x)$ is the index of the $i$-th nearest neighbor of $x$ according to $d$. It is a smoother version of the histogram estimator proposed by Cunha et al. (2009), because it approximates the density with a smooth continuous function rather than by binning.
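For illustration, Equation 1 can be computed directly in a few lines of NumPy. The sketch below uses a Gaussian kernel and is not the packaged NNKCDE implementation; array names such as x_train and y_train are placeholders.

import numpy as np

def nnk_cde(x_query, x_train, y_train, y_grid, k=100, h=0.05):
    """Nearest-neighbor kernel CDE (Equation 1) with a Gaussian kernel K_h."""
    # Euclidean distances from the evaluation point to all training points
    dists = np.linalg.norm(x_train - x_query, axis=1)
    # Responses of the k nearest neighbors
    neighbors = y_train[np.argsort(dists)[:k]]
    # Average of Gaussian kernels centered at the neighboring responses
    u = (y_grid[:, None] - neighbors[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (k * h * np.sqrt(2 * np.pi))

# Example with synthetic data
rng = np.random.default_rng(0)
x_train = rng.normal(size=(1000, 5))
y_train = x_train[:, 0] + 0.1 * rng.normal(size=1000)
y_grid = np.linspace(-3, 3, 200)
density = nnk_cde(x_train[0], x_train, y_train, y_grid, k=100, h=0.05)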

We provide the NNKCDE software (https://github.com/tpospisi/nnkcde) in both Python and R (Pospisil & Dalmasso, 2019a), with examples in Section A.1. NNKCDE can accommodate any number of training examples. However, selecting tuning parameters scales quadratically in $k$, the number of nearest neighbors, which in practice limits how large $k$ can be. Our implementation (Izbicki et al. 2019, Appendix D) is more computationally efficient than standard nearest-neighbor kernel smoothers. For instance, we are able to efficiently evaluate the loss function in Equation 5 on large validation samples by expressing the first integral in terms of convolutions of the kernel function; these have a fast-to-compute closed-form solution for a Gaussian kernel. We also choose the two tuning parameters, the number of neighbors $k$ and the bandwidth $h$, by cross-validation (CV) rather than by hand.

2.2 RFCDE

Introduced by Pospisil & Lee (2018), RFCDE is an extension of random forests (Breiman, 2001) to conditional density estimation and multivariate responses. RFCDE inherits the advantages of random forests in that it can handle mixed-type data; it has good performance while remaining relatively interpretable. For ease of use in the statistics and astronomy communities, we provide RFCDE (https://github.com/tpospisi/RFCDE) in both Python and R, which call a common C++ implementation of the core training functions that can easily be wrapped in other languages (Pospisil & Dalmasso, 2019b).

The main departure from the standard random forest algorithm is our criterion for covariate space partitioning decisions. In regression contexts, the splitting variable and split point are typically chosen so as to minimize the mean-squared error loss. In classification contexts, the splits are typically chosen so as to minimize a classification error. Existing random forest density estimation methods such as quantile regression forests by Meinshausen (2006) and the TPZ algorithm by Carrasco Kind & Brunner (2013) use the same tree structure as regression and classification random forests, respectively. RFCDE, however, builds trees that minimize the CDE loss (see Equation 5), allowing the forest to adapt to structure in the conditional density as opposed to adapting to structure in the conditional mean, overcoming some of the limitations of the usual regression approach for data with heteroskedasticity and multimodality; see Pospisil & Lee (2018) for some examples and comparisons.

Another unique feature of RFCDE is that it can handle multivariate responses with joint densities $f(y_1, \ldots, y_d \mid x)$ by applying a weighted kernel smoother in the response space. This added feature enables analysis of complex conditional distributions that describe relationships between multiple responses and covariates, or equivalently between multiple parameters and observables in an LFI setting. Like quantile regression forests, the RFCDE algorithm takes advantage of the fact that random forests can be viewed as a form of adaptive nearest-neighbor method, with the aggregated tree structures determining a weighting scheme. This weighting scheme can then be used to estimate the conditional density $f(y \mid x)$, as well as the conditional mean and quantiles, as in quantile regression forests (but for CDE-optimized trees). RFCDE computes the density by a weighted kernel density estimate (KDE) in $y$, using training points near the evaluation point $x$. These distances are effectively defined by how often a training point belongs to the same leaf node as $x$ in the forest (see Equation 1 in Pospisil & Lee 2018 for details).

Despite the increased complexity of our CDE trees, RFCDE still scales to large data sets. While we use kernel density estimates for predictions on new observations, we do not use KDE when evaluating splits, as the calculation of the loss would then require evaluating pairwise distances between all training points. For fast computations, we instead take advantage of orthogonal series to compute density estimates when partitioning the covariate space. The orthogonal series estimator for the CDE loss (Equation 5) reduces to a formula that is much faster to evaluate for each split than KDE. For multivariate responses, RFCDE extends the same partitioning procedure by using a tensor-product basis for the joint response. Similarly, RFCDE extends the density estimates on new observations to the multivariate case through the use of multivariate kernel density estimators (Epanechnikov, 1969). In both the univariate and multivariate cases, bandwidth selection can be handled either by plug-in estimators or by tuning with respect to a density estimation loss.
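To sketch the orthogonal-series device (our own illustration, not the packaged C++ code): for an orthonormal basis, the empirical CDE loss within a node reduces, up to constants, to minus the sum of squared empirical basis coefficients, so a candidate split can be scored from simple averages of the basis functions evaluated at the node's responses.

import numpy as np

def cosine_basis(y, n_basis):
    """Orthonormal cosine basis on [0, 1]: phi_0(y) = 1, phi_j(y) = sqrt(2) cos(pi j y)."""
    j = np.arange(n_basis)
    phi = np.sqrt(2.0) * np.cos(np.pi * j * y[:, None])
    phi[:, 0] = 1.0
    return phi                                      # shape (len(y), n_basis)

def node_cde_loss(y_node, n_basis=15):
    """Within-node CDE loss, up to constants: -sum_j beta_j^2, with beta_j the
    average of phi_j(y) over the node's responses (assumed rescaled to [0, 1])."""
    beta = cosine_basis(y_node, n_basis).mean(axis=0)
    return -np.sum(beta ** 2)

def split_loss(y_left, y_right, n_basis=15):
    """Score of a candidate split as the size-weighted sum of the children's losses;
    lower (more negative) values indicate better splits."""
    return (len(y_left) * node_cde_loss(y_left, n_basis)
            + len(y_right) * node_cde_loss(y_right, n_basis))

# Example: a split that separates two response modes beats an uninformative one
rng = np.random.default_rng(0)
y = np.concatenate([rng.beta(2, 8, 500), rng.beta(8, 2, 500)])
perm = rng.permutation(len(y))
print(split_loss(y[y < 0.5], y[y >= 0.5]))       # informative split
print(split_loss(y[perm[:500]], y[perm[500:]]))  # random split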

Remark: Estimating a CDE loss is an inherently harder task than calculating the mean squared error (MSE) in regression. As a consequence, RFCDE might not provide meaningful tree splits in settings with a large number of noisy covariates. In such settings, one may benefit from combining a regular random forest tree structure (optimized for regression) with a weighted kernel density estimator (for the density calculation). See Section 4.3 for an application to functional data along with a software implementation (available at https://github.com/Mr8ND/cdetools_applications/spec_z/).

2.2.1 fRFCDE

RFCDE also has a functional variant — fRFCDE (Pospisil & Lee, 2019) — which can efficiently handle functional inputs that vary over a continuum. The spectral energy distribution (SED) of a galaxy, which plots energy versus continuous wavelength of light, is an example of functional data. Another example of functional data is the shear correlation function of weak lensing, which measures the mean product of the shear at two points as a function of the continuous distance between those points. Treating functional covariates (like spectra, correlation functions or images) as unordered multivariate vectors on a grid suffers from a curse of dimensionality: as the resolution of the grid becomes finer, the dimensionality of the data increases but little additional information is added, due to the high correlation between nearby grid points. fRFCDE adapts to this setting by partitioning the domain of each functional covariate (or curve) into intervals, and passing the mean values of the function in each interval as input to RFCDE. Feature selection is then effectively done over regions of the domain rather than over single points. More specifically, the partitioning in fRFCDE is governed by the rate parameter $\lambda$ of a Poisson process, with each functional covariate entering as a high-dimensional vector. Starting with the first element of the vector, we group together a number of consecutive elements determined by a draw from the Poisson process, and we repeat the procedure sequentially until all elements have been assigned to a group; this effectively partitions the function domain into disjoint intervals. The function mean values of each interval are finally treated as new inputs to a standard (vectorial) RFCDE tree; a sketch of this grouping step is given below. The splitting is done independently for each tree in the forest. Other steps of fRFCDE, such as the computation of variable importance, also proceed as in (vectorial) RFCDE but with the averaged values of a region as inputs. As a result, fRFCDE has the capability of identifying the functional inputs, and the regions of the input domain, that are important for estimating the response $y$.
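The grouping step can be sketched as follows (our own illustration of the idea; the exact scheme in the package may differ in detail):

import numpy as np

def poisson_group_means(x_functional, lam=10, rng=None):
    """Collapse one functional covariate (values on a fine grid) into interval means,
    with consecutive group sizes drawn from a Poisson(lam) distribution."""
    rng = np.random.default_rng(rng)
    p = len(x_functional)
    sizes = []
    while sum(sizes) < p:                        # sequentially cover the whole domain
        sizes.append(max(1, rng.poisson(lam)))   # avoid empty groups
    edges = np.cumsum(sizes)[:-1]
    groups = np.split(x_functional, edges)
    # One averaged value per interval becomes an input to the (vectorial) RFCDE tree
    return np.array([g.mean() for g in groups if len(g) > 0])

# Example: a noisy spectrum sampled on 1000 wavelength bins, reduced to roughly 100 grouped inputs
rng = np.random.default_rng(0)
spectrum = np.sin(np.linspace(0, 20, 1000)) + 0.1 * rng.normal(size=1000)
reduced = poisson_group_means(spectrum, lam=10, rng=1)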

The fRFCDE method is as scalable as standard random forests, accommodating very large training sets. As the examples in Section 4.3 show, we can obtain substantial gains with a functional approach, both in terms of statistical performance (that is, CDE loss) and computational time. In addition, the change in code syntax is minimal: one only needs to pass the rate parameter $\lambda$ as the flambda argument during forest initialization. Examples in Python and R are provided in Appendix A.2.1.

2.3 FlexCode

Introduced by Izbicki & Lee (2017), FlexCode (R implementation: https://github.com/rizbicki/FlexCoDE; Python implementation: https://github.com/tpospisi/FlexCode) reframes CDE as a regression problem using an orthogonal series representation of the conditional density. Let $\{\phi_j(y)\}_{j \geq 1}$ be an orthonormal basis for (e.g., square-integrable) functions of $y$, such as a Fourier or wavelet basis. The key idea of FlexCode is to express the unknown density $f(y \mid x)$ as a basis expansion:

$$f(y \mid x) \;=\; \sum_{j=1}^{\infty} \beta_j(x)\, \phi_j(y). \qquad (2)$$

By the orthogonality property of the basis, the (unknown) expansion coefficients are just conditional means: $\beta_j(x) = \mathbb{E}\left[\phi_j(Y) \mid x\right]$. Thus, we can estimate these coefficients from a training set $\{(x_i, y_i)\}_{i=1}^{n}$ by regressing the transformed response variables $\phi_j(y)$ on the predictors $x$ for every basis function $j = 1, \ldots, J$, where the number of basis functions $J$ is chosen by minimizing a CDE loss function on validation data. The estimated density $\widehat{f}(y \mid x)$ may contain small spurious bumps induced by the Fourier approximation and may not integrate to one. We remove such artifacts as described in Izbicki & Lee (2016) by applying a thresholding parameter chosen via cross-validation. FlexCode turns a challenging density estimation problem into a simpler regression problem, where we can choose any regression method that fits the problem at hand.
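As an illustration of this reduction, the following minimal sketch estimates cosine-basis coefficients with an off-the-shelf scikit-learn regressor standing in for the package's tuned regression methods (variable names are placeholders, and the responses are assumed to be rescaled to [0, 1]):

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def cosine_basis(y, n_basis):
    """Orthonormal cosine basis on [0, 1]."""
    j = np.arange(n_basis)
    phi = np.sqrt(2.0) * np.cos(np.pi * j * y[:, None])
    phi[:, 0] = 1.0
    return phi

def fit_flexcode_like(x_train, y_train, n_basis=30):
    """Estimate beta_j(x) by regressing each transformed response phi_j(y) on x."""
    phi = cosine_basis(y_train, n_basis)                 # shape (n, n_basis)
    return [GradientBoostingRegressor().fit(x_train, phi[:, j]) for j in range(n_basis)]

def predict_density(models, x_test, y_grid):
    """Assemble f_hat(y | x) = sum_j beta_hat_j(x) phi_j(y) on a grid of y values."""
    beta_hat = np.column_stack([m.predict(x_test) for m in models])   # (n_test, n_basis)
    phi_grid = cosine_basis(y_grid, len(models))                      # (n_grid, n_basis)
    cde = beta_hat @ phi_grid.T                                       # (n_test, n_grid)
    return np.clip(cde, 0, None)   # crude positivity fix; the package also renormalizes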

To provide a concrete example: for high-dimensional $x$ (such as galaxy spectra) we can use methods such as sparse or spectral series ("manifold") regression (Tibshirani, 1996; Ravikumar et al., 2009; Lee & Izbicki, 2016); see Section 4.3 for an example with FlexCode-Series. For multi-probe studies with mixed data types, we can use random forests regression (Breiman, 2001). On the other hand, large-scale photometric galaxy surveys such as LSST require methods that can work with data from millions, if not billions, of galaxies. Schmidt et al. (2019) present the results of an initial study of the LSST Dark Energy Science Collaboration (LSST-DESC), whose initial data challenge ("Photo-$z$ DC1") compares the CDEs of a dozen photo-$z$ codes run on simulations of LSST galaxy photometry catalogues in the presence of complete, correct, and representative training data. FlexZBoost — a version of FlexCode based on the scalable gradient boosting regression technique by Chen & Guestrin (2016) — was entered into the data challenge because of the method's ability to scale to massive data. In the DC1 analysis, FlexZBoost was among the strongest-performing codes according to established performance metrics for photo-$z$ PDFs and was the only code to beat the experimental control under a more discriminating metric, the CDE loss.

For massive surveys such as LSST, FlexCode has another advantage compared to other CDE methods, namely its compact, lossless storage format. Juric et al. (2017) establish that LSST has allocated the equivalent of 100 floating point numbers to quantify the redshift of each galaxy. As shown in Schmidt et al. (2019), the myriad methods for deriving photo-$z$ PDFs yield radically different results, motivating a desire to store the results of more than one algorithm in the absence of an obvious best choice. For the photo-$z$ PDFs of most codes, one may need to seek a clever storage parameterization to meet LSST's constraints (Carrasco Kind & Brunner, 2014; Malz et al., 2018), but FlexCode is virtually immune to this restriction. Because FlexCode relies on a basis expansion, one only needs to store the estimated basis coefficients of each target density for a lossless compression of the estimated PDF, with no need for binning. Indeed, for DC1, FlexZBoost can reconstruct our estimates at any resolution from the first 35 coefficients in a Fourier basis expansion. In other words, FlexZBoost enables the creation and storage of high-resolution photo-$z$ catalogs for several billion galaxies at no added cost.
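To illustrate the storage claim concretely, a stored vector of basis coefficients can be re-expanded on any redshift grid after the fact. A minimal sketch follows; the coefficient values, the z_max rescaling, and the cosine basis below are made-up placeholders rather than DC1 outputs, and the raw expansion would still undergo the positivity and normalization corrections described above.

import numpy as np

def expand_coefficients(beta, z_grid, z_max=2.0):
    """Evaluate f_hat(z) = sum_j beta_j phi_j(z / z_max) / z_max from stored coefficients,
    using an orthonormal cosine basis on [0, z_max]."""
    u = z_grid / z_max                         # rescale redshift to [0, 1]
    j = np.arange(len(beta))
    phi = np.sqrt(2.0) * np.cos(np.pi * j * u[:, None])
    phi[:, 0] = 1.0
    return (phi @ beta) / z_max                # Jacobian of the rescaling

beta_stored = np.random.default_rng(0).normal(size=35)                 # placeholder coefficients
pdf_coarse = expand_coefficients(beta_stored, np.linspace(0, 2, 200))
pdf_fine = expand_coefficients(beta_stored, np.linspace(0, 2, 2000))   # same storage, finer grid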

Our public implementation of FlexCode — available in both Python (Pospisil et al., 2019) and R (Izbicki & Pospisil, 2019) — cross-validates over the regression tuning parameters (such as the number of nearest neighbors in FlexCode-kNN) and computes the FlexCode coefficients in parallel for further time savings. The computational complexity of FlexCode is the same as that of the individual regressions run in parallel; in particular, the scalability of FlexCode is determined by the underlying regression method, so a scalable method like XGBoost leads to scalable FlexCode fitting. In the Python version of the code, the user can choose between the following regression methods: XGBoost for FlexZBoost, but also nearest neighbors, lasso (Tibshirani, 1996), and random forests regression (Breiman, 2001). In the R version, the following choices are available: XGBoost, nearest neighbors, lasso, random forests, Nadaraya-Watson kernel smoothing (Nadaraya, 1964; Watson, 1964), sparse additive models (Ravikumar et al., 2009), and spectral series ("manifold") regression (Lee & Izbicki, 2016). In both implementations, the user may also supply a custom regression method; we illustrate how this can be done with a vignette in both packages. For the Python implementation, the user can add any custom regression method following the sklearn API, i.e., with fit and predict methods. Examples in both languages are presented in Appendix A.3.

2.4 DeepCDE

Recently, neural networks have reemerged as a powerful tool for prediction settings where large amounts of representative training data are available; see LeCun et al. (2015) and Goodfellow et al. (2016) for a full review. Neural networks for CDE, such as Mixture Density Networks (MDNs; Bishop 1994) and variational methods (Tang & Salakhutdinov, 2013; Sohn et al., 2015), usually assume a Gaussian or some other parametric form of the conditional density. More recently, MDNs have also been used for photometric redshift estimation (D’Isanto & Polsterer, 2018; Pasquet et al., 2019) and for direct estimation of likelihoods and posteriors in cosmological parameter inference (see Alsing et al. 2019 and references within).

Our DeepCDE (https://github.com/tpospisi/DeepCDE; Dalmasso & Pospisil, 2019) takes a different, fully nonparametric approach and is based on the orthogonal series representation of FlexCode given in Equation 2. However, rather than relying on regression methods to estimate the expansion coefficients in Equation 2, DeepCDE computes the coefficients jointly with a neural network that minimizes the CDE loss in Equation 4. Indeed, one can show that for an orthogonal basis, the problem of minimizing this CDE loss is (asymptotically) equivalent to finding the best basis coefficients in FlexCode under a mean squared error loss for the individual regressions; see Appendix C for a proof. The value of this result is that DeepCDE with a CDE loss directly connects prediction with uncertainty quantification, implying that one can leverage state-of-the-art deep architectures for the application at hand toward uncertainty quantification in the same prediction setting.

From a neural network architecture perspective, DeepCDE only adds a linear output layer of coefficients for a series expansion of the density according to

$$\widehat{f}(y \mid x) \;=\; \sum_{j=1}^{J} \widehat{\beta}_j(x)\, \phi_j(y), \qquad (3)$$

where $\{\phi_j(y)\}_{j=1}^{J}$ is an orthogonal basis for functions of $y$, and the coefficients $\widehat{\beta}_1(x), \ldots, \widehat{\beta}_J(x)$ are the outputs of the added linear layer. Like FlexCode, we normalize and remove spurious bumps from the final density estimates according to the procedure in Section 2.2 of Izbicki & Lee (2016).

The greatest benefit of DeepCDE is that it can be implemented with both convolutional and recurrent neural network architectures, extending to both image and sequential data. For most deep architectures, adding a linear layer represents a small modification and a negligible increase in the number of parameters. For instance, with the AlexNet architecture (Krizhevsky et al., 2012), one of the smallest convolutional networks, adding a final linear layer with 30 coefficients for a cosine basis contributes only a tiny fraction of the roughly 60 million already existing parameters, and hence a negligible increase in training and prediction time. Moreover, the CDE loss for DeepCDE is especially easy to evaluate; see Appendix C for details.
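To illustrate why the evaluation is cheap, here is a minimal PyTorch sketch of the basis-expansion form of the empirical CDE loss (Equation 5), under the assumption of an orthonormal cosine basis and responses rescaled to [0, 1]. This is our own illustration rather than the repository code; the integral term collapses to a sum of squared coefficients, so the loss needs only the network outputs and the basis functions evaluated at the observed responses.

import math
import torch

def cosine_basis(y, n_basis):
    """Orthonormal cosine basis on [0, 1], evaluated at responses y (shape [n])."""
    j = torch.arange(n_basis, dtype=y.dtype, device=y.device)
    phi = math.sqrt(2.0) * torch.cos(math.pi * j * y.unsqueeze(1))  # shape [n, n_basis]
    phi[:, 0] = 1.0                                                  # phi_0 is the constant function
    return phi

def cde_loss(beta, y, n_basis):
    """Empirical CDE loss (Equation 5, up to a constant) for an orthonormal basis.
    beta: network outputs (basis coefficients), shape [n, n_basis]; y: responses in [0, 1]."""
    phi_y = cosine_basis(y, n_basis)
    integral_term = (beta ** 2).sum(dim=1).mean()   # mean of \int f_hat(y | x_i)^2 dy
    fit_term = (beta * phi_y).sum(dim=1).mean()     # mean of f_hat(y_i | x_i)
    return integral_term - 2.0 * fit_term

# Inside a training loop, with `network` ending in a linear layer of n_basis outputs:
#   loss = cde_loss(network(x_batch), y_batch, n_basis=31)
#   loss.backward()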

We provide both TensorFlow and PyTorch implementations of DeepCDE (Dalmasso & Pospisil, 2019). We also include examples that show how one can easily build DeepCDE on top of AlexNet (Krizhevsky et al., 2012), in this case for the task of estimating the probability distribution of the correct orientation of a color image. Note that Fischer et al. (2015) use AlexNet for the corresponding regression task of predicting the orientation of a color image (without uncertainty quantification).

3 Tools for Method Assessment

After fitting CDEs, it is important to assess the quality of our models of uncertainty. For instance, after computing photo-$z$ PDF estimates $\widehat{f}(z \mid x)$ for a set of galaxies, one may ask whether these estimates accurately quantify the true uncertainty distributions $f(z \mid x)$. Similarly, in the LFI task, a key question is whether an estimate $\widehat{f}(\theta \mid x_{\rm obs})$ of the posterior distribution of the cosmological parameters is close enough to the true posterior $f(\theta \mid x_{\rm obs})$ given the observations $x_{\rm obs}$.

We present two complementary method-assessment tools suitable to different situations, which can be applied simultaneously, and provide public implementations in the cdetools package (https://github.com/tpospisi/cdetools) in both Python and R (Dalmasso et al., 2019). First, we describe a CDE loss function that directly provides relative comparisons between conditional density estimators, such as the methods presented in Section 2, or, equivalently, between a set of models (for the same method) with different tuning parameters. Second, we describe visual diagnostic tools, such as Probability Integral Transform (PIT) and Highest Probability Density (HPD) plots, that can provide insights into the absolute fit quality of a given estimator to observed data.

3.1 CDE loss

Here we briefly review the CDE loss from Izbicki & Lee (2016) for assessing conditional density estimators and discuss it in the context of the cosmology LFI case.

The goal of a loss function is to provide relative comparisons between different estimators, so that one can directly choose the best fitted model among a list of candidates. Given an estimate $\widehat{f}(y \mid x)$ of the true conditional density $f(y \mid x)$, we define the CDE loss as

$$L(f, \widehat{f}\,) \;=\; \iint \left( f(y \mid x) - \widehat{f}(y \mid x) \right)^2 dy \, dP(x), \qquad (4)$$

where $P(x)$ is the marginal distribution of the covariates $x$. Our CDE loss is similar to the integrated squared error (ISE) loss for density estimation, except that it is weighted by the marginal distribution of the covariates. The weighting by the marginal emphasizes that errors in the estimation of $f(y \mid x)$ for unlikely covariates $x$ are less important. The CDE loss cannot be computed directly, because it depends on the unknown true density, but it can be estimated (up to a constant) by

$$\widehat{L}(f, \widehat{f}\,) \;=\; \frac{1}{n} \sum_{i=1}^{n} \int \widehat{f}(y \mid x_i)^2 \, dy \;-\; \frac{2}{n} \sum_{i=1}^{n} \widehat{f}(y_i \mid x_i), \qquad (5)$$

where $\{(x_i, y_i)\}_{i=1}^{n}$ represents our validation data, i.e., a held-out set not used to construct $\widehat{f}$.

In our implementation, the function cde_loss returns the estimated CDE loss as well as an estimate of the standard error (SE) of the empirical loss.
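For reference, the estimator in Equation 5 can be computed in a few lines for density estimates tabulated on a grid. The sketch below is illustrative rather than the cdetools implementation; argument names are placeholders, and the observed responses are matched to the nearest grid point.

import numpy as np

def empirical_cde_loss(cde_matrix, y_grid, y_validation):
    """Empirical CDE loss of Equation 5 (up to a constant) and its standard error.
    cde_matrix: estimated densities on y_grid, one row per validation point."""
    n = len(y_validation)
    # First term: \int f_hat(y | x_i)^2 dy for each validation point (trapezoidal rule)
    integrals = np.trapz(cde_matrix ** 2, y_grid, axis=1)
    # Second term: f_hat evaluated at the observed response (nearest grid point)
    idx = np.clip(np.searchsorted(y_grid, y_validation), 0, len(y_grid) - 1)
    at_obs = cde_matrix[np.arange(n), idx]
    terms = integrals - 2.0 * at_obs
    return terms.mean(), terms.std(ddof=1) / np.sqrt(n)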

3.1.1 CDE loss for LFI

Observational cosmology presents a challenge to the loss function's marginalization over all possible data $x$: because we have only one instantiation of the universe, the LFI setting requires a modified loss function for the posterior distribution of the cosmological parameters at a single instance of data $x_{\rm obs}$. The CDE loss we would ideally aim to minimize is the integrated squared error (ISE) loss

$$\int \left( f(\theta \mid x_{\rm obs}) - \widehat{f}(\theta \mid x_{\rm obs}) \right)^2 d\theta \qquad (6)$$

for the conditional density at $x = x_{\rm obs}$, which is difficult to estimate with only the single observation $x_{\rm obs}$. Hence, Izbicki et al. (2019) approximate the above loss with the surrogate loss

$$L_{\epsilon}(f, \widehat{f}\,) \;=\; \iint \left( f(\theta \mid x) - \widehat{f}(\theta \mid x) \right)^2 d\theta \, dP_{\epsilon}(x), \qquad (7)$$

where $P_{\epsilon}(x)$ is the marginal distribution of $x$ restricted to an $\epsilon$-neighborhood of $x_{\rm obs}$. As $\epsilon \to 0$, the surrogate loss approaches the ISE for the true posterior. The choice of $\epsilon$ encodes a bias-variance trade-off: a smaller $\epsilon$ results in less bias in the estimate of the loss, while restricting the $\epsilon$-neighborhood to fewer validation data points at the same time leads to higher variance.

In an LFI setting, we typically create training and validation data for our estimators by first simulating pairs $(\theta_i, x_i)$ with a stochastic forward model, then applying an ABC algorithm that rejects all data outside an $\epsilon$-neighborhood of $x_{\rm obs}$, and finally splitting the ABC sample into training and validation sets. Because an ABC sample includes cosmological parameters that are likely to have generated the observation $x_{\rm obs}$, we effectively estimate the surrogate loss by computing the empirical CDE loss of Equation 5 on the validation data that survive the distance cut-off $\epsilon$; a sketch is given below. In Section 4.2, we compute the loss on forward-simulated cosmological data for different ABC cut-offs, with $\epsilon$ translated to acceptance rates (an acceptance rate of 100%, i.e., no ABC cut-off, is equivalent to accepting all data in the simulated ABC sample, and the smaller $\epsilon$ is, the lower the proportion of sample points that fall within the specified neighborhood).
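Concretely, this amounts to restricting the empirical loss to the simulated validation pairs whose data lie closest to $x_{\rm obs}$. A sketch, written for a one-dimensional parameter grid and reusing the empirical_cde_loss sketch from Section 3.1 (all names are placeholders):

import numpy as np

def surrogate_cde_loss(posterior_matrix, theta_grid, theta_validation,
                       x_validation, x_obs, acceptance_rate=0.2):
    """Surrogate loss of Equation 7: the empirical CDE loss restricted to the
    simulated validation pairs (theta_i, x_i) whose data x_i lie closest to x_obs."""
    dists = np.linalg.norm(x_validation - x_obs, axis=1)
    eps = np.quantile(dists, acceptance_rate)    # tolerance matching the acceptance rate
    keep = dists <= eps
    return empirical_cde_loss(posterior_matrix[keep], theta_grid, theta_validation[keep])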

3.2 PIT and HPD values

The CDE loss function is a relative measure of performance that cannot address absolute goodness of fit. To quantify overall goodness of fit, we examine how well calibrated an ensemble of conditional density estimators is on average.

Given a true probability density $f(y \mid x)$ of a variable $y$ conditioned on data $x$, an estimated probability density $\widehat{f}(y \mid x)$ cannot be well calibrated unless $\widehat{f}(y \mid x) \approx f(y \mid x)$. Built on the same logic, the probability integral transform (PIT; Polsterer et al. 2016)

$$\mathrm{PIT}(x_i, y_i) \;=\; \int_{-\infty}^{y_i} \widehat{f}(y \mid x_i) \, dy \qquad (8)$$

assesses the calibration quality of an ensemble of CDEs for scalar $y$; it represents the cumulative distribution function (CDF) of $\widehat{f}(y \mid x_i)$ evaluated at the observed value $y_i$. A statistically self-consistent population of densities has a uniform distribution of PIT values, and deviations from uniformity indicate inaccuracies in the estimated PDFs. Overly broad CDEs manifest as under-representation of the lowest and highest PIT values, whereas overly narrow CDEs manifest as over-representation of the lowest and highest values.

However, PIT values do not easily generalize to multiple responses. For instance, for a bivariate response $(y_1, y_2)$, the joint CDF evaluated at the observed response is not in general uniformly distributed (Genest & Rivest, 2001). The highest probability density (HPD; Izbicki et al. 2017, Appendix A) value

$$\mathrm{HPD}(x_i, y_i) \;=\; \int_{\left\{y:\; \widehat{f}(y \mid x_i) \,\geq\, \widehat{f}(y_i \mid x_i)\right\}} \widehat{f}(y \mid x_i) \, dy \qquad (9)$$

can be seen as a PIT equivalent for multivariate $y$, and is easy to compute in practice. The HPD value is based on the definition of the highest density region (HDR) of a random variable $y$; that is, the subset of the sample space of $y$ where all points in the region have a probability density above a certain value. The HDR of $y$ given $x_i$ can be seen as an interval (or region) estimate of $y$ when $x_i$ is observed. The set $\{y: \widehat{f}(y \mid x_i) \geq \widehat{f}(y_i \mid x_i)\}$ is the smallest HDR containing the point $y_i$, and the HPD value in Equation 9 is the probability or "confidence level" of such a region (Hyndman, 1996). Similar to PIT values, HPD values for validation data follow a uniform distribution if the CDEs are well calibrated at the population level. In our implementation, the functions pit_coverage(cde_test, y_grid, y_test) and hpd_coverage(cde_test, y_grid, y_test), respectively, calculate PIT and HPD values for the CDE estimates cde_test evaluated on the grid y_grid, with observed values y_test.
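For completeness, PIT and HPD values can be computed from gridded density estimates along the following lines; this is an illustrative sketch assuming a uniform grid, and the cdetools functions above are the supported interface.

import numpy as np

def pit_values(cde_matrix, y_grid, y_test):
    """PIT (Equation 8): CDF of each estimated density, evaluated at the observed response."""
    dy = y_grid[1] - y_grid[0]                       # assumes a uniform grid
    return np.array([np.sum(d[y_grid <= y_obs]) * dy
                     for d, y_obs in zip(cde_matrix, y_test)])

def hpd_values(cde_matrix, y_grid, y_test):
    """HPD (Equation 9): probability mass of the region where the density exceeds
    its value at the observed response."""
    dy = y_grid[1] - y_grid[0]
    idx = np.clip(np.searchsorted(y_grid, y_test), 0, len(y_grid) - 1)
    return np.array([np.sum(d[d >= d[i]]) * dy
                     for d, i in zip(cde_matrix, idx)])

# Histograms of these values should look approximately uniform for well-calibrated CDEs.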

The PIT and HPD are not without their limitations, however, as demonstrated in the control case of Schmidt et al. (2019) and in Figure 1 of Section 4.1 here. Because PIT and HPD values can be uniformly distributed even if $\widehat{f}(y \mid x)$ is a poor estimate of $f(y \mid x)$, they must be used in conjunction with loss functions for method assessment. A popular way of visualizing PIT and HPD diagnostics is through coverage plots, sometimes referred to as probability-probability (P-P) plots, which show the empirical coverage of the test data as a function of the theoretical coverage corresponding to the PIT or HPD values.

Figure 1 (c)-(d) contains example coverage plots for the PIT and HPD values of photo- estimators, where the empirical (observed) coverage is close to the theoretical (ideal) coverage.

4 Examples

We demonstrate the breadth of our CDE methods in three different astronomical use cases (code for these examples is publicly available at https://github.com/Mr8ND/cdetools_applications).

  1. Photo-$z$ estimation: univariate response with multivariate input. This is the standard prediction setting in which all four methods apply. In Section 4.1, we apply the methods to the Teddy photometric redshift data (Beck et al., 2017), and illustrate the need for loss functions to properly assess PDF estimates of redshift $z$ given photometric colors $x$.

  2. Likelihood-free cosmological inference: multivariate response. For multiple response components we want to model the often complicated dependencies between these components; this is in contrast to approaches that model each component separately, implicitly introducing an assumption of conditional independence. In Section 4.2, we use an example of LFI for simulated weak lensing shear data to show how NNKCDE and RFCDE can capture more challenging bivariate distributions with curved structures; in this toy example, $\theta = (\Omega_M, \sigma_8)$ represents cosmological parameters in the $\Lambda$CDM model, and $x$ represents (coarsely binned) weak lensing shear correlation functions.

  3. Spec-$z$ estimation: functional input. Standard prediction methods, such as random forests, do not typically fare well with functional covariates, simply treating them as unordered vectorial data and ignoring the functional structure. However, there are often substantial benefits to explicitly taking advantage of that structure, as in fRFCDE. In Section 4.3, we compare the performance of a vectorial implementation of RFCDE with fRFCDE and FlexCode-Series for a spectroscopic sample from the Sloan Digital Sky Survey (SDSS; Alam et al. 2015), where the input $x$ is a high-resolution spectrum of a galaxy and the response is the galaxy's redshift $z$.

4.1 Photo-$z$ estimation: Univariate response with multivariate input

Here we estimate photo-$z$ posterior PDFs using representative training and test data (Samples A and B, respectively) from the Teddy catalog by Beck et al. (2017), comprised of 74,309 training and 74,557 test observations (data available at https://github.com/COINtoolbox/photoz_catalogues). Each observation has five features (covariates): the magnitude of the $r$ band and the pairwise differences (colors) $u-g$, $g-r$, $r-i$, and $i-z$. For the purposes of this paper, we do not examine the problem of disparity in color-space coverage, which is a topic by itself.

We fit NNKCDE, RFCDE, DeepCDE, and FlexZBoost to the data. In addition, we fit an RFCDE model, "RFCDE-Limited," restricted to the first three of the five covariates, as a toy model that fails to extract some information from the covariates, allowing us to showcase the difference between coverage diagnostics and the CDE loss function. For DeepCDE we use a three-layer deep neural network with fully connected layers and ReLU activations, trained with Adam (Kingma & Ba, 2014). As our goal is to showcase applicability, we do not optimize the neural architecture (e.g., the number of layers, the number of neurons per layer, or the activation functions) nor the learning parameters (e.g., the number of epochs, learning rate, or momentum). To illustrate our validation methods, we also include the marginal distribution $\widehat{f}(z)$ as an "estimate" of the individual photo-$z$ distributions $f(z \mid x)$; this estimate is the same regardless of $x$.

Figure 1: (a)-(b) Histograms of PIT and HPD values for the Teddy photo-$z$ data. Both versions of RFCDE, as well as the marginal distribution, have a uniform distribution, indicating good coverage. FlexZBoost exhibits an over-representation of values near 0 and 1, which indicates overly narrow CDEs. (c)-(d) Coverage plots of HPD and PIT values. The plots in the last row are often referred to as probability-probability or P-P plots; as before, these diagnostics indicate a good fit for all methods, including the clearly misspecified "Marginal" model.
Method          CDE Loss (with SE)    Storage (single CDE)
Marginal        -3.192 (0.007)        200 floats (1.6 KB)
RFCDE-Limited   -11.038 (0.047)       200 floats (1.6 KB)
NNKCDE          -12.139 (0.043)       200 floats (1.6 KB)
FlexZBoost      -12.272 (0.044)       30 coeffs (0.24 KB)
DeepCDE         -12.821 (0.04)        31 coeffs (0.25 KB)
RFCDE           -13.108 (0.052)       200 floats (1.6 KB)
Figure 2: Method comparison via the CDE loss of Equation 5, with estimated standard error (SE), and the storage space required for each galaxy's photo-$z$ CDE. As FlexZBoost and DeepCDE are basis expansion methods, we need only store the estimated coefficients for a lossless compression of the CDEs; the other CDEs are discretized to 200 bins.

Figure 2 presents the estimated CDE loss of Equation 5 with estimated standard error (SE), as well as the storage space for each galaxy's photo-$z$ CDE. Figure 1 shows that the different models, including the clearly misspecified "Marginal" model, achieve comparable performance on goodness-of-fit diagnostics. However, Figure 2 shows the discriminatory power of the CDE loss function, which distinguishes the methods from one another. This emphasizes the need for method comparison through loss functions in addition to goodness-of-fit diagnostics. We note that the ranking in CDE loss also correlates roughly with the quality of the point estimates, as shown in Appendix B.

4.2 Likelihood-free cosmological inference: Multivariate response

To showcase the ability to target joint distributions, we apply NNKCDE and RFCDE to the problem of estimating multivariate posteriors of the cosmological parameters in a likelihood-free setting via ABC-CDE (Izbicki et al., 2019). We also use this example to illustrate the usage of the CDE surrogate loss.

ABC is a method for parameter inference in settings where the likelihood is not tractable but we have a forward model that can simulate data $x$ under fixed parameter settings $\theta$. The simplest form of ABC is the ABC rejection sampling algorithm: a set of parameter values is first drawn from the prior distribution, and the simulated data $x$ for a given draw $\theta$ is accepted with tolerance $\epsilon$ if $d(x, x_{\rm obs}) \leq \epsilon$, where the distance metric $d$ (e.g., the Euclidean distance) measures the discrepancy between the simulated data $x$ and the observed data $x_{\rm obs}$. The outcome of the ABC rejection algorithm, for small enough $\epsilon$, is a sample of parameter values approximately distributed according to the desired posterior distribution $f(\theta \mid x_{\rm obs})$.
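For concreteness, a generic ABC rejection sampler can be written as follows. This is a schematic sketch: prior_sample and simulator stand in for the prior and the stochastic forward model of the problem at hand, and the tolerance is set via the acceptance rate.

import numpy as np

def abc_rejection(x_obs, prior_sample, simulator, n_draws=10000, acceptance_rate=0.2):
    """ABC rejection sampling: draw parameters from the prior, simulate data,
    and keep the draws whose simulations fall closest to the observed data."""
    thetas = np.array([prior_sample() for _ in range(n_draws)])
    sims = np.array([simulator(theta) for theta in thetas])
    dists = np.linalg.norm(sims - x_obs, axis=1)
    eps = np.quantile(dists, acceptance_rate)    # tolerance epsilon set by the acceptance rate
    accepted = dists <= eps
    # The accepted (theta, x) pairs can then be split into training and validation
    # sets and passed to a CDE method, as in ABC-CDE below.
    return thetas[accepted], sims[accepted]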

The basic idea of ABC-CDE is to improve the ABC estimate — and hence reduce the number of required simulations — by using the ABC sample/output as input to a CDE method tuned with the surrogate CDE loss in Equation 7. Hence, in ABC-CDE, our CDE method can be seen as a post-adjustment method: it returns an estimate $\widehat{f}(\theta \mid x)$ which we evaluate at the point $x = x_{\rm obs}$ to obtain a more accurate approximation of $f(\theta \mid x_{\rm obs})$.

In this example, we consider the problem of cosmological parameter inference via cosmic shear, caused by weak gravitational lensing inducing a distortion in images of distant galaxies. The size and direction of the distortion are directly related to the size and shape of the matter distribution along the line of sight, which varies across the universe. We use shear correlation functions to constrain the dark matter density $\Omega_M$ and matter power spectrum normalization $\sigma_8$ of the $\Lambda$CDM cosmological model, which predicts the properties and evolution of the large-scale structure of the matter distribution. For further background, see Hoekstra & Jain (2008), Munshi et al. (2008) and Mandelbaum (2017). Here we use the GalSim toolkit (https://github.com/GalSim-developers/GalSim; Rowe et al., 2015) to generate simplified galaxy shears distributed according to a Gaussian random field determined by $(\Omega_M, \sigma_8)$. The binned shear correlation functions serve as our input data or summary statistics $x$. For the inference, we assume uniform priors on $\Omega_M$ and $\sigma_8$ and fix the remaining cosmological parameters.

Figure 3: Cosmological parameter inference in an LFI setting with simulated weak lensing data. The top row shows the estimated bivariate distribution of $\Omega_M$ and $\sigma_8$ for ABC rejection sampling at different acceptance rates (20%, 50%, and 100%). The middle row shows the estimated posterior densities after applying NNKCDE to the ABC sample, and the bottom row after applying RFCDE. Both NNKCDE and RFCDE tuned with a CDE loss improve on ABC across acceptance-rate levels, i.e., they provide approximate posteriors that are more concentrated around the degeneracy curve on which the data are indistinguishable (shown here as a dashed line). We even see some structure at an ABC acceptance rate of 100% (right column); that is, at an ABC threshold $\epsilon$ large enough that the entire sample is accepted by ABC and passed to our CDE code. HPD values are not shown, as they fail to distinguish between conditional and marginal distributions, as mentioned above.

The top row of Figure 3 shows the estimated bivariate posterior distribution of $(\Omega_M, \sigma_8)$ from ABC rejection sampling only, at varying acceptance rates (20%, 50%, and 100%). An acceptance rate of 100% just returns the (uniform) ABC prior distribution, whereas the ABC posteriors for smaller acceptance rates (that is, smaller values of $\epsilon$) concentrate around the parameter degeneracy curve (shown as a dashed line) on which the data are indistinguishable. The second and third rows show the estimated posteriors when we, respectively, improve the initial ABC estimate by applying NNKCDE and RFCDE tuned with our CDE surrogate loss. A result that is apparent from the figure is that NNKCDE and RFCDE fitted with a CDE loss are able to capture the degeneracy curve at a larger acceptance rate, that is, with a smaller number of simulations, than when using ABC only.

Surrogate CDE Loss (with SE)
Method \ Acceptance rate        20%        50%        100%
ABC -0.686 (0.009) -0.392 (0.004) -0.227 (0.001)
NNKCDE -1.652 (0.022) -1.199 (0.016) -0.844 (0.010)
RFCDE -4.129 (0.063) -3.698 (0.064) -2.817 (0.055)
Table 2: Performance of ABC, NNKCDE, RFCDE in LFI settings with simulated weak lensing data in terms of the surrogate CDE loss.

We also have a direct measure of performance via the CDE loss and can adapt to different types of data by leveraging different CDE codes. Table 2 shows how the methods compare in terms of the surrogate loss. While decreasing the acceptance rate benefits all methods, it is clear that CDE-based approaches have better performance in all cases.

4.3 Spec-$z$ estimation: Functional input

In this example, we compare CDE methods in the context of spectroscopic redshift determination using 2812 high-resolution spectra from the Sloan Digital Sky Survey (SDSS) Data Release 6 (preprocessed with the cuts of Richards et al. 2009), where the covariates $x$ are flux measurements on a fine grid of wavelengths. The high-resolution spectra can be seen as functional inputs on a continuum of wavelength values. Spectroscopic redshifts (that is, redshifts predicted from spectra) tend to be both accurate and precise, so the density $f(z \mid x)$ is well approximated by a delta function at the true redshift. For the purposes of illustrating the use of the CDE codes, we define a noisified redshift $y_i = z_i + \epsilon_i$, where the $\epsilon_i$ are independent and identically distributed draws from a normal distribution with mean zero and variance 0.02, and $z_i$ is the true redshift of galaxy $i$ provided by SDSS. Thus the conditional density of this example is a Gaussian distribution with mean $z_i$ and variance 0.02.

We compare fRFCDE, a "Functional" adaptation of RFCDE, with a standard "Vector" implementation of RFCDE, which treats functional data as an unordered vector. For completeness, we also compare against a standard regression random forest combined with KDE, as well as FlexCode-Spec, an extension of FlexCode with a Spectral Series regression (Richards et al., 2009; Freeman et al., 2009; Lee & Izbicki, 2016) for determining the expansion coefficients. We train on 2000 galaxies and test on the remaining 812 galaxies. The RFCDE and fRFCDE trees are both trained with the same forest settings and with bandwidths chosen by plug-in estimators; the fRFCDE rate parameter $\lambda$ of the Poisson process that defines the variable groupings is held fixed.

Method Train Time (in sec) CDE Loss (with SE)
Functional RFCDE 24.89 -3.38 (0.155)
Vector RFCDE 41.60 -2.52 (0.104)
Regression RF + KDE 50.63 -3.42 (0.120)
FlexCode-Spec 4017.93 -3.53 (0.136)
Table 3: Performance of RFCDE for functional data. We achieve both a lower CDE loss and a lower computational time for Functional RFCDE (as compared to Vector RFCDE) by leveraging the functional nature of the data.

Table 3 contains the CDE loss and training time for both the vector-based and functional-based RFCDE models on the SDSS data, as well as for FlexCode-Spec. We obtain substantial gains when incorporating functional features, both in terms of CDE loss and computational time. The computational gains come from requiring fewer searches for each split point, as the grouping reduces the effective number of candidate features. As anticipated, vector-based RFCDE underperforms with functional data because the tree splits on CDE loss struggle to pick up signal in the data, returning almost the same conditional density regardless of the input. In contrast, splitting on mean squared error is an easier task, and we include the results of a regular regression random forest combined with KDE, which are comparable to the results of fRFCDE. Whereas random forests rely on variable selection (either of individual variables, as in vectorial RFCDE, or of variables grouped together, as in fRFCDE) for dimension reduction, FlexCode-Spec is based on the Spectral Series mechanism for dimension reduction, which finds (potentially) nonlinear, sparse structure in the data distribution (Izbicki & Lee, 2016). Both types of dimension reduction are reasonable for spec-$z$ estimation: particular wavelength locations or regions in the galaxy SED could carry information about the galaxy's true redshift, and RFCDE and fRFCDE are effective in finding such locations by variable selection. Spectral Series, on the other hand, are able to recover low-dimensional manifold structure in the entire data ensemble; as Richards et al. (2009) show, the main direction of variation in SDSS spectra is directly related to the spectroscopic redshift.

5 Conclusions

This paper presents statistical tools and software for uncertainty quantification in complex regression and parameter inference tasks. Given a set of features $x$ with associated response $y$, our methods extend the usual point prediction task of classification and regression to nonparametric estimation of the entire (conditional) probability density $f(y \mid x)$. The described CDE methods are meant to handle a range of different data settings, and we have provided examples of code usage in the contexts of photo-$z$ estimation, likelihood-free inference for cosmology analysis, and spec-$z$ estimation. In addition, we have provided tools that can be used for CDE method assessment and for choosing tuning parameters of the CDE methods in a principled way.

Our software is composed of four packages for CDE (NNKCDE, RFCDE, FlexCode, and DeepCDE), each using a different machine learning algorithm, and one package for model assessment, cdetools. All packages are implemented in both Python and R, with no dependence on proprietary software, making them compatible with any operating system supporting either language (e.g., Windows, macOS, Ubuntu and other Unix-based OS). The code is provided in publicly available GitHub repositories, documented using Python and R function documentation standards, and equipped with automatic bug tracking via Travis CI (https://docs.travis-ci.com/). Our software makes uncertainty quantification straightforward for those used to standard open-source machine learning Python packages. NNKCDE, RFCDE, and FlexCode share the sklearn API (with fit and predict methods), which also makes our methods compatible with sklearn wrapper functions (e.g., for cross-validation and model ensembling). DeepCDE has implementations for both TensorFlow and PyTorch, two of the most widely used deep learning frameworks, and can easily be combined with most existing network architectures.

Acknowledgements

ND and ABL are partially supported by the National Science Foundation under Grant No. DMS-1521786. RI is supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (grant number 306943/2017-4) and Fundação de Amparo à Pesquisa do Estado de São Paulo (grant number 2017/03363-8). AIM acknowledges support from the Max Planck Society and the Alexander von Humboldt Foundation in the framework of the Max Planck-Humboldt Research Award endowed by the Federal Ministry of Education and Research. During the completion of this work, AIM was advised by David W. Hogg and supported in part by National Science Foundation grant AST-1517237.

References

Appendix A Examples of code usage

In this section we provide examples of code usage in Python and R. To save space, we have included the DeepCDE examples directly in the code release (https://github.com/tpospisi/DeepCDE/tree/master/model_examples; Dalmasso & Pospisil, 2019) rather than in this section.

A.1 NNKCDE

Python:

import numpy as np
import nnkcde

# Fit the model
model = nnkcde.NNKCDE()
model.fit(x_train, y_train)

# Tune parameters: bandwidth h, number of neighbors k
# Pick best loss on validation data
k_choices = [5, 10, 100, 1000]
bandwidth_choices = [0.01, 0.05, 0.1]
model.tune(x_validation, y_validation, k_choices, bandwidth_choices)

# Predict new densities on grid
y_grid = np.linspace(y_min, y_max, n_grid)
cde_test = model.predict(x_test, y_grid)

R:

library(NNKCDE)

# Fit the model
model <- NNKCDE::NNKCDE$new(x_train, y_train)

# Tune parameters: bandwidth and number of neighbors k
# Pick the best CDE loss on validation data
k_choices <- c(5, 10, 100, 1000)
bandwidth_choices <- c(0.01, 0.05, 0.1)
model$tune(x_validation, y_validation, k_grid = k_choices, h_grid = bandwidth_choices)

# Predict new densities on a grid
y_grid <- seq(y_min, y_max, length.out = n_grid)
cde_test <- model$predict(x_test, y_grid)


A.2 RFCDE

Python:

import numpy as np
import rfcde

# Parameters
n_trees = 1000   # Number of trees in the forest
mtry = 4         # Number of variables to potentially split at in each node
node_size = 20   # Smallest node size
n_basis = 15     # Number of basis functions
bandwidth = 0.2  # Kernel bandwidth - used for prediction only

# Fit the model
forest = rfcde.RFCDE(n_trees=n_trees, mtry=mtry, node_size=node_size, n_basis=n_basis)
forest.train(x_train, y_train)

# Predict new densities on a grid
y_grid = np.linspace(0, 1, n_grid)
cde_test = forest.predict(x_test, y_grid, bandwidth)

# Predict conditional means for the CDE-optimized forest
cond_mean_test = forest.predict_mean(x_test)

# Predict quantiles (i.e., quantile regression) for the CDE-optimized forest
alpha_quantile = 0.90
quant_test = forest.predict_quantile(x_test, alpha_quantile)

R:

library(RFCDE)

# Parameters
n_trees <- 1000   # Number of trees in the forest
mtry <- 4         # Number of variables to potentially split at in each node
node_size <- 20   # Smallest node size
n_basis <- 15     # Number of basis functions
bandwidth <- 0.2  # Kernel bandwidth - used for prediction only

# Fit the model
forest <- RFCDE::RFCDE(x_train, y_train, n_trees = n_trees, mtry = mtry,
                       node_size = node_size, n_basis = n_basis)

# Predict new densities on a grid
y_grid <- seq(0, 1, length.out = n_grid)
cde_test <- predict(forest, x_test, y_grid, response = "CDE", bandwidth = bandwidth)

# Predict conditional means for the CDE-optimized forest
cond_mean_test <- predict(forest, x_test, y_grid, response = "mean", bandwidth = bandwidth)

# Predict quantiles (i.e., quantile regression) for the CDE-optimized forest
alpha_quantile <- 0.90
quant_test <- predict(forest, x_test, y_grid, response = "quantile",
                      quantile = alpha_quantile, bandwidth = bandwidth)


A.2.1 fRFCDE

Python:

import numpy as np
import rfcde

# Parameters
n_trees = 1000     # Number of trees in the forest
mtry = 4           # Number of variables to potentially split at in each node
node_size = 20     # Smallest node size
n_basis = 15       # Number of basis functions
bandwidth = 0.2    # Kernel bandwidth - used for prediction only
lambda_param = 10  # Poisson process parameter for grouping functional covariates

# Fit the model
functional_forest = rfcde.RFCDE(n_trees=n_trees, mtry=mtry, node_size=node_size, n_basis=n_basis)
functional_forest.train(x_train, y_train, flambda=lambda_param)

# ... Same as RFCDE for prediction ...

R:

library(RFCDE)

# Parameters
n_trees <- 1000     # Number of trees in the forest
mtry <- 4           # Number of variables to potentially split at in each node
node_size <- 20     # Smallest node size
n_basis <- 15       # Number of basis functions
bandwidth <- 0.2    # Kernel bandwidth - used for prediction only
lambda_param <- 10  # Poisson process parameter for grouping functional covariates

# Fit the model
functional_forest <- RFCDE::RFCDE(x_train, y_train, n_trees = n_trees, mtry = mtry,
                                  node_size = node_size, n_basis = n_basis,
                                  flambda = lambda_param)

# ... Same as RFCDE for prediction ...


A.3 FlexCode

Python:

import numpy as np
import flexcode

# Select the regression method
from flexcode.regression_models import NN

# Parameters
basis_system = "cosine"  # Basis system
max_basis = 31           # Maximum number of basis functions. If the model is not tuned,
                         # max_basis is used as the number of basis functions.

# Regression parameters
# If a list is passed for any parameter, automatic 5-fold CV is used to
# determine the best parameter combination.
params = {"k": [5, 10, 15, 20]}  # Dictionary of method-specific regression parameters

# Parameterize the model
model = flexcode.FlexCodeModel(NN, max_basis, basis_system, regression_params=params)

# Fit the model - this will also choose the optimal number of neighbors `k`
model.fit(x_train, y_train)

# Tune the model - select the best number of basis functions
model.tune(x_validation, y_validation)

# Predict new densities on a grid
cde_test, y_grid = model.predict(x_test, n_grid=n_grid)

R:

library(FlexCoDE)

# Parameters
basis_system <- "Cosine"  # Basis system
max_basis <- 31           # Maximum number of basis functions

# Fit and tune FlexCode via KNN regression using 4 cores
fit <- FlexCoDE::fitFlexCoDE(x_train, y_train, x_validation, y_validation,
                             nIMax = max_basis,
                             regressionFunction = FlexCoDE::regressionFunction.NN,
                             regressionFunction.extra = list(nCores = 4),
                             system = basis_system)

# Predict new densities on a grid
n_points_grid <- 500  # Number of points in the grid
cde_test <- predict(fit, x_test, B = n_points_grid)

## Shortcut for FlexZBoost

# Fit and tune FlexZBoost
fit <- FlexCoDE::FlexZBoost(x_train, y_train, x_validation, y_validation,
                            nIMax = max_basis, system = basis_system)

# Predict new densities on a grid
cde_test <- predict(fit, x_test, B = n_points_grid)


A.4 cdetools

Python:

from cdetools import cde_loss, cdf_coverage, hpd_coverage

# cde_test: matrix of conditional density evaluations on a grid
#           - each row is an observation, each column is a grid point
# y_grid:   the grid at which cde_test is evaluated
# y_test:   the observed y values

# Calculate the CDE loss
cde_loss(cde_test, y_grid, y_test)

# Calculate coverage values
cdf_coverage(cde_test, y_grid, y_test)  # PIT values
hpd_coverage(cde_test, y_grid, y_test)  # HPD values

R:

library(cdetools)

# cde_test: matrix of conditional density evaluations on a grid
#           - each row is an observation, each column is a grid point
# y_grid:   the grid at which cde_test is evaluated
# y_test:   the observed y values

# Calculate the CDE loss
cde_loss(cde_test, y_grid, y_test)

# Calculate coverage values
cdf_coverage(cde_test, y_grid, y_test)  # PIT values
hpd_coverage(cde_test, y_grid, y_test)  # HPD values
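As a brief follow-up on how these diagnostics are typically used (a sketch assuming the Python variables above are in scope): for a well-calibrated conditional density estimate, both the PIT and HPD values should be approximately uniform on [0, 1], which can be checked with a histogram or a goodness-of-fit test.

import numpy as np
from scipy import stats
from cdetools import cdf_coverage, hpd_coverage  # import path assumed

pit = cdf_coverage(cde_test, y_grid, y_test)
hpd = hpd_coverage(cde_test, y_grid, y_test)

# A small p-value flags deviation from uniformity, i.e., miscalibration
print(stats.kstest(pit, "uniform"))
print(stats.kstest(hpd, "uniform"))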

Appendix B Point Estimate Photo-Z Redshifts (Univariate Response with Multivariate Data)

Plots of point estimates can sometimes be a useful qualitative diagnostic of photo-z code performance in that they may quickly identify a bad model. For example, the marginal model would always lead to the same point estimate and hence a horizontal line in the plot. Figure 4 shows point photo-z predictions versus the true redshift for the photo-z codes in Example 4.1.

Figure 4: Predicted photometric versus true spectroscopic redshift. Point estimate photo-z values are derived from the CDEs by taking the mode of each density. Predictions from the marginal model, not shown here, always equal the mode of the marginal distribution, i.e., a single fixed value.
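A minimal sketch of how such point estimates can be extracted, assuming cde_test and y_grid as in Appendix A.4 (variable names are illustrative):

import numpy as np

# Photo-z point estimate: the grid value that maximizes each estimated density
z_point = y_grid[np.argmax(cde_test, axis=1)]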

The second moment of the $\Delta z$ distribution and the outlier fraction (OLF) are also commonly used diagnostic tools for photo-z estimators. Table 4 lists the $\overline{\Delta z}$, $\sigma_{\Delta z}$, and OLF point estimate metrics from Dahlen et al. (2013). These metrics are defined as follows:

$$\overline{\Delta z} = \frac{1}{N}\sum_{i=1}^{N} \frac{\Delta z_i}{1 + z_{\mathrm{spec},i}}, \tag{B1}$$

$$\sigma_{\Delta z} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \left(\frac{\Delta z_i}{1 + z_{\mathrm{spec},i}} - \overline{\Delta z}\right)^{2}}, \tag{B2}$$

$$\mathrm{OLF} = \frac{1}{N}\sum_{i=1}^{N} \mathbb{1}\!\left(\frac{|\Delta z_i|}{1 + z_{\mathrm{spec},i}} > 0.15\right), \tag{B3}$$

where $z_{\mathrm{spec}}$ indicates the spectroscopic redshift and $\Delta z = \widehat{z} - z_{\mathrm{spec}}$ is the difference between the predicted and spectroscopic redshift. According to the point estimate metrics, RFCDE and DeepCDE perform the best on the Teddy data, followed by FlexZBoost and NNKCDE. These results are consistent with the CDE loss rankings in Figure 2.

Method          $\overline{\Delta z}$   $\sigma_{\Delta z}$   OLF (%)
Marginal        0.0049                  0.0740                1.018
RFCDE-Limited   0.0010                  0.0205                0.604
NNKCDE          0.0008                  0.0180                0.335
FlexZBoost      0.0008                  0.0169                0.362
DeepCDE         0.0007                  0.0164                0.359
RFCDE           0.0007                  0.0169                0.345

Table 4: Performance in photometric redshift point predictions for the different methods; lower values are better for all three metrics.
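A minimal sketch of how the metrics above can be computed, assuming arrays z_pred (e.g., the CDE modes) and z_spec of equal length; variable names are illustrative, and the 0.15 outlier threshold follows Dahlen et al. (2013).

import numpy as np

dz = (z_pred - z_spec) / (1 + z_spec)   # normalized residuals Delta z / (1 + z_spec)
bias = np.mean(dz)                      # Equation B1
sigma = np.std(dz)                      # Equation B2
olf = 100 * np.mean(np.abs(dz) > 0.15)  # Equation B3, in percent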

Appendix C Loss Function and Model Assumption Equivalence

In this section we connect the DeepCDE loss in Equation 4 to the FlexCode model setup in Equation 2.

Lemma 1.

Let $\{\beta_j(x)\}_j$ be the coefficients of the FlexCode basis expansion in Equation 2. Minimizing the CDE loss in Equation 4 is equivalent to minimizing the mean squared errors of the basis expansion coefficients, i.e.,

$$\operatorname*{arg\,min}_{\widehat{f}} \int\!\!\int \big(\widehat{f}(y \mid x) - f(y \mid x)\big)^{2}\, dy\, dP(x) \;=\; \operatorname*{arg\,min}_{\{\widehat{\beta}_j\}_j} \sum_{j} \mathbb{E}\Big[\big(\widehat{\beta}_j(X) - \beta_j(X)\big)^{2}\Big]. \tag{C1}$$

Proof.

We prove the statement by showing that the two minimization problems share the same objective function.

First, considering the LHS of Equation C1, we have that

$$\int\!\!\int \big(\widehat{f}(y \mid x) - f(y \mid x)\big)^{2}\, dy\, dP(x) = \int\!\!\int \Big(\sum_{j} \big(\widehat{\beta}_j(x) - \beta_j(x)\big)\,\phi_j(y)\Big)^{2} dy\, dP(x) = \int \sum_{j} \big(\widehat{\beta}_j(x) - \beta_j(x)\big)^{2}\, dP(x),$$

where the second equality follows from the fact that the set $\{\phi_j\}_j$ is part of an orthonormal basis, that is,

$$\int \phi_i(y)\,\phi_j(y)\, dy = \mathbb{1}(i = j).$$

Next, the RHS of Equation C1 reduces to

$$\sum_{j} \mathbb{E}\Big[\big(\widehat{\beta}_j(X) - \beta_j(X)\big)^{2}\Big] = \int \sum_{j} \big(\widehat{\beta}_j(x) - \beta_j(x)\big)^{2}\, dP(x),$$

which coincides with the expression obtained for the LHS; hence the two minimization problems are equivalent. ∎
In DeepCDE, we implement an especially simple expression for the loss function by inserting the orthogonal basis expansion (Equation 3) into the CDE loss (Equation 4), which, up to a constant that does not depend on the estimate, yields

$$\widehat{L}\big(\widehat{f}\big) = \frac{1}{n} \sum_{i=1}^{n} \sum_{j} \widehat{\beta}_j(x_i)^{2} \;-\; \frac{2}{n} \sum_{i=1}^{n} \sum_{j} \widehat{\beta}_j(x_i)\,\phi_j(y_i). \tag{C2}$$

The last expression represents our empirical CDE loss on validation data $\{(x_i, y_i)\}_{i=1}^{n}$ and is easy to compute.
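To make the computation in Equation C2 concrete, the following is a minimal PyTorch sketch (not the DeepCDE implementation itself) that evaluates the empirical CDE loss, assuming the response has been rescaled to [0, 1], a cosine basis is used, and beta_hat holds the estimated coefficients (e.g., the output of the network's last layer); function and variable names are illustrative.

import math
import torch

def cosine_basis(y, n_basis):
    """Evaluate the orthonormal cosine basis on [0, 1] at the responses y (1-D tensor)."""
    j = torch.arange(1, n_basis, device=y.device, dtype=y.dtype)
    phi = math.sqrt(2.0) * torch.cos(math.pi * j[None, :] * y[:, None])  # (n, n_basis - 1)
    return torch.cat([torch.ones_like(y)[:, None], phi], dim=1)          # prepend phi_0(y) = 1

def empirical_cde_loss(beta_hat, y, n_basis):
    """Empirical CDE loss of Equation C2 (up to a constant independent of the estimate)."""
    phi = cosine_basis(y, n_basis)               # (n, n_basis)
    first = (beta_hat ** 2).sum(dim=1).mean()    # (1/n) sum_i sum_j beta_hat_j(x_i)^2
    second = (beta_hat * phi).sum(dim=1).mean()  # (1/n) sum_i f_hat(y_i | x_i)
    return first - 2.0 * second

During training, the same expression can be evaluated on mini-batches and minimized with any standard optimizer.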
