Bayesian parameter estimation using conditional variational autoencoders for gravitational-wave astronomy
Gravitational wave (GW) detection is now commonplace Abbott and others (2016, 2016, 2017) and, as the sensitivity of the global network of GW detectors improves, we will observe hundreds of transient GW events per year Veitch et al. (2014). For each of these events, the current methods used to estimate their source parameters employ optimally sensitive Searle et al. (2009) but computationally costly Bayesian inference approaches Veitch et al. (2014). For binary black hole (BBH) signals, existing complete GW analyses can take of order hours to days to complete Veitch et al. (2014), and for binary neutron star (BNS) signals this increases by at least an order of magnitude Abbott and others (2017). It is for this latter class of signal (and neutron star black hole (NSBH) systems) that counterpart electromagnetic (EM) signatures are expected, containing prompt emission on timescales of 1 second – 1 minute. The current fastest method for alerting EM follow-up observers Singer and Price (2016) can provide estimates in about 1 minute, on a limited range of key source parameters. Here we show that a conditional variational autoencoder (CVAE) Tonolini et al. (2019); Pagnoni et al. (2018), pre-trained on BBH signals and without being given the pre-computed posteriors, can return Bayesian posterior probability estimates on source parameters. The training procedure need only be performed once for a given prior parameter space (or signal class) and the resulting trained machine can then generate samples describing the posterior distribution orders of magnitude faster than existing techniques.
The problem of detecting GWs has largely been solved through the use of template-based matched-filtering Usman et al. (2016), a process recently replicated using machine learning techniques George and Huerta (2018); Gabbard et al. (2018); Gebhard et al. (2017). Once a GW has been identified through this process, Bayesian inference, known to be the optimal approach Searle et al. (2009), is used to extract information about the source parameters of the detected GW signal.
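The matched-filtering step can be illustrated with a unit-free toy. The function below is a hypothetical sketch (not the production LIGO implementation): it correlates the data against a template at every time lag, weighting each frequency bin by the noise power, and normalises the output to signal-to-noise-ratio units.

```python
import numpy as np

def matched_filter_snr(data, template, psd):
    """Noise-weighted correlation of data against a template at every lag.

    `psd` holds the (two-sided) noise power per frequency bin; a scalar
    represents white noise. Illustrative toy only.
    """
    d_f = np.fft.fft(data)
    h_f = np.fft.fft(template)
    # The inverse FFT turns the frequency-domain product into the
    # correlation evaluated at every possible time shift of the template.
    z = np.fft.ifft(d_f * np.conj(h_f) / psd) * len(data)
    # <h|h>: template normalisation, so the output is in SNR units.
    sigma2 = np.sum(np.abs(h_f) ** 2 / psd).real
    return np.abs(z) / np.sqrt(sigma2)
```

When the data contain a time-shifted copy of the template, the SNR time series peaks at the corresponding lag.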
In the standard Bayesian GW inference approach, we assume signal and noise models, both of which may have unknown parameters that we are either interested in inferring or prefer to marginalise away. Each parameter is given an astrophysically motivated prior probability distribution, and in the GW case we typically assume a Gaussian additive noise model (in reality, the data are not truly Gaussian). Given a noisy GW waveform, we would like to find an optimal procedure for inferring some set of the unknown GW parameters. Such a procedure should give us an accurate estimate of the parameters of our observed signal, whilst accounting for the uncertainty arising from the noise in the data.
According to Bayes’ Theorem, a posterior probability distribution on a set of parameters, conditional on the measured data, can be represented as
(1) p(x|y) \propto p(y|x)\, p(x)
where x are the parameters, y is the observed data, p(x|y) is the posterior, p(y|x) is the likelihood, and p(x) is the prior on the parameters. The constant of proportionality, which we omit here, is p(y), the probability of our data, known as the Bayesian evidence or the marginal likelihood. We typically ignore p(y) since it is a constant and for parameter estimation purposes we are only interested in the shape of the posterior.
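As a toy illustration of Eq. 1, the sketch below (all variable names hypothetical) evaluates the unnormalised posterior of a single signal amplitude on a grid. The evidence p(y) is never computed; the posterior shape is normalised numerically at the end.

```python
import numpy as np

def unnormalised_log_posterior(A, y, s, sigma_n, prior_lo=0.0, prior_hi=2.0):
    """Log posterior up to a constant: Gaussian log-likelihood + uniform log-prior."""
    if not (prior_lo <= A <= prior_hi):   # outside the uniform prior support
        return -np.inf
    residual = y - A * s
    return -0.5 * np.sum(residual ** 2) / sigma_n ** 2

rng = np.random.default_rng(1)
s = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 128, endpoint=False))
y = 1.2 * s + rng.normal(0.0, 0.5, size=s.size)   # data: signal + Gaussian noise

grid = np.linspace(0.0, 2.0, 501)
logp = np.array([unnormalised_log_posterior(A, y, s, 0.5) for A in grid])
post = np.exp(logp - logp.max())           # subtract max for numerical stability
post /= post.sum() * (grid[1] - grid[0])   # normalise numerically; p(y) never needed
```

The resulting `post` peaks near the injected amplitude, with a width set by the noise level.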
Due to the size of the parameter space typically encountered in GW parameter estimation and the volume of data analysed, we must stochastically sample the parameter space in order to estimate the posterior. Sampling is done using a variety of techniques including Nested Sampling Skilling (2006); Veitch et al. (2017); Speagle (2019) and Markov chain Monte Carlo methods Foreman-Mackey et al. (2013); Vousden et al. (2015). The primary software tools used by the advanced Laser Interferometer Gravitational-wave Observatory (LIGO) parameter estimation analysis are LALInference and Bilby Veitch et al. (2014); Ashton et al. (2018), which offer multiple sampling methods.
Machine learning has featured prominently in many areas of GW research over the last few years. These techniques have been shown to be particularly promising in signal detection George and Huerta (2018); Gabbard et al. (2018); Gebhard et al. (2019), glitch classification George et al. (2017); Zevin et al. (2017) and earthquake prediction Coughlin et al. (2017). We also highlight a recent development in GW parameter estimation (published in parallel and independently of this work) where one- and two-dimensional marginalised Bayesian posteriors are produced rapidly using neural networks. This is done without the need to compute the likelihood or posterior during training, which is also a characteristic of the approach described in this letter.
Recently, a type of neural network known as a CVAE was shown to perform exceptionally well when applied to computational imaging inference Tonolini et al. (2019); Sohn et al. (2015), text-to-image inference Yan et al. (2015), high-resolution synthetic image generation Nguyen et al. (2016) and the fitting of incomplete heterogeneous data Nazabal et al. (2018). It is this type of machine learning network that we apply in the GW case to accurately approximate the Bayesian posterior p(x|y).
The construction of a CVAE begins with the definition of a quantity to be minimised (referred to as a cost function). We can relate that aim to that of approximating the posterior distribution by minimising the cross-entropy, defined as
(2) H = -\int dx\; p(x|y) \log r_\theta(x|y)
between the true posterior p(x|y) and r_θ(x|y), the parametric distribution that we will use neural networks to construct and which we aim to make equal to the true posterior. In this case θ represents a set of trainable neural network parameters. Starting from this point it is possible to derive a computable form for the cross-entropy that is reliant on a set of unknown functions that can be modelled by variational encoder and decoder neural networks. The details of the derivation are described in the methods section and in Tonolini et al. (2019). The final form of the cross-entropy loss function is given by the bound
(3) H \lesssim \frac{1}{N_b} \sum_{n=1}^{N_b} \Big[ {\rm KL}\big[ q_\phi(z|x_n,y_n) \,\|\, r_{\theta_1}(z|y_n) \big] - \log r_{\theta_2}(x_n|z_n,y_n) \Big]
and requires three fully-connected networks: two encoder networks (labelled r_{θ_1} and q_φ in Fig. 1) representing the functions r_{θ_1}(z|y) and q_φ(z|x,y) respectively, and one decoder network (D) representing the function r_{θ_2}(x|z,y). The function KL[·‖·] denotes the Kullback–Leibler (KL) divergence and the variable z represents locations within the latent space. This latter object is typically a lower-dimensional space within which the encoder networks attempt to represent their input data. In practice, during the training procedure the various integrations that are part of the derivation are approximated by a sum over a batch of training data samples (indexed by n above) at each stage of training. Training is performed via a series of steps detailed in the methods section.
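The two terms in the bound of Eq. 3 both have closed forms when the distributions are diagonal Gaussians. The sketch below (function names are ours, not from the paper's code) computes the per-sample reconstruction and KL pieces from the means and log-variances that the three networks would output:

```python
import numpy as np

def kl_diag_gauss(mu_q, logvar_q, mu_p, logvar_p):
    """Closed-form KL divergence between two diagonal Gaussians q and p."""
    return 0.5 * np.sum(
        np.exp(logvar_q - logvar_p)
        + (mu_p - mu_q) ** 2 / np.exp(logvar_p)
        - 1.0
        + logvar_p - logvar_q,
        axis=-1,
    )

def gauss_log_prob(x, mu, logvar):
    """Log density of a diagonal Gaussian evaluated at x."""
    return -0.5 * np.sum(
        np.log(2 * np.pi) + logvar + (x - mu) ** 2 / np.exp(logvar), axis=-1
    )

def cvae_loss(x, mu_dec, logvar_dec, mu_q, logvar_q, mu_r1, logvar_r1):
    """Per-sample bound of Eq. 3: -reconstruction log-probability + KL term."""
    return -gauss_log_prob(x, mu_dec, logvar_dec) + kl_diag_gauss(
        mu_q, logvar_q, mu_r1, logvar_r1
    )
```

Averaging `cvae_loss` over a training batch gives the quantity that is backpropagated.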
We present results on single-detector test BBH waveforms in simulated advanced detector noise [5] and compare the CVAE against variants of the existing Bayesian approaches. Posteriors produced by the Bilby inference library Ashton et al. (2018) are used as a benchmark in order to assess the efficiency and quality of our machine learning approach relative to the existing methods for posterior sampling.
For the benchmark analysis we assume that 5 parameters are unknown: the component masses m_1 and m_2, the luminosity distance d_L, the time of coalescence t_0, and the phase at coalescence φ_0. For each parameter we use a uniform prior with ranges and fixed values defined in Table 2. We use a sampling frequency of 256 Hz and a time-series duration of 1 second. The waveform model used is IMRPhenomPv2 Khan et al. (2018) with a fixed low-frequency cutoff. For each input test waveform we run the benchmark analysis using multiple sampling algorithms available within Bilby. For each run and sampler we extract the same number of samples from the posterior on the 5 physical parameters.
The CVAE training process used as input waveforms corresponding to parameters drawn from the same priors as assumed for the benchmark analysis. The waveforms are also of identical duration, sampling frequency, and waveform model as used in the benchmark analysis. When each waveform is placed within a training batch it is given a unique detector noise realisation, after which the data is whitened using the same advanced detector power spectral density (PSD) Abbott et al. (2016) from which the simulated noise is generated^{1}. The CVAE posterior results are produced by passing our whitened noisy testing set of GW waveforms as input into the testing path of the pre-trained CVAE (Fig. 1). For each input waveform we sample until we have generated posterior samples on 4 physical parameters (m_1, m_2, d_L, t_0). We choose to output a subset of the full 5-dimensional space to demonstrate that parameters (such as φ_0 in this case) can (if desired) be marginalised out within the CVAE procedure itself, rather than after training. (^{1} Although we whiten the data as input to our network, the whitening is simply to scale the input to a level more suitable for neural networks and need not be performed with the true PSD.)
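The whitening step can be sketched in a few lines. The function below is a minimal frequency-domain whitener under an assumed one-sided-PSD convention; exact scale factors vary between pipelines and, as noted in the footnote, the absolute scale is unimportant for network input.

```python
import numpy as np

def whiten(strain, psd, dt):
    """Divide a time series by the noise amplitude spectrum.

    `psd` is the one-sided PSD sampled at np.fft.rfftfreq(len(strain), dt)
    (a scalar for white noise). Scaling conventions are illustrative.
    """
    h_f = np.fft.rfft(strain)
    white_f = h_f / np.sqrt(psd)                 # flatten the noise spectrum
    return np.fft.irfft(white_f, n=len(strain)) * np.sqrt(2 * dt)
```

With this convention, whitening unit-variance white noise (whose one-sided PSD is 2·dt) returns the input unchanged, which is a useful sanity check.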
We can immediately illustrate the accuracy of our machine learning predictions by directly plotting 2- and 1-dimensional marginalised posteriors, generated using the output samples from our CVAE and Bilby approaches, superimposed on each other. We show this for one example test dataset in Fig. 2, where the strong agreement between Bilby (blue) and the CVAE (red) is clear.
A standard test used within the GW parameter estimation community is the production of so-called p-p plots, which we show for our analysis in Fig. 3. The plot is constructed by computing a p-value for each output test posterior on a particular parameter evaluated at the true simulation parameter value (the fraction of posterior samples below the simulation value). We then plot the cumulative distribution of these values Veitch et al. (2014). Curves consistent with the black dashed diagonal line indicate that the 1-dimensional Bayesian probability distributions are consistent with the frequentist interpretation: that the truth will lie within an interval containing a fraction p of the posterior probability with frequency p. It is clear to see that our new approach shows deviations from the diagonal that are entirely consistent with those observed in all benchmark samplers. In Fig. 5 (see Sec. III) we also show distributions of the KL divergence statistic between all samplers on the joint posterior distributions. It is also clear that the KL divergences between VItamin and any other sampler are consistent with the distributions between any 2 existing benchmark samplers.
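The p-p construction above can be sketched directly. The helper below (a hypothetical name, not from the paper's code) computes one p-value per event and then the empirical cumulative curve that would be plotted against the diagonal:

```python
import numpy as np

def pp_curve(posterior_samples, true_values, n_grid=101):
    """Empirical p-p curve for 1-D posteriors.

    posterior_samples: array (n_events, n_samples) of posterior draws.
    true_values: array (n_events,) of injected parameter values.
    """
    # p-value per event: fraction of posterior samples below the truth
    p = np.mean(posterior_samples < true_values[:, None], axis=1)
    cred = np.linspace(0.0, 1.0, n_grid)
    # fraction of events whose p-value falls below each credible level
    frac = np.mean(p[None, :] <= cred[:, None], axis=1)
    return cred, frac
```

For a well-calibrated analysis the p-values are uniform, so `frac` tracks `cred` up to sampling fluctuations.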
The dominant computational cost of running VItamin lies in the training time, which can take of order several hours to complete. Completion is determined by comparing posteriors produced by the machine learning model with those of Bilby iteratively during training. We additionally assess whether the cost curves (Fig. 4) have converged, such that their slope is near zero. We use a single Nvidia Tesla V100 graphics processing unit, although consumer-grade "gaming" GPU cards are equally fast for this application.
We stress that once trained, there is no need to retrain the network unless the user wishes to use different priors or assume different noise characteristics. The speed at which posterior samples are generated for all samplers used, including VItamin, is shown in Table 1. Run time for the benchmark samplers is defined as the time to complete their analyses when configured using their default parameters Ashton et al. (2018). For VItamin this time is defined as the total time to produce its full set of posterior samples. For our test case of BBH signals, VItamin produces samples from the posterior at a rate which is 3 to 4 orders of magnitude faster than our benchmark analysis using current inference techniques, representing a dramatic speed-up in performance.
sampler        run time (seconds)                 ratio (median / VItamin)
               min       max       median
Dynesty^{2}    602       1538      774^{3}        774
Emcee          2005      11927     4351           4351
Ptemcee        3354      12771     4982           4982
Cpnest         1431      5405      2287           2287
VItamin^{4}    1         1         1              1

^{2} The benchmark samplers all produced a number of samples dependent on the default sampling parameters used.
^{3} The reader may note that benchmark sampler run times are a few orders of magnitude lower than what is typical of a complete BBH analysis. This is primarily due to our use of a reduced parameter space, low sampling rate and choice of sampler hyperparameters.
^{4} For the VItamin sampler a fixed number of samples is produced as representative of a typical posterior. The run time is independent of the signal content in the data and is therefore constant for all test cases.
In this letter we have demonstrated that machine learning can reproduce, to a high degree of accuracy, Bayesian posterior probability distributions. This is accomplished using a CVAE trained on simulated GW signals and does not require the input of pre-computed posterior estimates. We have demonstrated that our neural network model, once trained, can reproduce complete and accurate posterior estimates in milliseconds, achieving the same quality of results as the trusted benchmark analyses used within the LIGO-Virgo Collaboration.
The significance of our results is most evident in the orders-of-magnitude increase in speed over existing approaches. This will help the LIGO-Virgo Collaboration alert EM follow-up partners with minimum latency, enabling tightly coupled, closed-loop control of sensing resources for maximum information gain. Improved low-latency alerts will be especially pertinent for signals from BNS mergers (e.g. GW170817 Abbott and others (2017)) and NSBH signals, where parameter estimation speed will no longer be a limiting factor^{5} in observing the prompt EM emission expected on shorter time scales than is achievable with existing LIGO-Virgo Collaboration (LVC) analysis tools such as Bayestar Singer and Price (2016). (^{5} The complete low-latency pipeline includes a number of steps: GW data acquisition is followed by the transfer of data, then the corresponding analysis and the subsequent communication of results to the EM astronomy community, after which there are physical aspects such as slewing observing instruments to the correct pointing.)
The predicted number of future detections of BNS mergers The KAGRA Collaboration et al. (2013); The LIGO Scientific Collaboration and the Virgo Collaboration (2018) will severely strain the GW community's current computational resources using existing Bayesian methods. Future iterations of our approach will provide full parameter estimation on compact binary coalescence (CBC) signals in of order 1 second on a single GPU. Our trained network is also modular, and can be shared and used easily by any user to produce results. The specific analysis described in this letter assumes a uniform prior on the signal parameters. However, this is a choice and the network can be trained with any prior the user demands, or users can cheaply resample accordingly from the output of the network trained on the uniform prior. We also note that our method will be invaluable for population studies, since populations may now be generated and analysed in a fully Bayesian manner on a vastly reduced time scale.
For BBH signals, GW data is usually sampled at a few kHz, dependent upon the mass of the binary. We have chosen to use the noticeably low sampling rate of 256 Hz and a single-detector configuration largely in order to decrease the computational time required to develop our approach. We do not anticipate any problems in extending our analysis to higher sampling frequencies other than an increase in training time and a larger burden on the GPU memory. Our lower sampling rate naturally limited the chosen BBH mass parameter space to high-mass signals. We similarly do not anticipate that extending the parameter space to lower masses will lead to problems, but do expect that a larger number of training samples may be required. Future work will incorporate a multi-detector configuration, at which point parameter estimation will be extended to sky localisation.
In reality, GW detectors are affected by non-Gaussian noise artefacts and time-dependent variation in the detector noise PSD. Existing methods incorporate a parameterised PSD estimation into their inference Littenberg and Cornish (2015). To account for these within our scheme, we would retrain our network at regular intervals using samples of real detector noise (preferably recent examples to best reflect the state of the detectors). Our work can naturally be extended to include the full range of CBC signal types, but also to any and all other parameterised GW signals and to analyses of GW data beyond that of ground-based experiments. Given the abundant benefits of this method, we hope that a variant of this approach will form the basis for future GW parameter estimation.
I. Acknowledgements
We would like to acknowledge valuable input from the LIGO-Virgo Collaboration, specifically from Will Farr and the parameter estimation and machine-learning working groups. We would additionally like to thank Szabi Marka for posing this challenge to us. We thank Nvidia for the generous donation of a Tesla V100 GPU used in addition to LVC computational resources. The authors also gratefully acknowledge the Science and Technology Facilities Council of the United Kingdom. CM and SH are supported by the Science and Technology Research Council (grant No. ST/L000946/1) and the European Cooperation in Science and Technology (COST) action CA17137. FT acknowledges support from Amazon Research and EPSRC grant EP/M01326X/1, and RMS EPSRC grants EP/M01326X/1 and EP/R018634/1.
II. Addendum
II.1 Competing Interests
The authors declare that they have no competing financial interests.
II.2 Correspondence
Correspondence and requests for materials should be addressed to Hunter Gabbard (email: h.gabbard.1@research.gla.ac.uk).
III. Methods
Conditional variational autoencoders are a form of variational autoencoder conditioned on an observation, where in our case the observation is a 1-dimensional GW time-series signal y. The autoencoders from which variational autoencoders are derived are typically used for problems involving image reconstruction and/or dimensionality reduction. They perform a regression task whereby the autoencoder attempts to predict its own input (to model the identity function) through a "bottleneck layer", a limited and therefore distilled representation of the input parameter space. An autoencoder is composed of two neural networks, an encoder and a decoder Gallinari et al. (1987). The encoder network takes as input a vector whose number of dimensions is fixed and predefined by the user. The encoder converts the input vector into a (typically) lower-dimensional space, referred to as the latent space. A representation of the data in the latent space is passed to the decoder network, which generates a reconstruction of the original input data to the encoder network. Through training, the two sub-networks learn how to efficiently represent a dataset within a lower-dimensional latent space which takes on the most important properties of the input training data. In this way, the data can be compressed with little loss of fidelity. Additionally, the decoder simultaneously learns to decode the latent space representation and reconstruct the data back to its original form (the input data).
The primary difference between a variational autoencoder Pagnoni et al. (2018) and an autoencoder concerns the method by which locations within the latent space are produced. In our variant of the variational autoencoder, the output of the encoder is interpreted as a set of parameters governing statistical distributions (in our case, the means and variances of multivariate Gaussians). In proceeding to the decoder network, samples z from the latent space are randomly drawn from these distributions and fed into the decoder, therefore adding an element of variation into the process. A particular input can then have a range of possible outputs. In both the decoder and the encoder networks we use fully-connected layers (although this is not a constraint and any trainable network architecture may be used).
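This stochastic latent-space step is commonly implemented as the reparameterised draw z = μ + σ·ε with ε standard normal. The sketch below (the stand-in encoder is hypothetical) shows how a single input now yields different latent draws on repeated passes:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(y):
    """Stand-in encoder: maps an input vector to latent means and log-variances."""
    mu = np.tanh(y[:2])        # hypothetical 2-D latent mean
    logvar = -np.abs(y[2:4])   # hypothetical log-variances (sigma^2 = exp(logvar))
    return mu, logvar

def sample_latent(mu, logvar, rng):
    """Draw a stochastic latent location: z = mu + sigma * eps."""
    return mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

y = rng.normal(size=8)
mu, logvar = encode(y)
z1 = sample_latent(mu, logvar, rng)
z2 = sample_latent(mu, logvar, rng)
# The same input y yields different latent draws, hence varied decoder outputs.
```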
III.1 Cost function derivation
We will now derive the cost function and the corresponding network structure, beginning with a statement of the aim of the analysis. We wish to obtain a function r_θ(x|y) that reproduces the posterior distribution p(x|y) (the probability of our physical parameters x given some measured data y). The cross-entropy between 2 distributions is defined in Eq. 2, where we have made the distributions explicitly conditional on y (our measurement). In this case p(x|y) is the target distribution (the true posterior) and r_θ(x|y) is the parametric distribution that we will use neural networks to construct. The variable θ represents the trainable neural network parameters.
The cross-entropy is minimised when r_θ(x|y) = p(x|y) and so by minimising

(4) H = -{\rm E}_{p(y)}\!\left[ \int dx\; p(x|y) \log r_\theta(x|y) \right]

where E_{p(y)} indicates the expectation value over the distribution of measurements y, we make the parametric distribution as similar as possible to the target for all possible measurements y.
Converting the expectation value into an integral over y weighted by p(y) and applying Bayes' theorem we obtain

(5) H = -\int dx\, dy\; p(y|x)\, p(x) \log r_\theta(x|y)

where p(x) is the prior distribution on the physical parameters x.
The CVAE network outlined in Fig. 1 makes use of a conditional latent variable model, and our parametric model is constructed from the product of 2 separate distributions marginalised over the latent space,

(6) r_\theta(x|y) = \int dz\; r_{\theta_1}(z|y)\, r_{\theta_2}(x|z,y).

We have used θ_1 and θ_2 to indicate that the 2 separate networks modelling these distributions will be trained on these parameter sets respectively. Both new conditional distributions are modelled as multivariate uncorrelated Gaussian distributions (governed by their means and variances). However, this still allows r_θ(x|y) to take a general form (although each conditional distribution individually is limited to being a unimodal Gaussian).
One could be forgiven for thinking that setting up networks that simply aim to minimise Eq. 5 over θ_1 and θ_2 would be enough to solve this problem. However, as shown in Sohn et al. (2015), this is an intractable problem and a network cannot be trained directly to do this. Instead we define a recognition function q_φ(z|x,y) that will be used to derive an evidence lower bound (ELBO). Here we use φ to represent the trainable parameters of an encoder network (q_φ).
Let us first define the KL divergence between 2 of our distributions as

(7) {\rm KL}\big[ q_\phi(z|x,y) \,\|\, r_\theta(z|x,y) \big] = \int dz\; q_\phi(z|x,y) \log\!\left( \frac{q_\phi(z|x,y)}{r_\theta(z|x,y)} \right).
It can be shown, after some manipulation, that

(8) \log r_\theta(x|y) = {\rm ELBO} + {\rm KL}\big[ q_\phi(z|x,y) \,\|\, r_\theta(z|x,y) \big]

where the ELBO is given by

(9) {\rm ELBO} = \int dz\; q_\phi(z|x,y) \log\!\left( \frac{r_{\theta_2}(x|z,y)\, r_{\theta_1}(z|y)}{q_\phi(z|x,y)} \right)
and is so-named since the KL divergence cannot be negative and has a minimum of zero. Therefore, if we were to find a function (optimised on φ) that minimised the KL divergence, then we can state that

(10) \log r_\theta(x|y) \geq {\rm ELBO}.
After some further manipulation of Eq. 9 we find that

(11) \log r_\theta(x|y) \geq {\rm E}_{q_\phi(z|x,y)}\!\big[ \log r_{\theta_2}(x|z,y) \big] - {\rm KL}\big[ q_\phi(z|x,y) \,\|\, r_{\theta_1}(z|y) \big].
We can now substitute this inequality into Eq. 5 (our cost function) to obtain

(12) H \leq -\int dx\, dy\; p(y|x)\, p(x) \Big( {\rm E}_{q_\phi(z|x,y)}\!\big[ \log r_{\theta_2}(x|z,y) \big] - {\rm KL}\big[ q_\phi(z|x,y) \,\|\, r_{\theta_1}(z|y) \big] \Big)

which can in practice be approximated as a stochastic integral over draws of x from the prior, y from the likelihood function p(y|x), and z from the recognition function, giving us Eq. 3, the actual function evaluated within the training procedure.
Parameter name         symbol   min       max          units
mass 1                 m_1      35        80           solar masses
mass 2^{6}             m_2      35        80           solar masses
luminosity distance    d_L      1         3            Gpc
time of coalescence    t_0      0.65      0.85         seconds
phase at coalescence   φ_0      0         2π           radians
right ascension        α        1.375 (fixed)          radians
declination            δ        1.2108 (fixed)         radians
inclination            ι        0 (fixed)              radians
polarisation           ψ        0 (fixed)              radians
spins                           0 (fixed)
epoch                           1126259642 (fixed)     GPS time
detector                        LIGO Hanford (fixed)

^{6} Additionally, m_2 is constrained such that m_2 \leq m_1.
III.2 The training procedure
We have now set up a cost function composed of 3 probability functions that have well-defined inputs and outputs, where the mapping of those inputs to outputs is governed by the parameter sets θ_1, θ_2, and φ. These parameters are the weights and biases of 3 neural networks acting as (variational) encoder, decoder, and encoder respectively. To train such a network one must connect the inputs and outputs appropriately to compute the cost function, and backpropagate cost function derivatives to update the network parameters. The network structure shown schematically in Fig. 1 shows how, for a batch of sets of x and corresponding y values, the cost function is computed during each iteration of training.
Training is performed via a series of steps illustrated in Fig. 1.

The encoder r_{θ_1} is given a set of training GW data samples (y) and encodes them into a set of variables μ_{θ_1} defining a distribution in the latent space. In this case μ_{θ_1} describes the first 2 central moments (mean and variance) for each dimension of an uncorrelated (diagonal covariance) multivariate Gaussian distribution.

The encoder q_φ takes a combination of both the data y and the true parameters x defining the GW signal and encodes this into parameters defining another uncorrelated multivariate Gaussian distribution in the same latent space. These parameters we denote by μ_φ, again representing the means and variances.

We then sample from the distribution described by μ_φ, giving us samples z within the latent space.

These samples z, along with their corresponding data y, then go to the decoder D, which outputs μ_{θ_2}, a set of parameters (much like μ_φ) that define the moments of an uncorrelated multivariate Gaussian distribution in the physical parameter space.

The first term of the loss function (Eq. 3) is then computed by evaluating the probability density defined by μ_{θ_2} at the true training values x. This component of the loss allows the network to learn how to predict accurate values of x, but also to learn the intrinsic variation due to the noise properties of the data y. It is important to highlight that the GW parameter predictions from the decoder D do describe a multivariate Gaussian, but as is shown in our results (see Fig. 2), this does not imply that our final output posterior estimates will also be multivariate Gaussians.

Finally, the loss component described by the KL divergence between the distributions described by μ_φ and μ_{θ_1} is computed using the closed form for 2 uncorrelated multivariate Gaussians,

(13) {\rm KL}\big[ q_\phi \,\|\, r_{\theta_1} \big] = \frac{1}{2} \sum_i \left[ \frac{\sigma_{\phi,i}^2}{\sigma_{\theta_1,i}^2} + \frac{(\mu_{\theta_1,i} - \mu_{\phi,i})^2}{\sigma_{\theta_1,i}^2} - 1 + \ln\frac{\sigma_{\theta_1,i}^2}{\sigma_{\phi,i}^2} \right].

Here we highlight that we do not desire that the network make these 2 distributions equal to each other. Rather, we want the ensemble network to minimise the total cost (of which this is one component).
As is standard practice in machine learning applications, the cost is computed over a batch of training samples and repeated for a predefined number of iterations.
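The training steps above can be sketched for a single batch using stand-in linear "networks". All dimensions and weights here are illustrative toys, not the architecture of the paper; the point is the flow of data through r_{θ1}, q_φ, and D into the two loss terms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: data length, parameter dim, latent dim, batch size
n_y, n_x, n_z, batch = 32, 4, 2, 8

# Stand-in "networks": random linear maps emitting means and log-variances.
W_r1 = rng.normal(0, 0.1, (n_y, 2 * n_z))         # encoder r_theta1(z|y)
W_q = rng.normal(0, 0.1, (n_y + n_x, 2 * n_z))    # encoder q_phi(z|x,y)
W_d = rng.normal(0, 0.1, (n_z + n_y, 2 * n_x))    # decoder r_theta2(x|z,y)

x = rng.normal(size=(batch, n_x))                 # true parameters
y = rng.normal(size=(batch, n_y))                 # noisy data

# r_theta1 encodes y -> latent-space Gaussian parameters
mu_r1, logvar_r1 = np.split(y @ W_r1, 2, axis=1)
# q_phi encodes (y, x) -> latent-space Gaussian parameters
mu_q, logvar_q = np.split(np.hstack([y, x]) @ W_q, 2, axis=1)
# sample z from q_phi via the reparameterised draw
z = mu_q + np.exp(0.5 * logvar_q) * rng.normal(size=mu_q.shape)
# decoder maps (z, y) -> Gaussian parameters in the physical space
mu_d, logvar_d = np.split(np.hstack([z, y]) @ W_d, 2, axis=1)
# reconstruction term: Gaussian log-density evaluated at the true x
recon = -0.5 * np.sum(
    np.log(2 * np.pi) + logvar_d + (x - mu_d) ** 2 / np.exp(logvar_d), axis=1
)
# closed-form KL between the two diagonal latent Gaussians (Eq. 13)
kl = 0.5 * np.sum(
    np.exp(logvar_q - logvar_r1)
    + (mu_r1 - mu_q) ** 2 / np.exp(logvar_r1)
    - 1.0
    + logvar_r1 - logvar_q,
    axis=1,
)
cost = np.mean(-recon + kl)   # batch-averaged loss to backpropagate
```

In a real implementation the linear maps are replaced by trainable networks and `cost` is minimised with a stochastic gradient optimiser.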
III.3 Network and Training parameters
For our purposes, we found that a fixed number of training iterations, batch size, and learning rate, tuned empirically, was sufficient, together with a training set large enough to adequately cover the BBH parameter space. We additionally ensure that an (effectively) infinite number of noise realisations is employed by giving every training sample a unique noise realisation each time it is used, despite only having a finite number of waveforms. Each neural network (r_{θ_1}, q_φ, D) is composed of 3 fully-connected layers with the same number of neurons in each layer and ReLU Nair and Hinton (2010) activation functions between layers. We use a fixed latent space dimension, and we consider training complete when both components of the loss function have converged to approximately constant values, or when comparisons with benchmark test posteriors indicate no significant changes in the output posterior.
III.4 The testing procedure
After training has completed and we wish to use the network for inference, we follow the procedure described in the right-hand panel of Fig. 1. Given a new data sample y (not taken from the training set), we simply input this into the encoder r_{θ_1}, from which we obtain a single value of μ_{θ_1} describing a distribution (conditional on the data y) in the latent space. We then repeat the following steps:

We randomly draw a latent space sample z from the latent space distribution defined by μ_{θ_1}.

Our sample z and the corresponding original data y are fed as input to our pre-trained decoder network (D). The decoder network returns a set of moments μ_{θ_2} which describe a multivariate Gaussian distribution in the physical parameter space.

We then draw a random realisation of x from that distribution.
A comprehensive representation of the posterior, in the form of samples drawn from the entire joint distribution, can then be obtained by simply repeating this procedure with the same input data y (see Eq. 6).
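The testing loop above can be sketched as follows. The encoder and decoder here are stand-in functions with hypothetical shapes (2-D latent space, 4 output parameters), standing in for the pre-trained networks:

```python
import numpy as np

rng = np.random.default_rng(2)

def r1_encode(y):
    """Stand-in pre-trained encoder r_theta1: y -> latent means, log-variances."""
    return np.tanh(y[:2]), -np.abs(y[2:4])

def decode(z, y):
    """Stand-in pre-trained decoder r_theta2: (z, y) -> physical-space moments."""
    mu = np.concatenate([z, np.tanh(y[:2])])   # 4 hypothetical parameters
    logvar = np.full(4, -2.0)
    return mu, logvar

def sample_posterior(y, n_samples, rng):
    mu1, logvar1 = r1_encode(y)                # encode the data once
    out = np.empty((n_samples, 4))
    for i in range(n_samples):
        # draw a latent sample z from the distribution defined by mu1
        z = mu1 + np.exp(0.5 * logvar1) * rng.normal(size=2)
        # decode (z, y) into physical-space Gaussian moments
        mu2, logvar2 = decode(z, y)
        # draw one posterior sample x from that Gaussian
        out[i] = mu2 + np.exp(0.5 * logvar2) * rng.normal(size=4)
    return out

y = rng.normal(size=8)
samples = sample_posterior(y, 1000, rng)   # samples from the joint posterior
```

Because each posterior sample costs only one latent draw and one decoder pass, generating the full sample set is extremely fast.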
III.5 Additional tests
The KL divergence between 2 distributions is a measure of their similarity, and we use this to compare the output posterior estimates between samplers for the same input test data. To do this we run each independent sampler (including the CVAE) on the same test data to produce samples from the corresponding posterior. We then compute the KL divergence between the output distributions from each sampler with itself, and between each sampler and all other samplers. For distributions that are identical the KL divergence is equal to zero, but since we are representing our posterior distributions using finite numbers of samples, identical distributions result in KL divergence values close to (but not exactly) zero. In Fig. 5 we show the distributions of these KL divergences for the 256 test GW samples, where we see that the CVAE approach, when compared to the benchmark samplers, has distributions consistent with those produced when comparing between 2 different benchmark samplers.
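A crude sample-based KL estimate, sufficient to illustrate the comparison, can be built from histograms (this is an illustrative estimator, not necessarily the one used in the paper; finite sample sizes make even identical distributions give small non-zero values):

```python
import numpy as np

def kl_from_samples(a, b, bins=30, eps=1e-12):
    """Histogram estimate of KL[p_a || p_b] from two 1-D sample sets."""
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    p, edges = np.histogram(a, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(b, bins=bins, range=(lo, hi), density=True)
    widths = np.diff(edges)
    mask = p > 0   # only bins where the first distribution has support
    return np.sum(p[mask] * np.log((p[mask] + eps) / (q[mask] + eps)) * widths[mask])
```

Two sample sets drawn from the same distribution give a value near zero, while clearly different distributions give a much larger value.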
References
 Prospects for Observing and Localizing GravitationalWave Transients with Advanced LIGO and Advanced Virgo. Living Reviews in Relativity 19, pp. 1. External Links: 1304.0670, Document Cited by: Bayesian parameter estimation using conditional variational autoencoders for gravitationalwave astronomy.
 Observation of gravitational waves from a binary black hole merger. Phys. Rev. Lett. 116, pp. 061102. External Links: Document, Link Cited by: Bayesian parameter estimation using conditional variational autoencoders for gravitationalwave astronomy.
 Binary black hole mergers in the first advanced LIGO observing run. Phys. Rev. X 6, pp. 041015. External Links: Document, Link Cited by: Bayesian parameter estimation using conditional variational autoencoders for gravitationalwave astronomy.
 GW170817: observation of gravitational waves from a binary neutron star inspiral. Phys. Rev. Lett. 119, pp. 161101. External Links: Document, Link Cited by: Bayesian parameter estimation using conditional variational autoencoders for gravitationalwave astronomy, Bayesian parameter estimation using conditional variational autoencoders for gravitationalwave astronomy.
 [5] Advanced LIGO sensitivity design curve. Note: https://dcc.ligo.org/LIGOT1800044/publicAccessed: 20190601 Cited by: Bayesian parameter estimation using conditional variational autoencoders for gravitationalwave astronomy.
 Bilby: a userfriendly bayesian inference library for gravitationalwave astronomy. Astrophysical Journal Supplement Series. Note: The Astrophysical Journal Supplement Series (2019) 241, 27 External Links: arXiv:1811.02042, Document Cited by: Bayesian parameter estimation using conditional variational autoencoders for gravitationalwave astronomy, Bayesian parameter estimation using conditional variational autoencoders for gravitationalwave astronomy, Bayesian parameter estimation using conditional variational autoencoders for gravitationalwave astronomy.
 Limiting the effects of earthquakes on gravitationalwave interferometers. Classical and Quantum GravityLiving Reviews in Relativty 34 (4), pp. 044004. External Links: Document, Link Cited by: Bayesian parameter estimation using conditional variational autoencoders for gravitationalwave astronomy.
 Emcee: the mcmc hammer. PASP 125, pp. 306–312. External Links: 1202.3665, Document Cited by: Bayesian parameter estimation using conditional variational autoencoders for gravitationalwave astronomy.
 Matching matched filtering with deep networks for gravitationalwave astronomy. Phys. Rev. Lett. 120, pp. 141103. External Links: Document, Link Cited by: Bayesian parameter estimation using conditional variational autoencoders for gravitationalwave astronomy, Bayesian parameter estimation using conditional variational autoencoders for gravitationalwave astronomy.
 Mémoires associatives distribuées: une comparaison (Distributed associative memories: a comparison). In Proceedings of COGNITIVA 87, Paris, La Villette, May 1987.
 ConvWave: searching for gravitational waves with fully convolutional neural nets. In Workshop on Deep Learning for Physical Sciences (DLPS) at the 31st Conference on Neural Information Processing Systems (NIPS).
 Convolutional neural networks: a magic bullet for gravitational-wave detection? arXiv:1904.08693.
 Deep learning for real-time gravitational wave detection and parameter estimation: results with Advanced LIGO data. Physics Letters B 778, 64–70.
 Deep transfer learning: a new deep learning glitch classification method for Advanced LIGO. arXiv:1706.07446.
 Phenomenological model for the gravitational-wave signal from precessing binary black holes with two-spin effects. arXiv:1809.10113.
 Bayesian inference for spectral estimation of gravitational wave detector noise. Phys. Rev. D 91 (8), 084034. arXiv:1410.3852.
 Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 807–814.
 Handling incomplete heterogeneous data using VAEs. arXiv:1807.03653.
 Plug and play generative networks: conditional iterative generation of images in latent space. arXiv:1612.00005.
 Conditional variational autoencoder for neural machine translation. arXiv:1812.04405.
 Bayesian detection of unmodeled bursts of gravitational waves. Classical and Quantum Gravity 26 (15), 155017. arXiv:0809.2809.
 Rapid Bayesian position reconstruction for gravitational-wave transients. Phys. Rev. D 93 (2), 024013. arXiv:1508.03634.
 Nested sampling for general Bayesian computation. Bayesian Analysis 1 (4), 833–859.
 Learning structured output representation using deep conditional generative models. In Advances in Neural Information Processing Systems 28, C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett (Eds.), pp. 3483–3491.
 Dynesty: a dynamic nested sampling package for estimating Bayesian posteriors and evidences. arXiv:1904.02180.
 Prospects for observing and localizing gravitational-wave transients with Advanced LIGO, Advanced Virgo and KAGRA. Living Reviews in Relativity 21, 3 (2018). arXiv:1304.0670.
 GWTC-1: a gravitational-wave transient catalog of compact binary mergers observed by LIGO and Virgo during the first and second observing runs. arXiv:1811.12907.
 Variational inference for computational imaging inverse problems. arXiv:1904.06264.
 The PyCBC search for gravitational waves from compact binary coalescence. Classical and Quantum Gravity 33 (21), 215004.
 CPNest.
 Robust parameter estimation for compact binaries with ground-based gravitational-wave observations using the LALInference software library. Phys. Rev. D 91, 042003 (2015). arXiv:1409.7215.
 Dynamic temperature selection for parallel-tempering in Markov chain Monte Carlo simulations. MNRAS 455, 1919–1937 (2016). arXiv:1501.05823.
 Attribute2Image: conditional image generation from visual attributes. arXiv:1512.00570.
 Gravity Spy: integrating Advanced LIGO detector characterization, machine learning, and citizen science. Classical and Quantum Gravity 34 (6), 064003.