Dreaming of atmospheres

I. P. Waldmann Department of Physics & Astronomy, University College London, Gower Street, WC1E 6BT, UK ingo@star.ucl.ac.uk
Abstract

Here we introduce the RobERt (Robotic Exoplanet Recognition) algorithm for the classification of exoplanetary emission spectra. Spectral retrievals of exoplanetary atmospheres frequently require the user to pre-select the molecular/atomic opacities to be included in the model. In the era of open-source, automated and self-sufficient retrieval algorithms, manual input should be avoided. User-dependent input could, in worst-case scenarios, lead to incomplete models and biases in the retrieval. The RobERt algorithm is based on deep belief neural networks (DBNs) trained to accurately recognise molecular signatures for a wide range of planets, atmospheric thermal profiles and compositions. Reconstructions of the learned features, also referred to as ‘dreams’ of the network, indicate good convergence and an accurate representation of molecular features in the DBN. Using these deep neural networks, we work towards retrieval algorithms that themselves understand the nature of the observed spectra, are able to learn from current and past data and make sensible qualitative pre-selections of atmospheric opacities to be used for the quantitative stage of the retrieval process.

Subject headings:
methods: data analysis — methods: statistical — techniques: spectroscopic — radiative transfer

1. Introduction

The atmospheric retrieval of exoplanetary emission/transmission spectra is a complex undertaking (e.g. Madhusudhan & Seager 2009; Lee et al. 2011b; Line et al. 2012; Benneke & Seager 2013; Griffith 2014; Waldmann et al. 2015a, b). Retrieval parameter dimensionality becomes an important factor to consider: though desirable, allowing all known atmospheric species to be fitted is usually too computationally expensive. Hence, a user-defined pre-selection of atmospheric absorbers/emitters must be made. A ‘seasoned user’ would make this pre-selection based on previous experience and a qualitative recognition of absorption/emission features present in the observed spectrum. The human brain is very good at abstracting previously seen patterns to unseen circumstances, a desirable feature to be replicated by machines.

As we move to an era of largely automated retrievals, through the provision of open-source code to the community and future ground- and space-based spectroscopic surveys, it is important to strive towards universally applicable, self-sufficient retrieval algorithms. In an ideal scenario, the retrieval suite would possess recognition and learning capabilities similar to the ‘seasoned user’ and would not require any auxiliary user input beyond the observed spectrum itself. In other words, the program would understand what it is looking at, make a qualitative pre-selection of absorbing/emitting atmospheric species, and follow up with a quantitative retrieval.

In Waldmann et al. (2015a), we began working towards this end by introducing a pattern recognition algorithm, Marple. Based on principal-component analysis (PCA) facial-recognition approaches, Marple is able to rapidly sift through large molecular databases and return a list of the most probable absorbing species in the observed spectrum. This information can then be fed to the Tau-REx atmospheric retrieval code (Waldmann et al. 2015a, b) for a more quantitative analysis. Based on intrinsically linear coordinate transformations, Marple works well for transmission spectroscopy, where the temperature-pressure profile (TP-profile) can be assumed to be isothermal and the transmission approximated by a linear system.

The emission spectroscopy case is more complicated. Here, the shape of spectral features strongly depends on the varying atmospheric thermal profile as well as varying molecular abundances. Such a non-linear system is often poorly captured by a principal component approach.

Consequently, we have developed a new neural-network based spectroscopic pattern recognition framework, RobERt (Robotic Exoplanet Recognition), capable of learning and abstracting highly non-linear systems and recognising spectral features found in emission spectroscopy.

In this paper we introduce the concept of deep-belief networks (DBNs) to the recognition of spectral features, describe the training set and algorithm used and discuss RobERt’s recognition abilities using simulated spectra.

2. RobERt

RobERt mimics human recognition of spectroscopic features by using a pre-trained deep belief neural network (Hinton 2006, 2007; Bengio et al. 2007a; Le Roux & Bengio 2010; Montavon et al. 2012; Bianchini & Scarselli 2014) at its core. DBNs are multi-layer non-linear transformations of the input data, in this case the emission spectrum, where each consecutive layer presents a progressively higher level of abstraction of the underlying features in the spectrum. These levels of abstraction are learned in an unsupervised (i.e. autonomous) fashion from a large catalogue of input spectra. Once these features are learned from the data, a second, supervised learning stage is used to assign the learned features to their correct labels (e.g. H₂O, CH₄, etc.).

Neural networks are now commonly used in complex classification tasks such as image recognition (e.g. Wang et al. 2014a; Shen et al. 2015; Liu et al. 2014; Krizhevsky et al. 2012), speech & music recognition (e.g. Hung et al. 2005; Jaitly & Hinton 2011; Zhang & Wu 2013; Pradeep & Kumaraswamy 2014), biology (e.g. Head-Gordon & Stillinger 1993; Plebe 2007; Wu & McLarty 2012; Spencer et al. 2015) and find increasing use in the classification of galaxies and cosmology (e.g. Collister & Lahav 2004; Agarwal et al. 2012; Karpenka et al. 2013; Agarwal et al. 2014; Reis et al. 2012; Huertas-Company et al. 2015; Ellison et al. 2015; du Buisson et al. 2015; Dieleman et al. 2015).

While an in-depth derivation of DBNs is beyond the scope of this paper, we briefly outline their underlying architecture and implementation. We refer the interested reader to Bengio (2009), Hinton (2012) and Fischer & Igel (2014) for detailed derivations.

2.1. Restricted Boltzmann Machines

Figure 1 shows a schematic of the deep belief network. The multi-layer DBN can be constructed from several Restricted Boltzmann Machines (RBMs; Freund & Haussler 1992; Bishop 2006; Le Roux & Bengio 2008; Lee et al. 2011a; Hinton 2012; Bengio 2009, 2012; Montavon et al. 2012; Fischer & Igel 2014) with the addition of a logistic regression layer at the top of the network. The RBM is a two-layer neural network able to learn the underlying probability distribution over its set of input values. It represents a particular kind of Markov Random Field (Davison 2008) consisting of one layer of binary or Gaussian stochastic visible units (the input data) and one layer of binary stochastic hidden units. In RBMs all hidden units are connected to all visible units but have no intra-layer dependence. Hence the hidden units given the visible units are statistically independent (and vice versa), and we can write the probability of all visible units given all hidden units, and of all hidden units given all visible units, as the product of the individual probabilities,

p(\mathbf{v}|\mathbf{h}) = \prod_i p(v_i|\mathbf{h}), \qquad p(\mathbf{h}|\mathbf{v}) = \prod_j p(h_j|\mathbf{v})    (1)
Figure 1.— Schematic outline of a Restricted Boltzmann Machine (RBM) on the left and a full Deep Belief Network (DBN) in the form of a Multi-Layer Perceptron (MLP) on the right. The blue bottom layer contains the ‘visible units’, which are set to the input spectrum during training and recognition. Red layers are ‘hidden units’, forming increasingly abstract representations of the input layer the further up the network they are. Green represents the logistic units linking data labels to the top layer of hidden units. All units are connected (black lines) to all units in the layers above and below, but no intra-level connections exist. It can be seen that the DBN can be built from three consecutive RBMs with the addition of a logistic regression layer.

where v and h are the column vectors of visible and hidden units respectively, and i and j are their corresponding indices. We now want to find a configuration of the hidden units, h, that allows us to reconstruct the input, v, with minimal error. Since p(v|h) and p(h|v) are factorial, we can write the activation functions of the individual visible and hidden binary units as

p(h_j = 1|\mathbf{v}) = \sigma\Big(b_j + \sum_i v_i w_{ij}\Big), \qquad p(v_i = 1|\mathbf{h}) = \sigma\Big(a_i + \sum_j h_j w_{ij}\Big)    (2)

where σ(x) is the sigmoid function

\sigma(x) = \frac{1}{1 + e^{-x}}    (3)
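
As a concrete illustration, equations (2) and (3) can be written down in a few lines of code. The following NumPy sketch is illustrative only (it is not RobERt's Theano implementation); the array names mirror the weight matrix W and the bias vectors a and b introduced below.

```python
import numpy as np

def sigmoid(x):
    # Equation (3): the logistic sigmoid.
    return 1.0 / (1.0 + np.exp(-x))

def p_h_given_v(v, W, b):
    # Equation (2): activation probability of each binary hidden unit,
    # given a visible vector v. W has shape (n_visible, n_hidden) and
    # b is the hidden-unit bias vector.
    return sigmoid(b + v @ W)

def p_v_given_h(h, W, a):
    # The symmetric expression for the visible units; a is the visible bias.
    return sigmoid(a + h @ W.T)
```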

Assuming both visible and hidden units are binary, RBMs assign an energy term to each configuration of v and h,

E(\mathbf{v},\mathbf{h}) = -\sum_i a_i v_i - \sum_j b_j h_j - \sum_{i,j} v_i h_j w_{ij}    (4)

where a and b are the bias vectors of the visible and hidden units respectively, and W is the matrix of connection weights, w_ij, between v and h. The probability over all visible and hidden units is now given by

p(\mathbf{v},\mathbf{h}) = \frac{1}{Z}\, e^{-E(\mathbf{v},\mathbf{h})}    (5)

where Z is the partition function

Z = \sum_{\mathbf{v},\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h})}    (6)

The probability over the visible units as given by the RBM can now be calculated by summing over all hidden units

p(\mathbf{v}) = \frac{1}{Z} \sum_{\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h})}    (7)

We now train the RBM by finding the set of parameters, θ = {W, a, b}, that maximises the log-likelihood of the data, log p(v). The derivative of the log-likelihood with respect to the individual weights and biases gives us the gradients

\frac{\partial \log p(\mathbf{v})}{\partial w_{ij}} = \langle v_i h_j \rangle_{\mathrm{data}} - \langle v_i h_j \rangle_{\mathrm{model}}    (8)
\frac{\partial \log p(\mathbf{v})}{\partial a_i} = \langle v_i \rangle_{\mathrm{data}} - \langle v_i \rangle_{\mathrm{model}}    (9)
\frac{\partial \log p(\mathbf{v})}{\partial b_j} = \langle h_j \rangle_{\mathrm{data}} - \langle h_j \rangle_{\mathrm{model}}    (10)

where ⟨·⟩_data is the expectation value of the hidden and visible unit activations given the training data, and ⟨·⟩_model is the same expectation under the reconstructed model distribution. The weight update used by the optimisation algorithm is now simply given by

\Delta w_{ij} = \epsilon \left( \langle v_i h_j \rangle_{\mathrm{data}} - \langle v_i h_j \rangle_{\mathrm{model}} \right)    (11)

where ε is a learning rate parameter.

Training can be performed using simple gradient descent. However, an exact calculation of ⟨v_i h_j⟩_model is computationally very expensive. The likelihood gradient can instead be approximated by sampling the likelihood using Gibbs sampling (Press et al. 2007). Here samples are iteratively drawn from p(h|v) and p(v|h) until the Markov Chain Monte Carlo (MCMC) sampling converges. Contrastive Divergence (CD, Hinton 2002) further simplifies the Gibbs sampling process by dropping the requirement for exact convergence and restricting the MCMC chain to a few (as few as one) iterations. This leads to significant gains in convergence speed. For an in-depth explanation of CD, we refer the reader to Hinton (2002), Bengio et al. (2007b) and Bengio (2009).
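
To make this concrete, a single CD-1 update for a binary RBM could be sketched as follows. This is an illustrative NumPy sketch, not RobERt's actual Theano code; the function and variable names are assumptions and the default value of `eps` (the learning rate ε of equation 11) is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, a, b, eps=0.01):
    """One contrastive-divergence (CD-1) step on a mini-batch v0 of shape
    (n_samples, n_visible). W, a, b are updated in place and returned."""
    # Positive phase: hidden activations driven by the data.
    ph0 = sigmoid(b + v0 @ W)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: a single Gibbs step gives the 'model' reconstruction.
    pv1 = sigmoid(a + h0 @ W.T)
    ph1 = sigmoid(b + pv1 @ W)
    # Gradient estimates <.>_data - <.>_model, averaged over the mini-batch.
    n = v0.shape[0]
    W += eps * (v0.T @ ph0 - pv1.T @ ph1) / n
    a += eps * (v0 - pv1).mean(axis=0)
    b += eps * (ph0 - ph1).mean(axis=0)
    return W, a, b
```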

2.2. Deep Belief Networks

We now construct the DBN using RBMs as building blocks. By convention, and in accordance with figure 1, we refer to the data input as being at the ‘bottom’ of the network, with the level of abstraction increasing as we move ‘up’ the network.

The bottom RBM has the normalised emission spectrum as its input (i.e. visible) units. Here a binary representation of the observed data is not ideal and we replace the binary visible units with Gaussian units, which better represent the continuous values found in spectroscopic data. The hidden units and all higher DBN layers remain binary. For the Gaussian RBM layer, the unit activations (Krizhevsky 2009; Wang et al. 2014b) become

p(v_i = v|\mathbf{h}) = \mathcal{N}\!\left(v \,\Big|\, a_i + \sigma_i \sum_j h_j w_{ij},\ \sigma_i^2\right), \qquad p(h_j = 1|\mathbf{v}) = \sigma\Big(b_j + \sum_i \frac{v_i}{\sigma_i} w_{ij}\Big)    (12)

where 𝒩(μ, σ²) denotes the Normal distribution and σ_i the standard deviation of the i-th point of the spectrum. Furthermore, we substitute the energy term (equation 4) with

E(\mathbf{v},\mathbf{h}) = \sum_i \frac{(v_i - a_i)^2}{2\sigma_i^2} - \sum_j b_j h_j - \sum_{i,j} \frac{v_i}{\sigma_i}\, h_j w_{ij}    (13)

We now train the bottom RBM greedily until convergence and take the resulting hidden layer as input to the next RBM up. We repeat this process for three consecutive RBMs. This constitutes the unsupervised training stage, as the DBN learns on unlabelled data.
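
For illustration, the greedy layer-wise pre-training can be sketched with scikit-learn's off-the-shelf BernoulliRBM class. This is not RobERt's implementation: BernoulliRBM only supports binary (or [0, 1]-valued) visible units rather than the Gaussian bottom layer described above, and the learning rate shown is a placeholder (the layer sizes and iteration count follow section 3).

```python
from sklearn.neural_network import BernoulliRBM

def pretrain_dbn(spectra, layer_sizes=(500, 200, 50)):
    """Greedy, unsupervised pre-training of the stacked RBMs.
    `spectra` is an (n_spectra, 900) array of normalised spectra
    rescaled to [0, 1] for this binary-unit sketch."""
    rbms, layer_input = [], spectra
    for n_hidden in layer_sizes:
        # Each RBM is trained on the hidden activations of the layer below.
        rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.01,
                           n_iter=100, random_state=0)
        layer_input = rbm.fit_transform(layer_input)   # P(h=1|v), fed upwards
        rbms.append(rbm)
    return rbms
```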

Once the RBM layers are trained, we form a Multi-Layer Perceptron (MLP) by attaching a logistic regression layer to the top layer of the network. This links the topmost hidden units to the data labels (e.g. H₂O, CH₄, CO, etc.). We now learn the whole network using stochastic gradient descent by presenting a spectrum of a given composition and its corresponding data label to the network. This supervised learning has two purposes: 1) it fine-tunes the network, and 2) it associates labels with the network. More specifically, in the supervised learning stage, the RBM layers are fixed and act as a feed-forward network. The logistic regression layer then learns the mapping between the high-level representations of the upper RBM layer and the associated data labels. We refer the interested reader to the standard literature (e.g. Bishop 2006; Hilbe 2009) for an in-depth treatment of logistic regression.

We learn the MLP using mini-batch stochastic gradient descent (Li et al. 2014). The mini-batch size determines the number of training examples looked at simultaneously before updating the DBN weights. Looking at ‘chunks’ of data simultaneously allows us to vectorise the gradient computation and achieve higher convergence speeds than standard stochastic gradient descent. We did not require any regularisation during supervised learning, but employ an ‘early stopping’ criterion to avoid overfitting (see section 3.2). It is worth mentioning that ‘dropout’ algorithms (Hinton et al. 2012; Srivastava et al. 2014) have recently been shown to reach lower reconstruction errors than conventional supervised learning (with or without regularisation) and are found to be highly robust against overfitting, hence avoiding the need for early stopping criteria.
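
A minimal sketch of this supervised stage is given below, under the assumption (stated above) that the RBM layers are frozen and only the top logistic-regression layer is trained. The mini-batch size of 100 and the 20-epoch early-stopping patience follow section 3.2; the learning rate and all function names are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_logistic_layer(feat_tr, y_tr, feat_te, y_te, n_labels,
                       eps=0.1, batch_size=100, patience=20):
    """Mini-batch SGD for the top logistic-regression layer with early stopping.
    feat_* are the (fixed) top-RBM activations, y_* are integer label indices."""
    W = np.zeros((feat_tr.shape[1], n_labels))
    b = np.zeros(n_labels)
    best_err, best_Wb, stall = np.inf, (W.copy(), b.copy()), 0
    while stall < patience:
        order = rng.permutation(len(feat_tr))
        for start in range(0, len(order), batch_size):
            idx = order[start:start + batch_size]
            X, y = feat_tr[idx], y_tr[idx]
            grad = softmax(X @ W + b)
            grad[np.arange(len(y)), y] -= 1.0      # d(cross-entropy)/d(logits)
            W -= eps * X.T @ grad / len(y)
            b -= eps * grad.mean(axis=0)
        # Classification error on the withheld test spectra after each epoch.
        err = np.mean(np.argmax(feat_te @ W + b, axis=1) != y_te)
        if err < best_err:
            best_err, best_Wb, stall = err, (W.copy(), b.copy()), 0
        else:
            stall += 1
    return best_Wb   # parameters from the lowest-error epoch
```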

3. Implementation and training

RobERt is written in Python using the SciPy optimisation toolbox and the Theano library (https://pypi.python.org/pypi/Theano). Theano is a very powerful graph and symbolic maths toolbox with efficient parallelisation (through the BLAS library) and native GPU support. The training data were generated with Tau-REx, run with OpenMP parallelisation to produce the required grid of emission forward models.

3.1. Training data set

No. planets: 5
Planets: WASP-12b, HD189733b, HD209458b, HAT-P-11b, GJ1214b
No. molecules: 10
Molecules: H₂O, HCN, CH₄, CO₂, CO, NH₃, NO₂, SiO, TiO, VO
Abundance range:
Compositions / planet: 5
TP-profiles / planet: 7
Wavelength range: 1 - 20 μm
Resolution: 300 (constant)
Points / spectrum: 900
Spectra / planet: 17150
Spectra total: 85750
Table 1. Summary of the training set.

In the unsupervised training stage, RobERt requires a large set of example emission spectra to train with. Such a training set should include a broad range of planet types, atmospheric trace gases and TP-profiles. We considered a total of five planets, ranging from warm super-Earths (GJ1214b, Charbonneau et al. 2009) to strongly insolated hot Jupiters (e.g. WASP-12b, Hebb et al. 2009). We simulated 17150 emission spectra per planet, i.e. 85750 spectra in total. Each spectrum contains only one trace gas species at a time; no mixtures are considered in the training set. Table 1 summarises the training set parameters. The creation of the training set took 3 hours on 96 Intel Xeon E5-2697v2 CPUs.

The data set was then randomly divided into 80% training data and 20% test data. RobERt is only trained on the training data, with randomly selected spectra from the test data presented to RobERt at every iteration of the supervised learning to test its prediction accuracy.

3.1.1 Normalisation

Figure 2.— Top: example spectrum of a hot Jupiter (water only) generated by Tau-REx. Bottom: the normalised emission spectrum used for training RobERt.

Before training RobERt on the catalogue of input spectra, we first normalise the input to a zero mean and unit variance grid. Though this is not strictly necessary, the normalisation significantly improves convergence properties of DBNs. The normalisation consists of three steps:

1) We normalise the emission spectrum with the Planckian of the planet’s host star to obtain the planetary intensity

I_\lambda = \left(\frac{F_{\mathrm{p}}}{F_*}\right)_{\!\lambda} B_\lambda(T_*)    (14)

where (F_p/F_*)_λ is the column vector of the planetary/stellar flux ratio and B_λ(T_*) is the Planck function at the stellar temperature. This normalisation step ensures that the training process is not biased by the underlying stellar black body function.

2) We now convert I_λ into brightness temperatures using

T_{\mathrm{B}}(\lambda) = \frac{hc}{\lambda k_{\mathrm{B}}} \left[ \ln\!\left(1 + \frac{2hc^2}{\lambda^5 I_\lambda}\right) \right]^{-1}    (15)

where k_B is the Boltzmann constant, h the Planck constant, c the speed of light and λ the wavelength.

3) Finally, we subtract the mean value of T_B and normalise to unit variance to give the normalised spectrum

\hat{T}_{\mathrm{B}} = \frac{T_{\mathrm{B}} - \langle T_{\mathrm{B}} \rangle}{\sigma(T_{\mathrm{B}})}    (16)

Figure 2 shows an example input spectrum of H₂O before normalisation (top, blue) and after normalisation (bottom, red).
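
As an illustration, the three normalisation steps could be implemented along the following lines. This is a sketch rather than RobERt's actual pre-processing code: it assumes the observed spectrum is already the planet/star flux ratio on a common wavelength grid, it ignores any geometric (R_p/R_*)² scaling, and the function and variable names are placeholders.

```python
import numpy as np

h_pl = 6.62607015e-34   # Planck constant [J s]
c_l  = 2.99792458e8     # speed of light [m s^-1]
k_B  = 1.380649e-23     # Boltzmann constant [J K^-1]

def planck(wl, T):
    """Planck spectral radiance B_lambda(T) for wavelengths wl in metres."""
    return (2.0 * h_pl * c_l**2 / wl**5) / np.expm1(h_pl * c_l / (wl * k_B * T))

def normalise_spectrum(wl, flux_ratio, T_star):
    """The three normalisation steps of section 3.1.1 (illustrative sketch).
    wl         -- wavelength grid [m]
    flux_ratio -- planet/star flux ratio per wavelength bin
    T_star     -- stellar effective temperature [K]"""
    # 1) remove the stellar black body: recover the planetary intensity, eq. (14)
    I_p = flux_ratio * planck(wl, T_star)
    # 2) convert intensities to brightness temperatures, eq. (15)
    T_B = (h_pl * c_l / (wl * k_B)) / np.log1p(2.0 * h_pl * c_l**2 / (wl**5 * I_p))
    # 3) rescale to zero mean and unit variance, eq. (16)
    return (T_B - T_B.mean()) / T_B.std()
```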

3.2. Training

RobERt is now set up to contain three RBM levels of 500, 200 and 50 neurons from bottom to top respectively, with the input data vector containing 900 spectral points. As discussed in section 6, we find that slightly smaller networks have similar performance levels but larger networks are too redundant.

The unsupervised training stage ran over 100 iterations per RBM level at a fixed learning rate. We find that for all layers, convergence is typically reached between the 80th and 90th iteration. During the supervised training stage, we adopt a mini-batch size of typically 100 training spectra. The reconstruction error of the DBN given the test data is computed at each training epoch. Convergence of the supervised learning is reached when no improvement in the reconstruction error is obtained over a maximum of 20 epochs; the iteration with the lowest reconstruction error is then taken as the final result. This early stopping prevents significant overfitting during the supervised training stage.

The full training process takes 1.5 h on six CPU cores, or 10 minutes using an NVIDIA Tesla K40 card (2880 CUDA cores). RobERt completes the supervised training stage with a test-data recognition accuracy of 99.7%.

4. Recognition of emission spectra

Figure 3.— Left: example normalised emission spectra at S/N = 20. From top to bottom the spectral compositions are: 1) H₂O; 2) CH₄; 3) H₂O & CH₄; 4) H₂O, CO₂ & TiO. Right: the corresponding probability of each molecule being present in the spectrum to the left. All probabilities are normalised for clarity and colour coded to represent four different S/N values of the input spectrum: 20 (black), 10 (brown), 5 (orange), 2 (yellow).

One major advantage of DBNs is their ability to generalise patterns over large ranges of parameter space, both seen and previously unseen by the network. To demonstrate this behaviour, we generated emission spectra of the hot Jupiter WASP-76b (West et al. 2013), unknown to RobERt, for a variety of trace gas molecules, mixtures and signal-to-noise (S/N) ratios. The spectral recognition process proceeds in three stages:

1) The observed spectrum is normalised following the steps described in section 3.1.1.

2) The mean of each spectral bin is randomly perturbed within the measurement error bar, resulting in a ‘noisy’ spectrum.

3) The visible units of the DBN are set to the normalised, noisy spectrum and the DBN is run in the forward direction to obtain the label probabilities.

Steps 2 & 3 are repeated 100 times and the label probabilities recorded, summed and normalised.
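
A minimal sketch of this Monte-Carlo recognition loop is shown below. Here `dbn_forward` is a placeholder for the trained DBN run in the forward direction, and drawing Gaussian perturbations scaled by the error bars is an assumption about how step 2 is realised.

```python
import numpy as np

rng = np.random.default_rng(0)

def recognise(spectrum, errors, dbn_forward, n_draws=100):
    """Steps 2 and 3, repeated n_draws times.
    spectrum    -- normalised emission spectrum (900 points)
    errors      -- per-bin 1-sigma uncertainties on the same normalised scale
    dbn_forward -- callable returning label probabilities for one input vector"""
    prob_sum = None
    for _ in range(n_draws):
        # Step 2: perturb each bin within its measurement error bar.
        noisy = spectrum + rng.normal(0.0, errors)
        # Step 3: forward pass through the DBN gives the label probabilities.
        p = dbn_forward(noisy)
        prob_sum = p if prob_sum is None else prob_sum + p
    # Sum and normalise the recorded label probabilities.
    return prob_sum / prob_sum.sum()
```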

Figure 3 shows four normalised example spectra and the results of RobERt’s identification for S/N ratios of 20, 10, 5 and 2. Spectra containing only one main trace gas component are recovered 99% of the time, across all planet types considered. This remains true for strongly saturated spectra with high molecular abundances and for very low S/N values. Surprisingly, even S/N ratios of 0.5 - 1.0 allow RobERt to recognise the dominant trace gas component with good accuracy. RobERt was trained on individual trace gases only, i.e. pure water spectra or pure methane spectra, but not on mixtures of trace gases. This is mainly due to the very large number of permutations required to represent mixtures of molecules accurately over varying abundances and TP-profiles in the training data. It is hence encouraging to see that RobERt understands mixtures well when presented with them. Figure 3 shows two examples of spectra containing H₂O + CH₄ and H₂O, CO₂ & TiO. In the three-molecule example, RobERt identifies the main constituents, water and carbon dioxide, with high probability, and the third constituent is attributed to either TiO, VO, CO or NO₂, with TiO having the highest probability of these candidates. In an automated retrieval context, the retrieval code would run a first pass with CO₂, H₂O, TiO, VO, CO & NO₂ as input and proceed to nested model down-selection in subsequent retrieval runs (Waldmann et al. 2015a).

4.1. Restricted wavelength ranges and resolution mismatches

While it would be more appropriate to train the DBN with instrument-specific resolutions and wavelength ranges, e.g. for HST/WFC3, JWST/MIRI & JWST/NIRSpec, it is an intriguing exercise in itself to explore the effect of incomplete wavelength ranges on RobERt’s ability to recognise molecular species. As stated previously, in this example RobERt was trained on a wavelength grid ranging from 1 to 20 μm with a constant resolution of 300. Figure 4 shows the normalised water-only emission spectrum for the HST/WFC3 G141 grism wavelength range (yellow spectrum). The spectrum outside the wavelength range considered is padded with zeros on both sides. RobERt is clearly able to identify water as the dominant trace gas. We now consider increasingly restrictive wavelength ranges until the clear water detection breaks down at the 1.26 - 1.53 μm bandpass, where RobERt attributes nearly equal probabilities to H₂O, CH₄ and NO₂. Whilst initially surprising, upon closer inspection all three molecular species have strong overlapping features in this wavelength range (the blue and black lines in figure 4 show the normalised spectra of NO₂ and CH₄ respectively) and a ‘visual’ separation of the molecules becomes very difficult.

We now investigate the effect of resolution mismatches between the observed data and the resolution with which the DBN was trained. As expected, downsampling from a higher resolution to the DBN resolution does not impair recognition efficiency. The effect of upsampling, i.e. interpolating the observed spectrum to the resolution of the DBN, is more case dependent. We find no degradation of the recognition efficiency when upsampling broadly absorbing species such as H₂O or CH₄ from resolutions as low as R = 30 to the native resolution of the DBN. Here, the interpolation simply adds noise to the spectrum, against which the DBN is very robust. Generally speaking, all molecules can be identified unless their features are strongly undersampled. Trace gases with narrower emission/absorption bands (e.g. CO, NO₂) are hence more strongly affected. For the molecular mixtures considered here, we find a conservative lower limit of R ≈ 25 (constant with λ) below which feature detection becomes difficult. It should be noted that a strongly undersampled spectrum will always be difficult to interpret, independently of the methodology used.
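
One simple way to realise such a wavelength-range and resolution match is sketched below; this is not RobERt's pre-processing, but it illustrates the zero-padding and interpolation onto the DBN's native grid described above (a more careful treatment would bin, rather than interpolate, when downsampling).

```python
import numpy as np

def match_to_dbn_grid(wl_obs, spec_obs, wl_dbn):
    """Interpolate an observed (normalised) spectrum onto the DBN's native
    wavelength grid, zero-padding outside the observed band.
    Assumes wl_obs is sorted in ascending order."""
    return np.interp(wl_dbn, wl_obs, spec_obs, left=0.0, right=0.0)
```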

Figure 4.— Similar to figure 3; left shows the input spectrum at S/N = 20 for a normalised water spectrum in the HST/WFC3 G141 grism passband (yellow, 1.1 - 1.8 μm). Darker colour shading represents progressively smaller passbands for which the recognition was performed. Blue dashed and black dotted lines show normalised spectra of NO₂ and CH₄ respectively. Right shows the corresponding detection probability per molecule for the varying wavelength ranges. Water is readily recognised to be the main trace gas component except for the smallest bandpass considered, where H₂O, CH₄ and NO₂ are assigned roughly equal probabilities. As can be seen in the left plot, the H₂O, CH₄ and NO₂ normalised spectra all have very similar features when only the most restricted (darkest shaded) spectral range is considered.

5. Dreaming of atmospheres

Figure 5.— Spectral reconstruction (or ‘dreaming’) of three molecules: H₂O, CO₂ & TiO. The top three panels show neuron activations for the bottom (L1) to top (L3) Restricted Boltzmann Machine layers. The bottom two rows show normalised H₂O, CO₂ & TiO spectra reconstructed by the neural network and real data examples for comparison. The similarities between ‘dreamed’ and real spectral features are striking. This indicates a good representation of molecular features in the neural network.

When RobERt is used for recognition purposes, we set the visible units to the values of the input spectrum and propagate the network forward (i.e. upwards) to obtain a classification label. Another approach to qualitatively check the convergence quality of the DBN is to reverse the network and propagate the network weights backwards (i.e. downwards) starting from a label. In other words, we activate, say, the H₂O label and RobERt will return what it ‘thinks’ are the defining features of a water spectrum. This backwards propagation is commonly referred to as ‘dreaming’ in the machine learning literature. Figure 5 shows dreams of three molecules: H₂O, CO₂ and TiO. We compare these dreams with real, normalised spectra underneath. The likeness of the dreamed spectra to real data is striking. L1, L2 and L3 represent the neural activations of the bottom, middle and top RBMs respectively. We find the neural activations in the dream state to be a useful indicator of the sparsity (i.e. the number of units set to, or close to, zero) of the neural network, and find networks with roughly 10% average sparsity to yield the most accurate spectral reconstructions.
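
Conceptually, the backward pass can be sketched as follows. This is a simplification under several assumptions: the weight and bias names are placeholders rather than RobERt's internals, the label is mapped back through the transposed logistic-regression weights, and the Gaussian bottom layer is treated like the binary layers (its true mean activation would omit the final sigmoid).

```python
import numpy as np

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def dream(label_index, logreg_W, rbm_weights, rbm_vis_biases):
    """Run the network 'backwards' from a single activated label down to the
    visible layer. rbm_weights / rbm_vis_biases are ordered bottom -> top;
    logreg_W has shape (n_top_hidden, n_labels)."""
    # Activate one label (e.g. the H2O unit) and nothing else.
    label = np.zeros(logreg_W.shape[1])
    label[label_index] = 1.0
    # Map the label back onto the top hidden layer via the logistic weights.
    activation = sigmoid(logreg_W @ label)
    # Propagate downwards through each RBM using the transposed weights.
    for W, a in zip(rbm_weights[::-1], rbm_vis_biases[::-1]):
        activation = sigmoid(a + activation @ W.T)
    return activation   # the 'dreamed' spectrum at the visible layer
```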

6. Discussion

The size of the DBN is an important factor to consider: RobERt consists of three RBMs of 500, 200 and 50 neurons from bottom to top respectively. We find a three-layer DBN to work best, but also find that networks with too many neurons per layer, particularly in the upper levels, lead to noisy reconstructions, low maximum likelihoods and poorer recognition performance. We attribute this effect to a high level of redundancy in the network, which introduces noise. As described above, by inspecting the neural activations during the dream state of RobERt, we can measure the sparsity of individual layers for individual states (i.e. molecule activations). Tests have shown that roughly 10% sparsity, averaged across activation states, produces the most robust and highest-S/N networks. Smaller, simpler networks run the risk of not being able to differentiate between molecules correctly.

As stated previously, RobERt has only been trained on spectra containing one trace gas at a time. Despite this obvious limitation, we show in section 4 that RobERt is indeed able to identify mixtures of molecules, though caveats to this capability should be mentioned. Similar to inspecting a spectrum by eye, RobERt is able to identify mixtures if the trace gas signatures are very different from one another (e.g. H₂O and CO₂, figure 5) or if sufficient wavelength coverage is provided (e.g. CH₄ and H₂O, figure 3). The DBN struggles whenever too little wavelength coverage is available (e.g. figure 4) or the secondary trace gas is an order of magnitude less abundant than the primary absorber/emitter, i.e. secondary signatures imprint themselves as noise on the main absorber/emitter.

Though some of these limitations are fundamental (i.e. insufficient wavelength coverage, too low S/N, etc.), future work will investigate the use of convolutional deep belief networks (e.g. Lee et al. 2011a) to boost recognition accuracy by learning the localised correlations in the observed spectra. Additionally, an updated supervised learning cost-function is imaginable where not the identification of a single trace gas is rewarded but instead a ‘best ranking’ of groups of molecules.

As a pre-selector for the Tau-REx retrieval suite, RobERt will provide rankings of the most likely molecules to be considered in the quantitative retrieval. This is an iterative process, with the retrieval models increasing in complexity from the simplest atmospheres (containing only the few most likely molecular absorbers/emitters detected by RobERt) to more complex models (containing less likely opacities). The Bayes factor is the measure of convergence here (Waldmann et al. 2015a). In future implementations of RobERt, online learning will become important after its initial training phase is complete. With each new data set, RobERt will be able to update and improve its DBN, taking the Tau-REx results as a labelled training set. Such an application is particularly suited to being part of a larger data reduction/analysis pipeline for future large-scale ground- and space-based surveys.

7. Conclusion

In this paper we present the use of deep belief networks (DBNs) in the identification and classification of exoplanetary emission spectra. We have shown that DBNs are well suited to identifying molecular signatures in extrasolar planet spectra. They are very robust to low S/N ratios and are able to identify trace gases even when wavelength ranges are strongly restricted compared to the initial training setup. This property is important, as training a DBN is relatively computationally intensive and hence one would ideally want the trained DBN to be as universally applicable as possible. Their ability to abstract and generalise non-linear systems very effectively makes DBNs an ideal tool for the qualitative ‘pre-selection’ of parameter spaces for spectral retrieval applications.

Acknowledgements

IPW thanks G. Tinetti, R. Varley, M. Rocchetto, A. Tsiaras & G. Morello for useful discussions. This work was supported by the ERC project 617119 (ExoLights).

References

  • Agarwal et al. (2012) Agarwal, S., Abdalla, F. B., Feldman, H. A., Lahav, O., & Thomas, S. A. 2012, Monthly Notices of the Royal Astronomical Society, 424, 1409
  • Agarwal et al. (2014) —. 2014, Monthly Notices of the Royal Astronomical Society, 439, 2102
  • Bengio (2009) Bengio, Y. 2009, Learning Deep Architectures for AI, Vol. 2 (Now Publishers Inc)
  • Bengio (2012) —. 2012, in Neural Networks: Tricks of the Trade (Berlin, Heidelberg: Springer Berlin Heidelberg), 437–478
  • Bengio et al. (2007a) Bengio, Y., Lamblin, P., Popovici, D., & Larochelle, H. 2007a, Advances in Neural Information Processing Systems, 19, 153
  • Bengio et al. (2007b) —. 2007b, in Advances in Neural Information Processing Systems 19, ed. B. Schölkopf, J. Platt, & T. Hoffman (MIT Press), 153–160
  • Benneke & Seager (2013) Benneke, B., & Seager, S. 2013, APJ, 778, 153
  • Bianchini & Scarselli (2014) Bianchini, M., & Scarselli, F. 2014, IEEE Transactions on Neural Networks and Learning Systems, 25, 1553
  • Bishop (2006) Bishop, C. M. 2006, Pattern Recognition and Machine Learning (Springer Verlag)
  • Charbonneau et al. (2009) Charbonneau, D., Berta, Z. K., Irwin, J., et al. 2009, Nature, 462, 891
  • Collister & Lahav (2004) Collister, A. A., & Lahav, O. 2004, PASP, 116, 345
  • Davison (2008) Davison, A. C. 2008, Statistical Models (Cambridge University Press)
  • Dieleman et al. (2015) Dieleman, S., Willett, K. W., & Dambre, J. 2015, MNRAS, 450, 1441
  • du Buisson et al. (2015) du Buisson, L., Sivanandam, N., Bassett, B. A., & Smith, M. 2015, MNRAS, 454, 2026
  • Ellison et al. (2015) Ellison, S. L., Teimoorinia, H., Rosario, D. J., & Mendel, J. T. 2015, MNRAS, 455, 370
  • Fischer & Igel (2014) Fischer, A., & Igel, C. 2014, Pattern Recognition, 47, 25
  • Freund & Haussler (1992) Freund, Y., & Haussler, D. 1992, Advances in Neural Information Processing Systems, 4, 912
  • Griffith (2014) Griffith, C. A. 2014, Philosophical Transactions of the Royal Society A: Mathematical, 372, 30086
  • Head-Gordon & Stillinger (1993) Head-Gordon, T., & Stillinger, F. H. 1993, Physical Review E, 48, 1502
  • Hebb et al. (2009) Hebb, L., Collier-Cameron, A., Loeillet, B., et al. 2009, APJ, 693, 1920
  • Hilbe (2009) Hilbe, J. H. 2009, Logistic Regression Models (Chapman & Hall)
  • Hinton (2002) Hinton, G. E. 2002, Neural Computation, 14, 1771
  • Hinton (2006) —. 2006, Science, 313, 504
  • Hinton (2007) —. 2007, Trends in cognitive sciences, 11, 428
  • Hinton (2012) —. 2012, in Neural Networks: Tricks of the Trade (Berlin, Heidelberg: Springer Berlin Heidelberg), 599–619
  • Hinton et al. (2012) Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. R. 2012, arXiv preprint arXiv:1207.0580
  • Huertas-Company et al. (2015) Huertas-Company, M., Gravet, R., Cabrera-Vives, G., et al. 2015, ApJS, 221, 8
  • Hung et al. (2005) Hung, J. C., Wang, C.-S., Yang, C.-Y., Chiu, M.-S., & Yee, G. 2005, in 19th International Conference on Advanced Information Networking and Applications (AINA’05) Volume 1 (AINA papers) (IEEE), 157–162
  • Jaitly & Hinton (2011) Jaitly, N., & Hinton, G. 2011, in ICASSP 2011 - 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE), 5884–5887
  • Karpenka et al. (2013) Karpenka, N. V., Feroz, F., & Hobson, M. P. 2013, MNRAS, 429, 1278
  • Krizhevsky (2009) Krizhevsky, A. 2009, Learning multiple layers of features from tiny images, Tech. rep., University of Toronto
  • Krizhevsky et al. (2012) Krizhevsky, A., Sutskever, I., & Hinton, G. E. 2012, Advances in Neural Information Processing Systems, 25, 1097
  • Le Roux & Bengio (2008) Le Roux, N., & Bengio, Y. 2008, Neural Computation, 20, 1631
  • Le Roux & Bengio (2010) —. 2010, Neural Computation, 22, 2192
  • Lee et al. (2011a) Lee, H., Grosse, R., Ranganath, R., & Ng, A. Y. 2011a, Communications of the ACM, 54, 95
  • Lee et al. (2011b) Lee, J. M., Fletcher, L. N., & Irwin, P. G. J. 2011b, Monthly Notices of the Royal Astronomical Society, 420, 170
  • Li et al. (2014) Li, M., Zhang, T., Chen, Y., & Smola, A. J. 2014, in Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’14 (New York, NY, USA: ACM), 661–670
  • Line et al. (2012) Line, M. R., Zhang, X., Vasisht, G., et al. 2012, The Astrophysical Journal, 749, 93
  • Liu et al. (2014) Liu, P., Han, S., Meng, Z., & Tong, Y. 2014, in 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE), 1805–1812
  • Madhusudhan & Seager (2009) Madhusudhan, N., & Seager, S. 2009, The Astrophysical Journal, 707, 24
  • Montavon et al. (2012) Montavon, G., Orr, G., & Müller, K.-R. 2012, Lecture Notes in Computer Science, Vol. 7700, Neural Networks: Tricks of the Trade (Berlin, Heidelberg: Springer)
  • Plebe (2007) Plebe, A. 2007, Neurocomputing, 70, 2060
  • Pradeep & Kumaraswamy (2014) Pradeep, R., & Kumaraswamy, R. 2014, in 2014 National Conference on Communication, Signal Processing and Networking (NCCSN) (IEEE), 1–5
  • Press et al. (2007) Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. 2007, Numerical Recipes 3rd Edition: The Art of Scientific Computing, 3rd edn. (New York, NY, USA: Cambridge University Press)
  • Reis et al. (2012) Reis, R. R. R., Soares-Santos, M., Annis, J., et al. 2012, The Astrophysical Journal, 747, 59
  • Shen et al. (2015) Shen, S., Li, X., & Zhu, S. 2015, Electronics Letters, 51, 905
  • Spencer et al. (2015) Spencer, M., Eickholt, J., & Cheng, J. 2015, IEEE/ACM Transactions on Computational Biology and Bioinformatics, 12, 103
  • Srivastava et al. (2014) Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. 2014, Journal of Machine Learning Research, 15, 1929
  • Waldmann et al. (2015b) Waldmann, I. P., Rocchetto, M., Tinetti, G., et al. 2015b, The Astrophysical Journal, 813, 13
  • Waldmann et al. (2015a) Waldmann, I. P., Tinetti, G., Rocchetto, M., et al. 2015a, The Astrophysical Journal, 802, 107
  • Wang et al. (2014a) Wang, N., Melchior, J., & Wiskott, L. 2014a, arXiv:1401.5900
  • Wang et al. (2014b) —. 2014b, CoRR, abs/1401.5900
  • West et al. (2013) West, R. G., Almenara, J. M., Anderson, D. R., et al. 2013, arXiv:1310.5607
  • Wu & McLarty (2012) Wu, C. H., & McLarty, J. W. 2012, Neural Networks and Genome Informatics (Elsevier)
  • Zhang & Wu (2013) Zhang, X.-L., & Wu, J. 2013, IEEE Transactions on Audio, Speech, and Language Processing, 21, 697