Cosmological N-body simulations: a challenge for scalable generative models


Abstract

Deep generative models, such as Generative Adversarial Networks (GANs) or Variational Autoencoders (VAEs), have been demonstrated to produce images of high visual quality. However, the existing hardware on which these models are trained severely limits the size of the images that can be generated. The rapid growth of high-dimensional data in many fields of science therefore poses a significant challenge for generative models. In cosmology, the large-scale, three-dimensional matter distribution, modeled with N-body simulations, plays a crucial role in understanding the evolution of structures in the universe. As these simulations are computationally very expensive, GANs have recently generated interest as a possible method to emulate these datasets, but they have been, so far, mostly limited to two-dimensional data. In this work, we introduce a new benchmark for the generation of three-dimensional N-body simulations, in order to stimulate new ideas in the machine learning community and move closer to the practical use of generative models in cosmology. As a first benchmark result, we propose a scalable GAN approach for training a generator of N-body three-dimensional cubes. Our technique relies on two key building blocks: (i) splitting the generation of the high-dimensional data into smaller parts, and (ii) using a multi-scale approach that efficiently captures global image features that might otherwise be lost in the splitting process. We evaluate the performance of our model for the generation of N-body samples using various statistical measures commonly used in cosmology. Our results show that the proposed model produces samples of high visual quality, although the statistical analysis reveals that capturing rare features in the data poses significant problems for the generative models. We make the data, quality evaluation routines, and the proposed GAN architecture publicly available at https://github.com/nperraud/3DcosmoGAN.

Nathanaël Perraudin (nathanael.perraudin@sdsc.ethz.ch), Ankit Srivastava (ankitsrivastavanit@gmail.com), Aurelien Lucchi (aurelien.lucchi@inf.ethz.ch), Tomasz Kacprzak (tomaszk@phys.ethz.ch), Thomas Hofmann (thomas.hofmann@inf.ethz.ch), Alexandre Réfrégier (alexandre.refregier@phys.ethz.ch)

Keywords: generative models, cosmological simulations, N-body simulations, generative adversarial networks, fast cosmic web simulations, scalable GAN, multi-dimensional images

1 Introduction

The recent advances in the field of deep learning have initiated a new era for generative models. Generative Adversarial Networks (GANs) [20] have become a very popular approach by demonstrating their ability to learn complicated representations and produce high-resolution images [31]. In the field of cosmology, high-resolution simulations of the matter distribution are becoming increasingly important for deepening our understanding of the evolution of structures in the universe [53, 44, 34]. These simulations are made using the N-body technique, which represents the distribution of matter in 3D space by trillions of particles. They are very slow to run and computationally expensive, as they evolve the positions of particles over cosmic time in small time intervals. Generative models have been proposed to emulate this type of data, dramatically accelerating the process of obtaining new simulations once the training is finished [47, 43].

N-body simulations represent the matter in a cosmological volume, typically between 0.1 and 10 Gpc across, as a set of particles, typically between 100 and 2000 per dimension. The initial 3D positions of the particles are typically drawn from a Gaussian random field with a specific power spectrum. Then, the particles are displaced over time according to the laws of gravity, the properties of dark energy, and other physical effects included in the simulations. During this evolution, the field becomes increasingly non-Gaussian and displays characteristic features, such as halos, filaments, sheets, and voids [5, 13].

N-body simulations that consist only of dark matter effectively solve Poisson's equation numerically. This process is computationally expensive, as the forces must be recalculated at short time intervals to retain the precision of the approximation, which leads to frequent updates of the particle positions. The speed of these simulations is a major computational bottleneck for cosmological experiments, such as the Dark Energy Survey1, Euclid2, or LSST3.

Recently, GANs have been proposed for emulating matter distributions in two dimensions [47, 43]. These approaches have been successful in generating data of high visual quality, almost indistinguishable from the real simulations even to experts. Moreover, several summary statistics often used in cosmology, such as power spectra and density histograms, also revealed good levels of performance. Some challenges remain when comparing sets of generated samples: in both works, the properties of sets of generated images did not match exactly, and the covariance matrix of power spectra of the generated maps differed from that of the real maps by the order of 10%.

While these results are encouraging, a significant difficulty remains in scaling these models to generate three-dimensional data, which contains several orders of magnitude more pixels per data instance. We address this problem in this work. We present a publicly available dataset of N-body cubes, consisting of 30 independent instances. Because the dark matter distribution is statistically homogeneous and isotropic, and because the simulations are made using periodic boundary conditions, the data can easily be augmented through shifts, rotations, and flips. The data is in the form of a list of particles with spatial positions (x, y, z). It can be pixelised into 3D histogram cubes, where the matter distribution is represented in density voxels, each voxel containing the count of particles falling into it. If the resolution of the voxel cube is high enough, the particle- and voxel-based representations should be usable interchangeably for most applications. Approaches to generate the matter distribution in the particle-based representation could also be designed; in this work, however, we focus on the voxel-based representation. By publishing the N-body data and the accompanying code, we aim to encourage the development of large-scale generative models capable of handling such data volumes.

We present a benchmark GAN system to generate 3D N-body voxel cubes. Our novel GAN architecture scales to volumes of 256³ voxels. Our proposed solution relies on two key building blocks. First, we split the generation of the high-dimensional data into smaller patches. Instead of assuming that the distribution of each patch is independent of the surrounding context, we model it as a function of the neighboring patches. Although splitting the generation process into patches provides a scalable solution to generate images of arbitrary size, it also limits the field of view of the generator, reducing its ability to learn global image features. The second core idea of our method addresses this problem by relying on a multi-scale approach that efficiently captures global dependencies that might otherwise be lost in the splitting process.

Our results constitute a baseline solution to the challenge. While the obtained statistical accuracy is currently insufficient for a real cosmological use case, we achieve two goals: (i) we demonstrate that the project is tractable by GAN architectures, and (ii) we provide a framework for evaluating the performance of new algorithms in the future.

1.1 Related work

Generative models that produce novel, representative samples from high-dimensional data distributions are becoming increasingly popular in various fields such as image-to-image translation [63] or image in-painting [28], to name a few. There are many different deep learning approaches to generative models. The most popular ones are Variational Auto-Encoders (VAEs) [32], autoregressive models such as PixelCNN [58], and Generative Adversarial Networks (GANs) [20]. Regarding prior work on generating 3D images or volumes, two main types of architectures, in particular GANs, have been proposed. The first type [2, 16] generates 3D point clouds with a 1D convolutional architecture by producing a list of 3D point positions. This type of model does not scale to cases where billions of points are present in a simulation, an important concern given the size of current and future N-body simulations. The second type of approach, including [61, 42], directly uses 3D convolutions to produce a volume. Although the computation and memory cost is independent of the number of particles, it scales with the number of voxels of the desired volume, which grows cubically with the resolution. While recursive models such as PixelCNN [58] can scale to some extent, they are slow to generate samples, as they build the output image pixel-by-pixel in a sequential manner. We take inspiration from PixelCNN to design a patch-by-patch rather than pixel-by-pixel approach, which significantly speeds up the generation of new samples.

As mentioned above, splitting the generation process into patches reduces the ability of the generator to learn global image features. Some partial solutions to this problem can already be found in the literature, such as the Laplacian pyramid GAN [12], which provides a mechanism to learn at different scales for high-quality sample generation, but this approach is not scalable as the sample image size remains limited. Similar techniques are used for super-resolution [36, 35, 60]. Recently, progressive growing of GANs [31] has been proposed to improve the quality of the generated samples and stabilize the training of GANs: the size of the samples produced by the generator is progressively increased by adding layers at the end of the generator and at the beginning of the discriminator. In the same direction, [8, 37] achieved impressive quality in the generation of large images by leveraging better optimization. Problematically, the limits of the hardware on which the model is trained are reached after a certain increase in size, so all of these approaches eventually fail to offer the scalability we are after.

GANs have been proposed for generating matter distributions in 2D. A generative model for the projected matter distribution, also called a mass map, was introduced by [43]. Mass maps are cosmological observables, as they can be reconstructed by techniques such as gravitational lensing [10]. Mass maps arise through integration of the matter density over the radial dimension with a specific, distance-dependent kernel. The generative model presented in [43] achieved very good agreement with the real data on several important summary statistics, including non-Gaussian ones: power spectra, density histograms, and Minkowski functionals [51]. The distributions of these summaries between sets of generated and real data also agreed well. However, the covariance matrices of power spectra within the generated and real sets did not match perfectly, differing by the order of 10%.

A generative model working on 2D slices from N-body simulations was developed by [47]. N-body slices have much more complex features, such as filaments and sheets, as these are not averaged out in projection. Moreover, the dynamic range of pixel values spans several orders of magnitude. The GANs presented by [47] also achieved good performance, but only for larger cosmological volumes of 500 Mpc. Some mismatch in the power spectrum covariance was also observed.

Alternative approaches to emulating cosmological matter distributions using deep learning have recently been proposed. The Deep Displacement Model [23] uses a U-shaped neural network that learns how to modify the positions of the particles from the initial conditions to a given time in the history of the universe.

Generative models have also been proposed for solving other problems in cosmology, such as the generation of galaxies [45], adding baryonic effects to the dark matter distribution [57], the recovery of certain features from noisy astrophysical images [50], deblending galaxy superpositions [46], and improving the resolution of matter distributions [33].

Figure 1: An example N-body simulation at the current cosmological time (redshift z = 0). The density is represented by particle positions in 3D. In this work, the generative models use a representation of this distribution based on a 3D voxel histogram of the particle positions.

2 The N-body data

2.1 Cosmological N-body simulations

The distribution of matter, dark matter, and other particles in the universe at large scales, under the influence of gravity, forms a convoluted network-like structure called the cosmic web [5, 11, 17, 13]. This distribution contains information vital to the study of dark matter, dark energy, and the very laws of gravity [1, 25, 29]. Simulations of various computational cosmological models [54, 44] lead to an understanding of the fundamentals of cosmological measurements [18, 9] and of other properties of the universe [53]. These simulations are done using N-body techniques, which simulate the cosmic web using a set of particles in three-dimensional space and evolve their positions with time. This evolution is governed by the underlying cosmological model and the laws of gravity. The end result of an N-body simulation is the position of billions of particles in space, as depicted in Figure 1. Unfortunately, N-body simulations are extremely resource intensive, requiring days or even weeks of computation [56, 7]. Moreover, a large number of these N-body simulations is needed to obtain good statistical accuracy, which further increases the computational requirements.

This computational bottleneck opens up an opportunity for deep learning and generative models to offer an alternative solution to the problem. Generative models have the potential to learn the underlying data distribution of the N-body simulations from a seed set of N-body samples to train on.

There are multiple techniques for running N-body simulations, which agree well at large scales, but start to diverge at small scales, beyond a certain wavenumber [52]. Moreover, baryonic feedback can also affect the small-scale matter distribution [39, 27, 4], and large uncertainty remains at these scales.

2.2 Data preprocessing

We produce samples of the cosmic web using standard N-body simulation techniques. We used L-PICOLA [26] to create 30 independent simulations. The cosmological model used was ΛCDM, specified by the Hubble constant4, the dark energy density, and the matter density. We used the particle distribution at redshift z = 0. The output of the simulator is a list of 3D particle positions. To obtain the matter distribution, we first convert it to a standard 3D cube using histogram binning. We consider these cubes as the raw data for our challenge; they can be downloaded at https://zenodo.org/record/1464832. The goal is to build a generative model able to produce new 3D histograms. While 30 samples might seem a low number to train a deep neural network, each sample contains a large number of voxels. One can also expand the training data by relying on data augmentation, using various rotations and circular translations as described in Appendix B. Problematically, the dynamic range of this raw data spans several orders of magnitude, and the distribution is skewed towards smaller values, with a very elongated tail towards the larger values. Empirically, we find that this very skewed distribution makes learning a generative model difficult. Therefore, we first transform the data using a logarithm-based function, as described in Appendix A. Note that this transform needs to be inverted before the evaluation procedure.
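For concreteness, the binning step amounts to a multi-dimensional histogram of the particle positions. The following is a minimal sketch, assuming the positions are available as an (N, 3) array in box coordinates; the function and variable names are illustrative and not part of the released pipeline.

```python
import numpy as np

def particles_to_cube(positions, box_size, n_voxels=256):
    """Bin a list of 3D particle positions into a voxel histogram cube.

    `positions` is an (N, 3) array of coordinates in [0, box_size); the returned
    cube holds the particle count per voxel.
    """
    edges = np.linspace(0.0, box_size, n_voxels + 1)
    cube, _ = np.histogramdd(positions, bins=(edges, edges, edges))
    return cube

# Toy example with uniform particles (the real data comes from L-PICOLA outputs).
rng = np.random.default_rng(0)
toy_positions = rng.uniform(0.0, 100.0, size=(10_000, 3))
cube = particles_to_cube(toy_positions, box_size=100.0, n_voxels=64)
print(cube.shape, cube.sum())   # (64, 64, 64) 10000.0
```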

Figure 2: Multiscale approach using multiple intermediary GANs, each learning features at different scales and all trained independently from each other. The same approach can in principle be extended to produce higher-resolution images, potentially adding more intermediary GANs if necessary.

2.3 Evaluation procedure

The evaluation of a GAN is not a simple task [6, 21]. Fortunately, following [47], we can evaluate the quality of the generated samples with three summary statistics commonly used in the field of cosmology.

  1. Mass histogram is the average (normalized) histogram of the pixel values in the image. Note that the pixel value is proportional to the matter density.

  2. Power spectrum density (or PSD) is the amplitude of the Fourier modes as a function of their frequency (the phase is ignored). In practice, a set of frequency bins is defined and the modes of similar frequency are averaged together.

  3. Peak histogram is the distribution of maxima in the density distribution, often called “peak statistics” or “mass function”. Peaks capture the non-Gaussian features present in the cosmic web. This statistic is commonly used on weak lensing data [30, 38]. A peak is defined as a pixel greater than every pixel in its 2-pixel neighbourhood (24 pixels in 2D and 124 in 3D).

Other statistics, such as Minkowski functionals, three-point correlation functions, or phase distributions, could be considered. Nevertheless, we find that the three aforementioned statistics are currently sufficient to compare different generative models.
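As an illustration, the three statistics can be computed on a single 3D cube along the following lines. This is a minimal NumPy/SciPy sketch; the binning choices, the isotropic averaging of the Fourier modes, and the periodic (wrap-around) treatment of the cube edges are assumptions made here, and the released evaluation routines should be used for the actual challenge scores.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def mass_histogram(cube, bins=64):
    # Normalized histogram of the voxel values (proportional to the matter density).
    return np.histogram(cube, bins=bins, density=True)

def power_spectrum(cube, bins=64):
    # Isotropic power spectrum: squared Fourier amplitudes averaged in |k| bins.
    power = np.abs(np.fft.fftn(cube)) ** 2
    k = np.fft.fftfreq(cube.shape[0])               # assumes a cubic input
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k_norm = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    k_edges = np.linspace(0.0, k_norm.max(), bins + 1)
    which = np.digitize(k_norm, k_edges) - 1
    # Bins that contain no Fourier mode yield NaN in this simple sketch.
    psd = np.array([power.ravel()[which == i].mean() for i in range(bins)])
    return psd, k_edges

def peak_histogram(cube, bins=64):
    # Peaks: voxels strictly greater than all 124 voxels of their 5x5x5 neighborhood.
    footprint = np.ones((5, 5, 5), dtype=bool)
    footprint[2, 2, 2] = False                      # exclude the voxel itself
    neighbor_max = maximum_filter(cube, footprint=footprint, mode="wrap")
    peaks = cube[cube > neighbor_max]
    return np.histogram(peaks, bins=bins)
```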

Distance between statistics

We define a score that reflects the agreement of the three aforementioned cosmological measures. Problematically, the scalars forming the three vectors representing them have very different scales, and their metric should reflect the relative error rather than the absolute one. For this reason, we first compute the logarithm (in base 10) of the computed statistics. As the statistics are non-negative but not strictly positive, we add a small value $\epsilon$ before computing the logarithm, i.e., we work with $\log_{10}(s + \epsilon)$ for each statistic $s$. The value of $\epsilon$ is set to the maximum value of the statistic, averaged over all real samples, divided by a large constant.

At this point, the difference of the log statistics of the real and fake samples, $\log_{10}(s_{\mathrm{real}} + \epsilon) - \log_{10}(s_{\mathrm{fake}} + \epsilon)$, captures their relative difference. One could quantify the error using a norm of this quantity, e.g., its $\ell_2$ norm. However, such a distance does not take into account second-order moments. Rather, we take inspiration from the Fréchet Inception Distance [24]. We start by modeling the real and fake log statistics as two multivariate Gaussian distributions. This allows us to compute the Fréchet Distance (FD) between the two distributions [19], which is also the Wasserstein-2 distance. The FD between two Gaussian distributions with means $m_r$, $m_f$ and covariances $C_r$, $C_f$ is given by [15]:

$$ \mathrm{FD}^2\big((m_r, C_r), (m_f, C_f)\big) = \| m_r - m_f \|_2^2 + \mathrm{Tr}\!\left( C_r + C_f - 2 \left( C_r C_f \right)^{1/2} \right) \qquad (1) $$

Note that [15] also proves that the FD is a proper metric. We choose the FD over the Kullback-Leibler (KL) divergence for two reasons: a) the Wasserstein distance remains an appropriate distance when the distributions have non-overlapping support, and b) the KL divergence is computationally more unstable, since the covariance matrices need to be inverted.
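In practice, the computation mirrors that of the Fréchet Inception Distance: fit a Gaussian to the real and to the fake log statistics and evaluate equation (1). The sketch below illustrates this, with the divisor used to set $\epsilon$ left as a placeholder parameter since its exact value is not reproduced here.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(real_stats, fake_stats, divisor=1e5):
    """Frechet distance (eq. 1) between Gaussian fits of two sets of log statistics.

    `real_stats` and `fake_stats` are (n_samples, n_bins) arrays of one summary
    statistic (e.g. the binned PSD of every sample in each set). `divisor` is a
    placeholder for the constant used to define the log offset.
    """
    eps = real_stats.mean(axis=0).max() / divisor     # offset derived from the real set
    log_r = np.log10(real_stats + eps)
    log_f = np.log10(fake_stats + eps)
    mu_r, mu_f = log_r.mean(axis=0), log_f.mean(axis=0)
    cov_r = np.cov(log_r, rowvar=False)
    cov_f = np.cov(log_f, rowvar=False)
    covmean = sqrtm(cov_r @ cov_f)                    # matrix square root of C_r C_f
    if np.iscomplexobj(covmean):                      # drop tiny imaginary parts (numerical noise)
        covmean = covmean.real
    return np.sum((mu_r - mu_f) ** 2) + np.trace(cov_r + cov_f - 2.0 * covmean)
```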

2.4 N-body mass map generation challenge

Using the FD, we define a score for each statistic as

(2)

where the mean vectors and covariance matrices are computed on the log statistics of the real and of the generated sample sets. Along with this manuscript, we release our dataset and our evaluation procedure in the hope that further contributions will improve our solution. All information can be found at https://github.com/nperraud/3DcosmoGAN. We hope that this dataset and evaluation procedure can serve as a tool for the evaluation of GANs in general. In practice, these cosmological statistics are very sensitive. We observed two important properties of this problem. First, a small variation in the generated images still has an impact on the statistics. The statistics can be highly affected by high-density regions of the N-body data, and these regions are also the rarest in the training set. Second, while mode collapse may not directly affect the mean of the statistics, it can affect their second-order moments significantly. We observed that obtaining a good statistical agreement (and hence a high score) is much more difficult than obtaining generated images that are indistinguishable to the human eye, especially in the 2-dimensional case. We found that the problems of high data volume, the large dynamic range of the data, and the strict requirement of good agreement in the statistical properties of real and generated samples pose a significant challenge for generative models.

Interpretation of the scores

We report the scores for the 3D case in Section 4, and the web page hosting the source code also includes baseline scores for the 2D case. We would like to emphasize that these scores are mostly suitable for comparing two generative models applied to the same distribution. It is a priori unclear whether these scores provide a meaningful comparison when considering two generative models trained on different distributions. We therefore refrain from relying on such scores to compare the generative models trained on 2D and 3D data, since the latter gives access to correlations along the third dimension that are absent from the 2D data. Finally, while in theory the score is unbounded and can be arbitrarily large as the FD tends to zero, its empirical evaluation is, in practice, limited by the estimation error of the FD, which depends on the number of samples used to estimate the mean vectors and the covariance matrices, as well as on the moments of the estimated statistics. In the case where the training set contains a large number of samples, one would expect the training score to be indicative of the score of the true distribution. One can see that the estimation error of the first term of (1) depends mostly on the variance of the statistic, while for the second term, it depends on higher-order moments. Hence, the estimated score will reflect the variance of the estimated statistics given the number of samples used. A high score therefore means less variance within the corresponding statistic.

Figure 3: Left: Sequential Generation. The image is generated patch-by-patch. Right: Upscaling GAN conditioned on neighborhood information. Given border patches B1, B2 and B3, and the down-scaled version of the patch, the generator generates the fake patch. The discriminator also receives the border patches, as well as either an original or fake patch from the generator. The principle is similar in 3D, with the difference that border cubes are used instead of border patches.
Figure 4: Details of the upsampling generator. The borders/cubes are encoded using several convolutional layers.

3 Sequential generative approach

We propose a novel approach to efficiently learn a Generative Adversarial Network model (see Section 3.1) for 2D and 3D images of arbitrary size. Our method relies on two building blocks: 1) a multi-scale model that improves the quality of the generated samples, both visually and quantitatively, by learning unique features at different scales (see Section 3.2), and 2) a training strategy that enables learning images of arbitrary size, which we call “conditioning on neighborhoods” (see Section 3.3).

3.1 Generative Adversarial Networks (GANs)

Generative Adversarial Networks (GANs) rely on two competing neural networks that are trained simultaneously: the generator G, which produces new samples, and the discriminator D, which attempts to distinguish them from the real ones. During training, it is the generator's objective to fool the discriminator, while the discriminator resists by learning to accurately discriminate real and fake data. Eventually, if the optimization process is carried out successfully, the generator should improve to the point that its generated samples become indistinguishable from the real ones. In practice, this optimization process is challenging, and numerous variants of the original GAN approach have been proposed, many of them aiming to improve stability, e.g. [48, 22, 3]. In our work, we rely on the improved Wasserstein GAN (WGAN) approach introduced in [22]. The latter optimizes the Wasserstein distance instead of the Jensen-Shannon divergence and penalizes the norm of the gradient of the critic instead of using a hard clipping as in the original WGAN [3]. The resulting objective function is

$$ \min_G \max_D \; \mathbb{E}_{x \sim \mathbb{P}_r}\big[D(x)\big] - \mathbb{E}_{\tilde{x} \sim \mathbb{P}_g}\big[D(\tilde{x})\big] - \lambda \, \mathbb{E}_{\hat{x} \sim \mathbb{P}_{\hat{x}}}\big[\big(\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1\big)^2\big], $$

where $\mathbb{P}_r$ is the data distribution and $\mathbb{P}_g$ is the generator distribution implicitly defined by $\tilde{x} = G(z)$. The latent variable $z$ is sampled from a prior distribution $p(z)$, typically a uniform or a Gaussian distribution. Eventually, $\mathbb{P}_{\hat{x}}$ is defined implicitly by sampling uniformly along straight lines between pairs of points sampled from the true data distribution $\mathbb{P}_r$ and the generator distribution $\mathbb{P}_g$. The weight $\lambda$ is the penalty coefficient.
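As a concrete illustration, a minimal TensorFlow sketch of this objective for 3D volumes is given below. The discriminator is assumed to be a callable model acting on tensors of shape (batch, depth, height, width, channels), and the default penalty weight is only the common choice from [22], not necessarily the value used in our experiments.

```python
import tensorflow as tf

def critic_loss_with_gp(discriminator, real, fake, lambda_gp=10.0):
    """WGAN-GP critic loss for 3D volumes (tensors of shape batch x D x H x W x C)."""
    # Sample points uniformly along straight lines between real and fake samples.
    eps = tf.random.uniform([tf.shape(real)[0], 1, 1, 1, 1], 0.0, 1.0)
    interp = eps * real + (1.0 - eps) * fake
    with tf.GradientTape() as tape:
        tape.watch(interp)
        d_interp = discriminator(interp)
    grads = tape.gradient(d_interp, interp)
    grad_norm = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3, 4]) + 1e-12)
    gradient_penalty = tf.reduce_mean((grad_norm - 1.0) ** 2)
    # The critic maximizes E[D(real)] - E[D(fake)] - lambda * GP,
    # which corresponds to minimizing the loss returned below.
    wasserstein = tf.reduce_mean(discriminator(real)) - tf.reduce_mean(discriminator(fake))
    return -wasserstein + lambda_gp * gradient_penalty
```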

3.2 Multi-scale Model

Our multi-scale approach is inspired by the Laplacian pyramid GAN [12]. We refer to three cube sizes5: the full-resolution cubes of 256³ voxels, their down-scaled versions of 64³ voxels, and further down-scaled versions of 32³ voxels. The multi-scale approach is shown in Figure 2 and uses three different GANs, which we refer to as GAN 1, GAN 2 and GAN 3, all trained independently from each other; they can therefore be trained in parallel. We train GAN 1 to learn the data distribution of the most down-scaled cubes, while GAN 2 and GAN 3 are conditioned on the images produced by GAN 1 and GAN 2, respectively. In our implementation, we take GAN 1 to be a standard Deep Convolutional GAN (DCGAN) that learns to produce down-scaled samples of size 32³. We design GAN 2 and GAN 3 to have the following properties: 1) they produce outputs using a sequential patch-by-patch approach, and 2) the output of each patch is conditioned on the neighboring patches. This procedure allows handling of a high data volume, while preserving the long-range features in the data. Moreover, the different GANs learn salient features at different scales, which contributes to an overall improved quality of the samples produced by the final GAN 3. Further implementation details are provided in Appendix C.

3.3 Conditioning on Neighborhoods

The limited amount of memory available to train a GAN generator makes it impractical to directly produce large image samples. Using current GPUs with 16 GB of RAM and a state-of-the-art network architecture, the maximum sample size we could train directly was far smaller than our target of 256³ voxels. In order to circumvent this limitation, we propose a new approach that produces the full image patch-by-patch, each patch being of a smaller, fixed size. This approach is reminiscent of PixelCNN [58], where 2D images are generated pixel-by-pixel rather than the entire picture being generated at once. Instead of assuming that the distribution of each patch is independent of the surrounding context, we model it as a function of the neighboring patches. The generation proceeds in raster-scan order, which implies that a patch depends on the neighboring patches produced before it. The process illustrated in Figure 3 is for the 2D case with three neighboring patches; the generalization to three dimensions is straightforward as it simply requires seven neighboring 3D patches.
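The sketch below illustrates the 3D raster-scan assembly. The patch size, latent dimension, and the way the already-generated neighborhood is passed to a hypothetical `generate_patch(context, z)` wrapper around the trained generator are illustrative assumptions, and the multi-scale conditioning of the previous section is omitted for brevity.

```python
import numpy as np

def generate_cube(generate_patch, n_patches=8, patch=32, latent_dim=100, rng=None):
    """Assemble an (n_patches * patch)^3 cube patch-by-patch in raster-scan order.

    `generate_patch(context, z)` is a hypothetical wrapper around the trained
    generator: `context` is the (2 * patch)^3 block of already-generated voxels
    around the new patch (zeros at the cube boundary) and `z` is the latent code.
    """
    rng = rng or np.random.default_rng()
    size = n_patches * patch
    cube = np.zeros((size + patch,) * 3)              # zero padding on the "past" side
    for i in range(n_patches):
        for j in range(n_patches):
            for k in range(n_patches):
                zi, zj, zk = i * patch, j * patch, k * patch
                # The 7 neighboring patches generated earlier (plus the empty new slot).
                context = cube[zi:zi + 2 * patch, zj:zj + 2 * patch, zk:zk + 2 * patch]
                z = rng.normal(size=latent_dim)
                cube[zi + patch:zi + 2 * patch,
                     zj + patch:zj + 2 * patch,
                     zk + patch:zk + 2 * patch] = generate_patch(context, z)
    return cube[patch:, patch:, patch:]               # strip the zero padding
```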

Figure 5: Down-scaled sample generation (32³ cubes). Middle slices from real samples and from fake samples produced by GAN 1. Video: https://youtu.be/uLwrF73wX2w
Figure 6: Statistics of the down-scaled 32³ cubes. The fake samples are generated by GAN 1. The power spectrum density is shown in units of h and Mpc, where h = H_0/100 corresponds to the dimensionless Hubble parameter.
Figure 7: Up-scaling a 32³ cube to 64³. Left and right: middle slices from real and fake samples. The fake sample is generated by conditioning GAN 2 on the real down-scaled cube (center). Video: https://youtu.be/IIPoK8slShU
Figure 8: Statistics of the 64³ samples produced by GAN 2. The fake samples are generated by conditioning GAN 2 on the real down-scaled samples. The power spectrum density is shown in units of h and Mpc, where h = H_0/100 corresponds to the dimensionless Hubble parameter.

In the generator, the neighborhood information, i.e. the borders (3 patches in 2D, 7 cubes in 3D), is first encoded using several convolutional layers. It is then concatenated with the latent variable and passed through a fully connected layer before being reshaped into a 3D image. At this point, the down-sampled version of the image is concatenated. After a few extra convolutional layers, the generator produces the lower-right patch, with the aim of making it indistinguishable from a patch of the real data. The generator architecture is detailed in Figure 4. In the case where no neighborhood information is available, such as at the boundary of a cube, we pad with zeros. The discriminator is given images containing four patches, where the lower-right patch is either real data or fake data produced by the generator. The generator only produces patches of a fixed size, irrespective of the size of the original image; this way, the method can scale to any image size, which is a great advantage. The discriminator, however, only has access to a limited part of the image and ignores long-range dependencies. This issue is handled by the multi-scale approach described in the previous section. We also tried a model conditioned only on the neighborhoods, as detailed in Section 4.3; it ended up performing significantly worse than the multi-scale approach.
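A minimal Keras sketch of such a conditional generator is shown below. The patch size, channel counts, and the stacking of the seven border cubes as input channels are illustrative assumptions rather than the exact architecture used in our experiments, which is listed in the appendix tables.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_patch_generator(patch=32, latent_dim=100):
    """Conditional patch generator: encode the borders, concatenate the latent code,
    inject the down-scaled patch, and upsample to the output patch (illustrative sizes)."""
    borders = tf.keras.Input(shape=(patch, patch, patch, 7))         # 7 neighbor cubes as channels
    latent = tf.keras.Input(shape=(latent_dim,))
    downscaled = tf.keras.Input(shape=(patch // 2, patch // 2, patch // 2, 1))

    # Encode the neighborhood with strided 3D convolutions.
    x = layers.Conv3D(16, 4, strides=2, padding="same")(borders)
    x = layers.LeakyReLU()(x)
    x = layers.Conv3D(32, 4, strides=2, padding="same")(x)
    x = layers.LeakyReLU()(x)
    x = layers.Conv3D(32, 4, strides=2, padding="same")(x)
    x = layers.LeakyReLU()(x)
    x = layers.Flatten()(x)

    # Concatenate with the latent code, map to a small volume, and reshape.
    h = layers.Concatenate()([x, latent])
    h = layers.Dense((patch // 4) ** 3 * 16)(h)
    h = layers.LeakyReLU()(h)
    h = layers.Reshape((patch // 4, patch // 4, patch // 4, 16))(h)

    # Upsample, concatenate the down-scaled patch, and produce the output patch.
    h = layers.Conv3DTranspose(16, 4, strides=2, padding="same")(h)
    h = layers.LeakyReLU()(h)
    h = layers.Concatenate()([h, downscaled])
    h = layers.Conv3DTranspose(8, 4, strides=2, padding="same")(h)
    h = layers.LeakyReLU()(h)
    out = layers.Conv3D(1, 3, padding="same")(h)
    return tf.keras.Model([borders, latent, downscaled], out)
```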

Figure 9: Up-scaling a 64³ cube to 256³. Left and right: middle slices from real and fake samples. The GAN 3 that generates the fake sample is conditioned on the real image down-scaled to 64³ (center). Video: https://youtu.be/guUYP8ZOoVU
Figure 10: Statistics for GAN 3, which produces fake 256³ cubes. The fake samples are generated by conditioning GAN 3 on the real cubes down-scaled to 64³. The power spectrum density is shown in units of h and Mpc, where h = H_0/100 corresponds to the dimensionless Hubble parameter.

4 Experimental results

Our method relies on a recursive procedure that progressively builds a 3D cube, starting from a low resolution and upsampling it at every step. We detail and analyze each step separately in order to understand its impact on the final performance of the full model. We also compare the results of our multi-scale approach to a simpler uni-scale model. Additional details regarding the network architectures and various implementation choices are available in Appendix C.

4.1 Scale by scale analysis of the pipeline

In the following, we describe our model, which relies on three different GANs, namely GAN 1, GAN 2, and GAN 3, to generate samples at distinct resolutions. We detail each step of the generation process below.

Step 1: Low-scale generation (latent code to 32³)

The low-scale generation of a sample of size 32³ is performed by the first WGAN, GAN 1. The architectures of both the generator and the discriminator are composed of 3D convolutional layers, and we use leaky ReLU non-linearities. Further details can be found in Table 1 of the appendix.

Figure 5 shows the middle slice of 3D samples drawn from the generator of GAN 1, compared to real samples. In Figure 6, we additionally plot our evaluation statistics for 30 samples, corresponding to the total number of N-body simulations used to build the dataset. The generated samples drawn from GAN 1 are generally similar to the true data, both from a visual and from a statistical point of view, although one can observe slight disagreements at higher frequencies. Note that the low number of samples and the size of each of them do not allow us to compute very accurate statistics, which limits our evaluation.

Step 2: Up-scale (32³ to 64³)

Upscaling the sample size from 32³ to 64³ is performed by the WGAN GAN 2. The architecture is similar to that of GAN 1 (see Table 2 in the appendix), except that the border patches are first encoded using three 3D convolutional layers and then concatenated with the latent variables before being fed to the generator.

In order to visualize the quality of the up-scaling achieved by this first up-sampling step independently from the rest of the pipeline, we first down-scale the real samples to 32³ and provide them as input to GAN 2. We then observe the result of the up-scaling to 64³. Figure 7 shows slices from some generated samples, as well as the real down-scaled image and the real image. We observe a clear resemblance between the up-scaled fake samples and the real samples. The statistics for this step of the generation process are shown in Figure 8. We observe more significant discrepancies for features that occur rarely in the training set, such as large peaks. This is, however, not surprising, as learning from few examples is in general a difficult problem.

Step 3: Up-scale (64³ to 256³)

The final upscaling from 64³ to 256³ is performed by the WGAN GAN 3. The architectures of both the generator and the discriminator of GAN 3 are composed of eight 3D convolutional layers with inception modules [55]. The inception modules have filters of three different sizes: the input tensor is convolved with all three types of filters, using padding to keep the output shape the same as the input shape, and the three outputs are then summed to recover the desired number of output channels.
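As an illustration, such an inception-style 3D convolution block could be written as follows; the three kernel sizes shown are placeholders rather than the exact values used in our networks.

```python
import tensorflow as tf
from tensorflow.keras import layers

def inception_conv3d(x, channels, strides=1):
    """Parallel 3D convolutions with three kernel sizes, summed into one output.
    'same' padding keeps the spatial shape; the kernel sizes (1, 3, 5) are placeholders."""
    branches = [
        layers.Conv3D(channels, kernel_size=k, strides=strides, padding="same")(x)
        for k in (1, 3, 5)
    ]
    return layers.LeakyReLU()(layers.Add()(branches))
```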

To visualize the quality of the up-scaling achieved by this final up-sampling step, we down-scale the real samples to 64³ and then provide them as inputs to GAN 3. Figure 9 shows the middle slices of real and fake samples. Although the up-sampling factor is large, GAN 3 is able to produce convincing samples even in terms of high-frequency components.

Figure 11: Middle slices from real and generated samples. The GAN-generated samples are produced using the full multi-scale pipeline. Videos:
- 32-scale: https://youtu.be/uLwrF73wX2w
- 64-scale: https://youtu.be/xI2cUuk3DRc
- 256-scale: https://youtu.be/nWXP6DVEalA
Figure 12: Summary statistics of real and GAN-generated images using the full multi-scale pipeline. The power spectrum density is shown in units of h and Mpc, where h = H_0/100 corresponds to the dimensionless Hubble parameter.
Figure 13: Up-scaling patches in 3 steps, using 3 different WGANs.

4.2 Full Multi-scale Pipeline

Sample generation

The generation process used to produce new samples proceeds as follows. First, a latent variable is randomly drawn from the prior distribution and fed to GAN 1, which produces a low-resolution cube. The latter is then upsampled by GAN 2. At first, all border patches shown in Figure 3 are set to zero; the cube is then built recursively, where at each step the previously generated patches are re-fed as borders into the generator. The generation of the full 256³ cube is done in a similar fashion by GAN 3. Note that this last step requires the generation of many smaller cubes. An illustration, adapted to the 2D case for simplicity, is shown in Figure 13. The full generation process takes on the order of seconds to produce one sample of size 256³ using a single GPU node with 24 cores, compared to approximately 6-8 hours for the fast and approximate L-PICOLA [26] simulator running on two identical nodes.

Quantitative results

Figure 11 shows a few slices from a 3D fake image generated using the full pipeline, alongside a random real image. Figure 12 shows the summary statistics of 500 GAN-generated samples, compared to those of real samples. The visual agreement between the real and generated samples is good, although a trained human eye can still distinguish between real and fake samples. In particular, a careful visualization reveals that the transitions between the different patches are still imperfect and that the generated samples have fewer long-range filaments than the true samples. The summary statistics agree very well for the middle range of mass density, with slight disagreements at the extremes. The shape of the power spectrum density (PSD) matches well, but the overall amplitude is too high for most of the range. Naturally, we would expect the error of the multi-scale pipeline to result from the accumulation of errors over the three upsampling steps. In practice, we observe many similarities between the statistics shown in Figure 10 (from the last upscaling step) and in Figure 12 (from the full pipeline). Finally, we report the scores obtained for the 30 cubes of size 256³ as generated by the full GAN sequence. As explained in Section 2.4, the reference score gives an indication of the variability within the training set, i.e., how similar two independent sample sets from the training set are. The reference mass histogram score is much higher than the PSD and peak histogram scores, because this statistic has in general much less variance. For that reason, it is probably easier to estimate, as indicated by the score of our pipeline. Cosmological analyses typically require a precision of better than a few percent on these summary statistics, which is achieved by the GAN method only for specific scales, peak values, and density values.

The scores of the multi-scale approach are much higher than those of the simpler single-scale approach described in the following section.

Figure 14: Middle slices from real and generated samples. The GAN-generated samples are produced using the full uni-scale pipeline. Video: https://youtu.be/fxZEQHEGunA
Figure 15: Summary statistics of real and GAN-generated images using the full uni-scale pipeline. The power spectrum density is shown in units of h and Mpc, where h = H_0/100 corresponds to the dimensionless Hubble parameter.

4.3 Comparison to the single scale model

In order to evaluate the effectiveness of the multi-scale approach, we compare our model to a uni-scale variant that is not conditioned on the down-sampled image. Here, each patch is generated using only its neighboring border patches and a latent variable drawn from the prior distribution. Put simply, it is a direct 3D implementation of the principle displayed in Figure 3 (left), where the discriminator is not conditioned on the down-sampled image. An important issue with the uni-scale model is that the discriminator never sees more than a few neighboring patches at once; hence, it is likely to fail to capture long-range correlations. In practice, we observed that the training process is unstable and subject to mode collapse. At generation time, the recursive structure of the model often leads to repeating patterns. The resulting samples were of poor quality, as shown in Figures 14 and 15, and this is reflected in the corresponding scores. This experiment demonstrates that conditioning on data at lower scales plays an important role in generating samples of good quality.

5 Conclusion

In this work, we introduced a new benchmark for the generation of 3D N-body simulations using deep generative models. The dataset is made publicly available and contains matter density distributions represented as cubes of 256³ voxels. While the performance of a generative model can be assessed by visual inspection of the generated samples, as commonly done on datasets of natural images, we also offer a more principled alternative based on a number of summary statistics that are commonly used in cosmology. Our investigation into this problem has revealed that several factors make this task challenging, including: (i) the sheer volume of each data sample, which is not straightforwardly tractable using conventional GAN architectures, (ii) the large dynamic range of the data, which spans several orders of magnitude and requires a custom-designed transformation of the voxel values, and (iii) the high accuracy required for the model to be practically usable in a real cosmological study. Adding to the difficulties of (i) and (ii), this also requires accurately capturing features that are rare in the training set.

As a first baseline result for the newly introduced benchmark, we proposed a new method to train a deep generative model on 3D images. We split the generation process into the generation of smaller patches, conditioned on neighboring patches. We also apply a multi-scale approach, learning multiple WGANs at different image resolutions, each capturing salient features at a different scale. This approach is inspired by the Laplacian pyramid GAN [12] and by PixelCNN [58], both of which were developed for 2D data.

We find that the proposed baseline produces N-body cubes of good visual quality compared to the training data, but significant differences can still be perceived. Overall, the summary statistics of real and generated data match well, but notable differences are present for high voxel values in the mass and peak histograms. The power spectrum has the expected shape, with an amplitude that is too high for most of the values. The overall level of agreement is promising, but cannot yet be considered sufficient for practical applications in cosmology. Further development will be needed to achieve this goal; in order to encourage it, we have made the dataset and the code publicly available.

In our current model, the discriminator only has access to a partial view of the final image. The dependencies at small scales that may exist between distant patches are therefore not captured by the discriminator. Extending this model to give the discriminator a more global view would be the next logical extension of this work. We have also observed empirically that the extreme right tail of the histogram is often not fully captured by the generator. Designing architectures that help the generative model handle the large dynamic range of the data could further improve performance. One could also draw further inspiration from the literature on generative models for video data, such as [59, 62, 49]. Given the observations made in our experiments, one might for instance expect that the two-stage approach suggested in [49] could address some of the problems seen with the right tail of the distribution.

Another interesting research direction would be to condition the generation process on cosmic time or cosmological parameters. One could for instance rely on a conditional GAN model such as [40].

Appendix A Input Data Transformation

The general practice in machine learning is to standardize the input data before feeding it to a model. This preprocessing step is important, as many neural network and optimization blocks are scale-dependent, resulting in most architectures working optimally only when the data is appropriately scaled. Problematically, because of the physical law of gravity, most of the universe is empty, while most of the matter is concentrated in a few small areas and filaments. The minimum voxel value of the dataset is zero and the maximum is very large, with most of the voxels concentrated close to zero; the distribution is significantly skewed towards smaller values and has an elongated tail towards larger ones. Even with standardization, it is difficult for a generative model to learn very sparse and skewed distributions. Hence we transform the data using a special function, in a slightly different way than in [47].

In order to preserve the sensitivity to the smaller values, a logarithm-based transformation function is a good candidate. Nevertheless, to maintain the sensitivity to the large values, a linear function is preferable. To combine the best of the two regimes, we design a function that is logarithmic for lower values and linear for larger ones, i.e., above a specified cutoff value. The forward transformation function is defined as:

(3)

where

(4)

As a result, the backward transformation function reads

(5)

where

(6)

Here, the cutoff and the scale are selected hyper-parameters, which we tuned empirically for our experiments. After the forward transformation, the distribution of the data becomes similar to a one-sided Laplace distribution. We always invert this transformation once new data is generated, before calculating the summary statistics.
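The exact forward and backward functions are given by equations (3)-(6). As a rough illustration of the idea only, the sketch below splices a logarithm below a cutoff onto a linear branch above it, matching value and slope at the cutoff; the functional form and the default cutoff are placeholders, not the ones used in our experiments.

```python
import numpy as np

def forward(x, c=20.0):
    """Logarithmic below the cutoff `c`, linear above, matched in value and slope at `c`."""
    x = np.asarray(x, dtype=float)
    slope = 1.0 / (1.0 + c)                          # derivative of log(1 + x) at x = c
    low = np.log1p(np.minimum(x, c))
    high = np.log1p(c) + slope * np.maximum(x - c, 0.0)
    return np.where(x <= c, low, high)

def backward(y, c=20.0):
    """Inverse of `forward`, applied before computing the summary statistics."""
    y = np.asarray(y, dtype=float)
    slope = 1.0 / (1.0 + c)
    y_c = np.log1p(c)
    return np.where(y <= y_c,
                    np.expm1(np.minimum(y, y_c)),
                    c + (np.maximum(y, y_c) - y_c) / slope)

# Round-trip check over a wide dynamic range of voxel counts.
x = np.array([0.0, 1.0, 10.0, 100.0, 1e4])
assert np.allclose(backward(forward(x)), x)
```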

Appendix B N-body Training Set Augmentation

As N-body simulations are very expensive, we need to make sure to use all the available information through an effective dataset augmentation. To augment the training set, the cubes are randomly rotated by multiples of 90 degrees and randomly translated along the 3 axes. The cosmological principle states that the spatial distribution of matter in the universe is homogeneous and isotropic when viewed on a large enough scale. As a consequence, there should be no observable irregularities in the large-scale structure over the course of the evolution of the matter field that was initially laid down by the Big Bang [14]. Hence, the rotational and translational augmentations do not alter the data distribution that we are trying to model in any way. Moreover, we note that we use circular translations in our augmentation scheme. This is possible because N-body simulations are created using periodic boundary conditions: a particle exiting the box on one side immediately enters it on the opposite side, and forces follow the same principle. This prevents the particles from collapsing towards the middle of the box under gravity. These augmentations are important given that we only have 30 N-body cubes in our training set.
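A minimal sketch of such an augmentation on a voxel cube, using circular shifts, 90-degree rotations, and flips, is given below; the particular sampling choices are illustrative.

```python
import numpy as np

def augment(cube, rng=None):
    """Random circular translations, 90-degree rotations, and flips of a 3D cube.

    Valid because the simulations use periodic boundary conditions and the matter
    field is statistically homogeneous and isotropic.
    """
    rng = rng or np.random.default_rng()
    # Random circular shift along each of the 3 axes.
    shifts = rng.integers(0, cube.shape[0], size=3)
    cube = np.roll(cube, shifts, axis=(0, 1, 2))
    # Random rotation by a multiple of 90 degrees in a randomly chosen plane.
    axes = tuple(rng.choice(3, size=2, replace=False))
    cube = np.rot90(cube, k=int(rng.integers(0, 4)), axes=axes)
    # Random flip along a random axis.
    if rng.integers(0, 2):
        cube = np.flip(cube, axis=int(rng.integers(0, 3)))
    return cube
```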

Appendix C Architecture & implementation details

Implementation details

We used Python and TensorFlow to implement the models, which are trained on GPUs with 16 GB of memory. All the GANs are WGANs with a Gaussian prior distribution. Using a single GPU, it takes on the order of seconds to produce one sample of size 256³, compared to approximately 30 hours for a precise N-body simulator, such as PkdGrav3 [44], running on two nodes with 24 cores and a GPU. In this project, the fast and approximate L-PICOLA [26] simulator was used, with approximately 6 hours of runtime on two nodes.

The batch size is set to 8 for all the experiments. All Wasserstein GANs were trained using a gradient penalty loss, as described in [22]. We use RMSProp with the same learning rate for both the generator and the discriminator, and the discriminator is updated several times per generator update.

Network architectures

The neural networks used in our experiments are variants of deep convolutional networks with inception modules and/or residual connections.

All weights were initialized using the Xavier initializer, except the biases, which were initialized to zero. We used leaky ReLU and spectral normalization [41] to stabilize the networks. The architectures are detailed in Tables 1, 2 and 3.

Handling the input borders is an architectural challenge in itself. We used two different solutions to overcome this issue, one for each up-scaling GAN.

Generator of GAN 2.

The generator of GAN 2 possesses a convolutional encoder for the borders. Once the borders are encoded, we concatenate them with the latent variable. The downsampled image is simply concatenated at the first convolutional layer (see Table 2).

Generator of GAN 3.

The generator of GAN 3 does not possess an encoder, but utilizes the borders directly as extra channels. As a result, the generator convolutions all have a stride of 1. The downsampled image is first upsampled using a simple transposed convolution with a constant kernel and then concatenated as an input. The latent variable is kept small in order to avoid a memory-consuming linear layer. Eventually, as the convolution is shift-invariant, we perform two transformations on the input borders before feeding them to the generator; in particular, we flip them to obtain a correct alignment with the produced corner. Furthermore, to improve the capacity of the network without increasing the number of parameters and channels too much, we use an inception-inspired module, composed of 3 convolutions with different kernel sizes applied in parallel, followed by a merging convolution. Finally, to further help the discriminator, we also feed it a PSD estimate at the beginning of its linear layers (see Table 3).

Training stabilization using a regularizer.

While it has been shown that the gradient penalty loss of the Wasserstein GAN helps to stabilize the training process [22], this term does not prevent the discriminator from saturating. For example, when the discriminator has a large final bias, its output will be very large for both real and fake samples, yet its loss may remain controlled, as the output on real samples is subtracted from the output on fake samples. In practice, we noticed that when this behavior occurred, the learning process of the generator was hindered and the produced samples were of worse quality. In order to circumvent this issue, we added a second regularization term:

(7)

Our idea was that the regularization should kick in only to prevent this undesirable effect and should not affect the rest of the training. If the discriminator is doing a good job, then its mean output on real samples should be positive and its mean output on fake samples negative, nullifying the regularization. On the contrary, if both of these terms have the same sign, the output is penalized quadratically, forcing it to remain close to zero. While the effect of this second regularization term is still unclear to us, it did help to stabilize our optimization procedure for the multi-scale approach.

As we release our code and entire pipeline, we encourage the reader to check it for additional details.

Operation Parameter size Output Shape
Generator
Input
Dense
Reshape
TrConv 3D (Stride 2)
LReLU
TrConv 3D (Stride 2)
LReLU
TrConv 3D (Stride 2)
LReLU
TrConv 3D (Stride 1)
LReLU
TrConv 3D (Stride 1)
Discriminator
Input generated image
Conv 3D (Stride 2)
LReLU
Conv 3D (Stride 2)
LReLU
Conv 3D (Stride 1)
LReLU
Conv 3D (Stride 1)
LReLU
Conv 3D (Stride 1)
LReLU
Reshape
Dense
Table 1: Detailed architecture of the low-resolution GAN 1.
Operation Parameter size Output Shape
Generator
Input borders
Conv 3D (Stride 2)
Conv 3D (Stride 2)
Conv 3D (Stride 2)
Reshape
Input
Concatenation
Dense
Reshape
Input downsampled corner
Input
Concatenation
TrConv 3D (Stride 1)
LReLU
TrConv 3D (Stride 1)
LReLU
TrConv 3D (Stride 2)
LReLU
TrConv 3D (Stride 1)
LReLU
TrConv 3D (Stride 1)
Discriminator
Input generated image
Input borders
Reshape to a cube
Input smooth image
Concatenation (+ diff)
Conv 3D (Stride 1)
LReLU
Conv 3D (Stride 2)
LReLU
Conv 3D (Stride 2)
LReLU
Conv 3D (Stride 1)
LReLU
Conv 3D (Stride 2)
LReLU
Reshape
Dense
Table 2: Detailed architecture of the up-scaling GAN 2.
Operation Parameter size Output Shape
Generator
Input -
Input smooth image -
Input borders -
Concatenation -
InConv 3D (Stride 1) *
LReLU
InConv 3D (Stride 1) *
LReLU
InConv 3D (Stride 1) *
LReLU
InConv 3D (Stride 1) *
LReLU
InConv 3D (Stride 1) *
LReLU
InConv 3D (Stride 1) *
LReLU
InConv 3D (Stride 1) *
LReLU
InConv 3D (Stride 1) *
ReLU -
Discriminator
Input generated image -
Input borders -
Reshape to a cube -
Input smooth image
Concatenation (+ diff)
InConv 3D (Stride 2) *
LReLU
InConv 3D (Stride 1) *
LReLU
InConv 3D (Stride 1) *
LReLU
InConv 3D (Stride 1) *
LReLU
InConv 3D (Stride 1) *
LReLU
InConv 3D (Stride 1) *
LReLU
InConv 3D (Stride 2) *
LReLU
InConv 3D (Stride 2) *
LReLU
Reshape -
Compute PSD -
Concatenate -
Dense
LReLU
Dense
LReLU
Dense
Table 3: Detailed architecture of the up-scaling GAN 3. The parameter shapes of the inception convolutions, written InConv, are too large to be written in the table.

Abbreviations

  • GAN: Generative adversarial networks

  • WGAN: Wasserstein Generative adversarial networks

  • DCNN: Deep convolutional neural networks

  • LSST: Large Synoptic Survey Telescope

  • ΛCDM: Lambda Cold Dark Matter

  • GPU: Graphics Processing Unit

  • PSD: Power Spectral Density

  • FD: Fréchet Distance

  • KL: Kullback-Leibler

Declarations

Availability of data and material

The data and code to reproduce the experiment of this study are available at https://github.com/nperraud/3DcosmoGAN and https://zenodo.org/record/1464832.

Competing interests

The authors declare that they have no competing interests.

Funding

This work was supported by the Swiss Data Science Centre (SDSC), project sd01 - DLOC: Deep Learning for Observational Cosmology, and by grants 200021_169130 and PZ00P2_161363 from the Swiss National Science Foundation. The funding bodies had no involvement in the design of the study, the collection, analysis, and interpretation of data, or the writing of the manuscript.

Authors' contributions

NP performed the experiment design and the full analysis. NP and AS contributed to the implementation of the algorithms. An initial implementation of the algorithm was presented in the Master's thesis by AS titled “Scalable Generative Models For Cosmology”, supervised by NP, AL, and TK, with advice from TH and AR. NP created the challenge. TK and AL initiated the study as the Principal Investigators of the DLOC program at the SDSC, provided the resources used in the analysis, performed the experiment design, and provided direct guidance and supervision. JF and RS prepared the N-body simulation dataset. AR and TH contributed to the development of the ideas and the proposal. All authors read and approved the final manuscript.

Acknowledgments

We thank Adam Amara for initial discussions on the project idea. We thank Janis Fluri and Raphael Sgier for help with generating the data. We acknowledge the support of the IT service of the Piz Daint computing cluster at the Swiss National Supercomputing Center (CSCS), as well as the Leonhard and Euler clusters at ETH Zurich.

Footnotes

  1. www.darkenergysurvey.org
  2. www.euclid-ec.org
  3. www.lsst.org
  4. For cosmological scales, distances are measured in megaparsecs (Mpc).
  5. Throughout, we refer to these cubes by their side length in voxels (32, 64, 256) for conciseness.

References

  1. T.M.C. Abbott, F.B. Abdalla, A. Alarcon, J. Aleksić, S. Allam, S. Allen, A. Amara, J. Annis, J. Asorey and S. Avila (2018) Dark Energy Survey year 1 results: Cosmological constraints from galaxy clustering and weak lensing. Physical Review D 98 (4), pp. 043526. Cited by: §2.1.
  2. P. Achlioptas, O. Diamanti, I. Mitliagkas and L. Guibas (2018) Learning Representations and Generative Models for 3D Point Clouds. Cited by: §1.1.
  3. M. Arjovsky, S. Chintala and L. Bottou (2017) Wasserstein generative adversarial networks. In International conference on machine learning, pp. 214–223. Cited by: §3.1.
  4. A. Barreira, D. Nelson, A. Pillepich, V. Springel, F. Schmidt, R. Pakmor, L. Hernquist and M. Vogelsberger (2019-07) Separate Universe Simulations with IllustrisTNG: baryonic effects on power spectrum responses and higher-order statistics. \mnras, pp. 1784. External Links: Document, 1904.02070 Cited by: §2.1.
  5. J. R. Bond and D. Pogosyan (1996) How filaments of galaxies are woven into the cosmic web. Cited by: §1, §2.1.
  6. A. Borji (2019) Pros and cons of gan evaluation measures. Computer Vision and Image Understanding 179, pp. 41–65. Cited by: §2.3.
  7. M. Boylan-Kolchin, V. Springel, S. D. M. White, A. Jenkins and G. Lemson (2009) Resolving cosmic structure formation with the Millennium-II Simulation. Cited by: §2.1.
  8. A. Brock, J. Donahue and K. Simonyan (2019) Large scale GAN training for high fidelity natural image synthesis. Cited by: §1.1.
  9. M. T. Busha, R. H. Wechsler, M. R. Becker, B. Erickson and A. E. Evrard (2013) Catalog Production for the DES Blind Cosmology Challenge. Cited by: §2.1.
  10. C. Chang, A. Pujol, B. Mawdsley, D. Bacon, J. Elvin-Poole, P. Melchior, A. Kovács, B. Jain, B. Leistedt and T. Giannantonio (2018-04) Dark Energy Survey Year 1 results: curved-sky weak lensing mass map. \mnras 475 (3), pp. 3165–3190. External Links: Document, 1708.01535 Cited by: §1.1.
  11. P. Coles and L. Y. Chiang (2000) Characterizing the nonlinear growth of large-scale structure in the Universe. Cited by: §2.1.
  12. E. Denton, S. Chintala, A. Szlam and R. Fergus (2015) Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks. Cited by: §1.1, §3.2, §5.
  13. J. P. Dietrich, N. Werner, D. Clowe, A. Finoguenov, T. Kitching, L. Miller and A. Simionescu (2012) A filament of dark matter between two clusters of galaxies. Cited by: §1, §2.1.
  14. S. Dodelson (2003) Modern cosmology. Cited by: Appendix B.
  15. D. Dowson and B. Landau (1982) The Fréchet distance between multivariate normal distributions. Journal of multivariate analysis 12 (3), pp. 450–455. Cited by: §2.3.
  16. H. Fan, H. Su and L. J. Guibas (2017) A point set generation network for 3D object reconstruction from a single image. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 605–613. Cited by: §1.1.
  17. J. E. Forero-Romero, Y. Hoffman, S. Gottlöber, A. Klypin and G. Yepes (2009) A dynamical classification of the cosmic web. Cited by: §2.1.
  18. P. Fosalba, E. Gaztañaga, F. J. Castander and M. Crocce (2015) The MICE Grand Challenge light-cone simulation - III. Galaxy lensing mocks from all-sky lensing maps. Cited by: §2.1.
  19. M. Fréchet (1957) Sur la distance de deux lois de probabilité. Comptes rendus hebdomadaires des séances de l’académie des sciences 244 (6), pp. 689–692. Cited by: §2.3.
  20. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville and Y. Bengio (2014) Generative adversarial nets. Cited by: §1.1, §1.
  21. P. Grnarova, K. Y. Levy, A. Lucchi, N. Perraudin, I. Goodfellow, T. Hofmann and A. Krause (2019) A domain agnostic measure for monitoring and evaluating GANs. In Advances in Neural Information Processing Systems, pp. 12069–12079. Cited by: §2.3.
  22. I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin and A. C. Courville (2017) Improved training of Wasserstein GANs. In Advances in Neural Information Processing Systems, pp. 5767–5777. Cited by: Appendix C, Appendix C, §3.1.
  23. S. He, Y. Li, Y. Feng, S. Ho, S. Ravanbakhsh, W. Chen and B. Póczos (2018) Learning to Predict the Cosmological Structure Formation. arXiv e-prints, arXiv:1811.06533. Cited by: §1.1.
  24. M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler and S. Hochreiter (2017) GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, pp. 6626–6637. Cited by: §2.3.
  25. H. Hildebrandt, M. Viola, C. Heymans, S. Joudaki, K. Kuijken, C. Blake, T. Erben, B. Joachimi, D. Klaes, L. Miller, C.B. Morrison, R. Nakajima, G. V. Kleijn, A. Amon, A. Choi, G. Covone, J.T.A. de Jong, A. Dvornik, I. F. Conti, A. Grado, J. Harnois-Déraps, R. Herbonnet, H. Hoekstra, F. Köhlinger, J. McFarland, A. Mead, J. Merten, N. Napolitano, J.A. Peacock, M. Radovich, P. Schneider, P. Simon, E.A. Valentijn, J.L. van den Busch, E. van Uitert and L. V. Waerbeke (2017) KiDS-450: Cosmological parameter constraints from tomographic weak gravitational lensing. Cited by: §2.1.
  26. C. Howlett, M. Manera and W. J. Percival (2015) L-PICOLA: A parallel code for fast dark matter simulation. Cited by: Appendix C, §2.2, §4.2.
  27. H. Huang, T. Eifler, R. Mandelbaum and S. Dodelson (2019) Modelling baryonic physics in future weak lensing surveys. MNRAS 488 (2), pp. 1652–1678. arXiv:1809.01146. Cited by: §2.1.
  28. H. Ishikawa (2017) Globally and locally consistent image completion. Cited by: §1.1.
  29. S. Joudaki, A. Mead, C. Blake, A. Choi, J. de Jong, T. Erben, I. F. Conti, R. Herbonnet, C. Heymans, H. Hildebrandt, H. Hoekstra, B. Joachimi, D. Klaes, F. Köhlinger, K. Kuijken, J. McFarland, L. Miller, P. Schneider and M. Viola (2017) KiDS-450: testing extensions to the standard cosmological model. Cited by: §2.1.
  30. T. Kacprzak, D. Kirk, O. Friedrich, A. Amara, A. Refregier, L. Marian, J. Dietrich, E. Suchyta, J. Aleksić and D. Bacon (2016) Cosmology constraints from shear peak statistics in Dark Energy Survey Science Verification data. Monthly Notices of the Royal Astronomical Society 463 (4), pp. 3653–3673. Cited by: item 3.
  31. T. Karras, T. Aila, S. Laine and J. Lehtinen (2018) Progressive Growing of GANs for Improved Quality, Stability, and Variation. Cited by: §1.1, §1.
  32. D. P. Kingma and M. Welling (2014) Auto-Encoding Variational Bayes. Cited by: §1.1.
  33. D. Kodi Ramanah, T. Charnock and G. Lavaux (2019) Painting halos from 3D dark matter fields using Wasserstein mapping networks. arXiv e-prints, arXiv:1903.10524. Cited by: §1.1.
  34. M. Kuhlen, M. Vogelsberger and R. Angulo (2012) Numerical simulations of the dark universe: State of the art and the next decade. Physics of the Dark Universe 1 (1-2), pp. 50–93. arXiv:1209.5745. Cited by: §1.
  35. W. Lai, J. Huang, N. Ahuja and M. Yang (2017) Deep laplacian pyramid networks for fast and accurate super-resolution. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 624–632. Cited by: §1.1.
  36. C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz and Z. Wang (2017) Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4681–4690. Cited by: §1.1.
  37. M. Lučić, M. Tschannen, M. Ritter, X. Zhai, O. Bachem and S. Gelly (2019) High-fidelity image generation with fewer labels. pp. 4183–4192. Cited by: §1.1.
  38. N. Martinet, P. Schneider, H. Hildebrandt, H. Shan, M. Asgari, J. P. Dietrich, J. Harnois-Déraps, T. Erben, A. Grado and C. Heymans (2017) KiDS-450: cosmological constraints from weak-lensing peak statistics–II: Inference from shear peaks using N-body simulations. Monthly Notices of the Royal Astronomical Society 474 (1), pp. 712–730. Cited by: item 3.
  39. A. J. Mead, J. A. Peacock, C. Heymans, S. Joudaki and A. F. Heavens (2015) An accurate halo model for fitting non-linear cosmological power spectra and baryonic feedback models. MNRAS 454 (2), pp. 1958–1975. arXiv:1505.07833. Cited by: §2.1.
  40. M. Mirza and S. Osindero (2014) Conditional Generative Adversarial Nets. arXiv e-prints, arXiv:1411.1784. Cited by: §5.
  41. T. Miyato, T. Kataoka, M. Koyama and Y. Yoshida (2018) Spectral Normalization for Generative Adversarial Networks. Cited by: Appendix C.
  42. L. Mosser, O. Dubrule and M. J. Blunt (2017) Reconstruction of three-dimensional porous media using generative adversarial neural networks. Physical Review E 96 (4), pp. 043309. Cited by: §1.1.
  43. M. Mustafa, D. Bard, W. Bhimji, Z. Lukić, R. Al-Rfou and J. M. Kratochvil (2019) CosmoGAN: creating high-fidelity weak lensing convergence maps using Generative Adversarial Networks. Computational Astrophysics and Cosmology 6 (1), pp. 1. arXiv:1706.02390. Cited by: §1.1, §1, §1.
  44. D. Potter, J. Stadel and R. Teyssier (2017) PKDGRAV3: beyond trillion particle cosmological simulations for the next era of galaxy surveys. Cited by: Appendix C, §1, §2.1.
  45. J. Regier, J. McAuliffe and Prabhat (2015) A deep generative model for astronomical images of galaxies. Cited by: §1.1.
  46. D. M. Reiman and B. E. Göhre (2019) Deblending galaxy superpositions with branched generative adversarial networks. MNRAS 485 (2), pp. 2617–2627. arXiv:1810.10098. Cited by: §1.1.
  47. A. C. Rodríguez, T. Kacprzak, A. Lucchi, A. Amara, R. Sgier, J. Fluri, T. Hofmann and A. Réfrégier (2018) Fast cosmic web simulations with generative adversarial networks. Computational Astrophysics and Cosmology 5 (1), pp. 4. Cited by: Appendix A, §1.1, §1, §1, §2.3.
  48. K. Roth, A. Lucchi, S. Nowozin and T. Hofmann (2017) Stabilizing training of generative adversarial networks through regularization. In Advances in neural information processing systems, pp. 2018–2028. Cited by: §3.1.
  49. M. Saito, E. Matsumoto and S. Saito (2017) Temporal generative adversarial nets with singular value clipping. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2830–2839. Cited by: §5.
  50. K. Schawinski, C. Zhang, H. Zhang, L. Fowler and G. K. Santhanam (2017) Generative adversarial networks recover features in astrophysical images of galaxies beyond the deconvolution limit. Cited by: §1.1.
  51. J. Schmalzing, M. Kerscher and T. Buchert (1996) Minkowski Functionals in Cosmology. In Dark Matter in the Universe, S. Bonometto, J. R. Primack and A. Provenzale (Eds.), pp. 281. arXiv:astro-ph/9508154. Cited by: §1.1.
  52. A. Schneider, R. Teyssier, D. Potter, J. Stadel, J. Onions, D. S. Reed, R. E. Smith, V. Springel, F. R. Pearce and R. Scoccimarro (2016) Matter power spectrum and the challenge of percent accuracy. JCAP 2016 (4), pp. 047. arXiv:1503.05920. Cited by: §2.1.
  53. V. Springel, S. D. M. White, A. Jenkins, C. S. Frenk, N. Yoshida, L. Gao, J. Navarro, R. Thacker, D. Croton, J. Helly, J. A. Peacock, S. Cole, P. Thomas, H. Couchman, A. Evrard, J. Colberg and F. Pearce (2005) Simulations of the formation, evolution and clustering of galaxies and quasars. Nature 435, pp. 629–636. arXiv:astro-ph/0504097. Cited by: §1, §2.1.
  54. V. Springel (2005) The cosmological simulation code GADGET-2. Cited by: §2.1.
  55. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke and A. Rabinovich (2014) Going deeper with convolutions. Cited by: §4.1.
  56. R. Teyssier, S. Pires, S. Prunet, D. Aubert, C. Pichon, A. Amara, K. Benabed, S. Colombi, A. Refregier and J. L. Starck (2009) Full-sky weak-lensing simulation with 70 billion particles. Cited by: §2.1.
  57. T. Tröster, C. Ferguson, J. Harnois-Déraps and I. G. McCarthy (2019) Painting with baryons: augmenting N-body simulations with gas using deep generative models. MNRAS 487 (1), pp. L24–L29. arXiv:1903.12173. Cited by: §1.1.
  58. A. van den Oord, N. Kalchbrenner, O. Vinyals, L. Espeholt, A. Graves and K. Kavukcuoglu (2016) Conditional image generation with PixelCNN decoders. Cited by: §1.1, §3.3, §5.
  59. C. Vondrick, H. Pirsiavash and A. Torralba (2016) Generating videos with scene dynamics. In Advances In Neural Information Processing Systems, pp. 613–621. Cited by: §5.
  60. X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, Y. Qiao and C. Change Loy (2018) ESRGAN: Enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops. Cited by: §1.1.
  61. J. Wu, C. Zhang, T. Xue, B. Freeman and J. Tenenbaum (2016) Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. In Advances in neural information processing systems, pp. 82–90. Cited by: §1.1.
  62. W. Xiong, W. Luo, L. Ma, W. Liu and J. Luo (2018-06) Learning to generate time-lapse videos using multi-stage dynamic generative adversarial networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §5.
  63. J. Zhu, T. Park, P. Isola and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. Cited by: §1.1.