
Fakultät für Physik und Astronomie

Ruprecht-Karls-Universität Heidelberg


in the degree program Physics

submitted by

Adrian Vollmer

born in Ochsenfurt

2011

Forecasting Constraints on the Evolution of the Hubble Parameter and the Growth Function by Future Weak Lensing Surveys

This diploma thesis has been carried out by

Adrian Vollmer

at the

Institute for Theoretical Physics

under the supervision of

Prof. Luca Amendola

Zusammenfassung. The cosmological information hidden in the weak gravitational lensing signal can be analyzed by means of the power spectrum of the so-called convergence. We use the Fisher information formalism with the convergence power spectrum as the observable to estimate how well future weak lensing surveys can constrain the expansion rate and the growth function as functions of redshift, without assuming a specific model to parameterize these two quantities. To this end, we divide redshift space into bins and interpolate both functions linearly between the centers of the redshift bins as supporting points, computing the values there from a fiducial model. At the same time, we use these bins for power spectrum tomography, where we analyze not only the power spectrum in each bin but also their cross-correlations in order to maximize the extracted information.

We find that, at the given precision of the photometric redshift measurements, a small number of bins suffices to recover most of the information. Furthermore, it turns out that the forecast constraints are comparable to current constraints from observations of clusters in the X-ray regime. Weak lensing data alone might rule out some modified gravity theories only at a modest confidence level, but taking into account the prior from surveys of the cosmic microwave background radiation, this could improve considerably.

Abstract. The cosmological information encapsulated within a weak lensing signal can be accessed via the power spectrum of the so-called convergence. We use the Fisher information matrix formalism with the convergence power spectrum as the observable to predict how future weak lensing surveys will constrain the expansion rate and the growth function as functions of redshift without using any specific model to parameterize these two quantities. To do this, we divide redshift space into bins and linearly interpolate the functions with the centers of the redshift bins as sampling points, using a fiducial set of parameters. At the same time, we use these redshift bins for power spectrum tomography, where we analyze not only the power spectrum in each bin but also their cross-correlation in order to maximize the extracted information.

We find that a small number of bins is sufficient, at the given photometric redshift measurement precision, to access most of the information content, and that the projected constraints are comparable to current constraints from X-ray cluster growth data. The weak lensing data alone might be able to rule out some modified gravity theories only at a modest confidence level, but when including priors from surveys of the cosmic microwave background radiation this would improve considerably.

"Cosmologists are often in error, but never in doubt." (attributed to Lev Landau, before 1962)

"I am certain that it is time to retire Landau's quote." (Michael Turner, Physics Today 2001, Vol. 54, Issue 12, p. 10)

Chapter 1 Introduction

1.1 Motivation

It is a truth universally acknowledged that any enlightened human being in possession of a healthy curiosity must be in want of an understanding of the world around him. In the past few decades, two mysterious phenomena have emerged in cosmology that challenge our understanding of the Universe in the most provocative way: dark matter and dark energy. The "stuff" that we observe, that makes up the galaxies and stars, the protons and the neutrons, the planets and the moons, the plants and the animals and even ourselves, everything we ever knew and all of what we thought there ever was, makes up only a meager 4% of the cosmos. At first glance, this presents a huge problem for cosmology. But in the grand picture, it might be an opportunity for all of physics to make progress. Now, for the first time in history, cosmology is inspiring particle physics to hunt for a new particle via dark matter, bringing physics full circle. At the same time, dark energy is causing embarrassment in the quantum field theory department by deviating from the predicted value by 120 orders of magnitude. Closing all gaps in our knowledge about the Universe on large scales is therefore necessary in order to understand physics and our world as a whole.

Experiments are the physicist's way of unlocking the Universe's secrets, but they are severely limited in the field of cosmology. The studied object is unique and cannot be manipulated, only observed through large telescopes either on the ground or in space. There is no shortage of theories that could explain some of the issues troubling cosmologists these days, but they all look very similar, and it is hard to tell them apart by looking at the sky. Subtle clues need to be picked up to succeed, and for this we need better and better surveys. To shed light onto the dark Universe, several missions are being planned, designed, or are already in progress. In the design stage in particular, it is interesting to see what kind of results one can expect depending on the experiment's specifications.

Many dark energy models are degenerate in the sense that their predictions coincide or cannot be distinguished by any reasonable effort. The most popular ones predict critical values for some parameters, so unless we are provided with a method that allows for infinite precision in our measurements, we will never know, for instance, whether the Universe is flat or has a tiny but non-zero curvature, or whether dark energy is really a cosmological constant or is time-dependent in an unnoticeable way. We hope to break at least some of the degeneracies by investigating how structure in the matter distribution grew throughout the history of the Universe.

Information about how matter used to be distributed is naturally obtained by analyzing faraway objects, since the finite speed of light allows us to look as far into the past as the Universe has been transparent. But the nature of dark matter prevents us from getting a full picture of the matter distribution, because only 20% of all matter interacts electromagnetically. The only sensible way to detect all of matter is via its influence on the geometry of space-time itself. Distortions of background galaxies reveal perturbations in the fabric of the cosmos through which their light has passed, regardless of whether the intervening matter happens to be luminous or not. In particular, the signal coming from the so-called weak lensing effect is sensitive to the evolution of both dark energy and growth related parameters. To make the most out of this phenomenon, we "slice up" the Universe into shells in redshift space and study the correlation of these redshift bins, a technique which has been dubbed weak lensing tomography.1

1.2 Objective

By assuming the validity of Einstein's general relativity, all observations are biased towards the standard model, and the growth of structure might not have been measured correctly. Instead, we want to allow for the possibility of modified gravity. Our goal is to investigate how future weak lensing surveys like Euclid can constrain the expansion rate and the growth function without assuming a particular model parameterizing those two quantities. We first select a suitable number of redshift bins into which we divide a given galaxy distribution function. Starting from a fiducial model with a list of cosmological parameters that take on particular values determined by the Wilkinson Microwave Anisotropy Probe (WMAP)2, we then take the values of the expansion rate and the growth function in each bin as constants, treat them as additional cosmological parameters, and rebuild these two functions as linear interpolations between supporting points at the centers of the redshift bins.

Using these new functions, we calculate the weak lensing power spectrum in each bin as well as the cross-correlation spectra based on the non-linear matter power spectrum, which we take from fitting formulae found by other groups. The weak lensing power spectrum is then fed into the powerful Fisher matrix formalism, which allows us to estimate the uncertainty of all cosmological parameters, including the values of the expansion rate and the growth function at the chosen redshifts. With these forecast error bars, we can compare competing theories of modified gravity to our simulated results of next generation weak lensing surveys.
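The binning-and-interpolation scheme described above can be sketched in a few lines of numpy. The fiducial expansion rate, the bin centers, and all numerical values below are illustrative assumptions, not the thesis' actual survey configuration:

```python
import numpy as np

# Fiducial flat LCDM expansion rate E(z) = H(z)/H0 (illustrative parameter value).
Omega_m0 = 0.3
E_fid = lambda z: np.sqrt(Omega_m0 * (1 + z)**3 + 1 - Omega_m0)

# The bin centers act as supporting points; their E-values become free parameters.
z_centers = np.array([0.3, 0.9, 1.5, 2.1, 2.7])
E_params = E_fid(z_centers)           # fiducial values of the new parameters

def E_interp(z, E_vals):
    """Rebuild E(z) as a linear interpolation between the bin centers."""
    return np.interp(z, z_centers, E_vals)

# Perturbing one supporting point changes E(z) only locally around that bin,
# which is what lets the Fisher analysis constrain each bin separately.
E_shifted = E_params.copy()
E_shifted[2] *= 1.01                  # vary the parameter of the third bin
```

The same machinery applies verbatim to the growth function; only the fiducial function changes.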

1.3 Symbols and notation

We use units where the speed of light equals unity. Vectors are lower case and printed in bold, while matrices are upper case and printed in sans serif. The metric has signature $(-,+,+,+)$. Derivatives are sometimes written using the comma convention: $\phi_{,i} \equiv \partial\phi/\partial x_i$. The logarithm to base ten is denoted by $\log$, and the logarithm to base $e$ by $\ln$. The Fourier transform of a function is written with a tilde: $\tilde f$. Due to the one-to-one correspondence of the redshift and the scale factor, $a = 1/(1+z)$, we may denote the dependence on either one of those quantities interchangeably: a quantity given as a function of $a$ may equally well be written as a function of $z$. Sometimes, the value of a redshift dependent quantity as of today is given with a subscript zero. A list of commonly used symbols follows.

Scale factor
Complex ellipticity
Covariance matrix
Kronecker delta
Dirac delta function
Matter density contrast
Photometric redshift error
Dimensionless Hubble parameter,
Fisher matrix
Growth rate
Newton’s gravitational constant
Growth function
Growth index
Intrinsic galaxy ellipticity
Parameterized post-Newtonian parameter
Values of the logarithm of at redshift ,
Hubble constant
Dimensionless Hubble constant,
Jacobian matrix
Wave number
Likelihood, Lagrangian
Galaxy distribution function
Galaxy distribution function for the -th redshift bin
Average galaxy density in the -th redshift bin
Scalar perturbation spectral index
Angular galaxy density
Number of redshift bins
Reduced fractional density for component :
Fractional density for component as a function of the scale factor or redshift ( for matter, baryons, cold dark matter)
Fractional matter density today
Matter power spectrum
Convergence power spectrum
Probability of given
Scalar gravitational potentials
Comoving distance
Fluctuation amplitude at
Uncertainty of the quantity
Reionization optical depth
Cosmic time
Vector in parameter space; angular position
Equation-of-state ratio
Window function
Median redshift
Endpoint, center of the -th redshift bin

Chapter 2 Theoretical preliminaries

This chapter outlines the basic theory behind dark energy by giving a brief historical introduction. Furthermore, the basics of weak lensing and the Fisher information matrix formalism are explained, as well as the concept of power spectra, an immensely important tool not only in the remainder of this thesis but in all of modern cosmology. A much more detailed treatment of the topics presented here can be found in many excellent standard text books, such as Carroll (2003) for the derivation of cosmology from general relativity, Weinberg (2008) and Amendola and Tsujikawa (2010) (hereafter referred to as A&T) for an up-to-date reference on modern cosmology, dark energy, and useful statistical tools, Peebles (1980) also for the statistics, and Bartelmann and Schneider (1999) for a comprehensive treatment of weak gravitational lensing.

2.1 Dark energy: A historical summary

A short while after Albert Einstein derived his famous field equations, he proposed inserting a cosmological constant, originally in order to allow for a non-expanding Universe, because he felt that a world that was spatially closed and static was the only logical possibility (Einstein, 1917). This view was the general consensus at the time, considering that all visible stars seemed static and other galaxies had not been discovered yet. From the first and second Friedmann equations,

$$H^2 \equiv \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho - \frac{k}{a^2}, \tag{2.1}$$

$$\frac{\ddot a}{a} = -\frac{4\pi G}{3}\,(\rho + 3p), \tag{2.2}$$
we can easily see that for a static solution with $\dot a = \ddot a = 0$, we need a component in the Universe that bears a negative pressure,

$$p = -\frac{\rho}{3}, \tag{2.3}$$
with an overall positive spatial curvature that is finely tuned to

$$\frac{k}{a^2} = \frac{8\pi G}{3}\,\rho. \tag{2.4}$$
At this point, we mention that it is often useful to plug the time derivative of eq. (2.1) into eq. (2.2) to arrive at the continuity equation

$$\dot\rho + 3H\,(\rho + p) = 0. \tag{2.5}$$
It can be shown that for dust, i.e. non-relativistic baryonic or dark matter, the pressure vanishes, while for radiation or ultra-relativistic matter the pressure is one third of the energy density. This means that ordinary forms of energy cannot produce the desired result. Einstein recognized that the Lagrangian in the Hilbert action can be changed to

$$\mathcal{L} = \frac{1}{16\pi G}\,(R - 2\Lambda) \tag{2.6}$$

by introducing a constant $\Lambda$. We can interpret this term as a form of energy density $\rho_\Lambda = \Lambda/(8\pi G)$ that does not depend on the scale factor, which leads to $p_\Lambda = -\rho_\Lambda$ according to eq. (2.5). Thus $\rho$ in eqs. (2.1) and (2.2) can be replaced by $\rho + \rho_\Lambda$, and $p$ by $p - \rho_\Lambda$. Then a Universe in which $\rho = 2\rho_\Lambda$, or equivalently $\Lambda = 4\pi G\rho$, satisfies eq. (2.3).

However, Einstein's conservative view of the world did not stand up to the experimental facts. Only a short time later, after Edwin Hubble's revolutionary discovery of apparently receding galaxies in 1923, Einstein readily discarded the idea of a static Universe. Because an expanding Universe does not necessarily need a cosmological constant (see fig. 2.1), the introduction of the cosmological constant was considered a mistake for decades after that (Peebles and Ratra, 2003).

It was not until the early 1960s that the cosmological constant had to be revived, to explain a new measurement of the Hubble constant by Allan Sandage that was more accurate than the original one by Hubble by an order of magnitude. After a proper recalibration of the distance measure, he found a considerably smaller expansion rate, which made the inferred age of the Universe incompatible with the estimated age of the oldest known stars in the Milky Way (Sandage, 1961). Even though the numbers were slightly off according to our current knowledge, Sandage was on the right track and immediately suggested that a cosmological constant could easily resolve this issue. This was the first hint that the Universe was not only expanding, but expanding in an accelerated manner.

Figure 2.1: Solutions of the Friedmann equation demonstrating the evolution of the Universe. We have set the Hubble constant to unity for convenience, which implies that $t = 1$ corresponds to one Hubble time $1/H_0$. All curves pass through $a = 1$ (today). Top panel, from the top: the expansion history and future of our Universe, an open matter-only Universe, and the Einstein–de Sitter model. Bottom panel, some more exotic solutions: a pure cosmological constant, the Milne model, and a closed Universe.
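Curves like those in fig. 2.1 can be reproduced by integrating the first Friedmann equation numerically. Below is a minimal sketch (with $H_0 = 1$ and a simple midpoint integrator), checked against the Einstein–de Sitter case, where $a(t) = (3t/2)^{2/3}$ analytically; the step size and the starting value are illustrative choices:

```python
import numpy as np

def a_dot(a, Om, OL):
    """First Friedmann equation with H0 = 1: da/dt = sqrt(Om/a + Ok + OL*a^2)."""
    Ok = 1.0 - Om - OL                       # curvature fraction from the sum rule
    return np.sqrt(Om / a + Ok + OL * a**2)

def evolve(Om, OL, t_end, dt=1e-4, a0=1e-4):
    """Midpoint (RK2) integration of the scale factor from a tiny a0.

    a0 is small enough that the choice of initial time is negligible."""
    t, a = 0.0, a0
    while t < t_end:
        k1 = a_dot(a, Om, OL)
        k2 = a_dot(a + 0.5 * dt * k1, Om, OL)
        a += dt * k2
        t += dt
    return a

# Einstein-de Sitter (Om = 1): analytically a(t) = (3 t / 2)^(2/3), so a(2/3) = 1.
a_eds = evolve(1.0, 0.0, t_end=2.0 / 3.0)
```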

Several decades later, the highest authority in physics, the experiment, put an end to all speculations. In the late 1990s, Riess et al. (1998) observed ten supernovae of type Ia that they used as standard candles to infer their luminosity distance as a function of redshift, and found that they were on average 10%–15% farther away than one would expect in an open, mass dominated Universe. They were able to rule out a vanishing cosmological constant at high confidence, leading the way for precision cosmology. Almost simultaneously, an independent measurement by Perlmutter et al. (1998) confirmed their findings by analyzing 42 high-redshift supernovae of type Ia. Thus, the notion that the expansion of the Universe was accelerating was confirmed. Our failure to understand the nature of this accelerated expansion is reflected in the name for this phenomenon: dark energy. It is important to stress that the underlying physical cause of the acceleration does not have to be a new form of energy; it might as well be a new form of physics that mimics the effects of a cosmological constant, a point often misunderstood even by physicists unfamiliar with cosmology. Whether we are faced with a "physical darkness or dark physics", to say it in the words of Huterer and Linder (2007), is the whole crux of the matter. Evidence independent of the distance-luminosity relation of type Ia supernovae which supports the idea of acceleration has steadily increased since the findings of Riess and Perlmutter, for instance the late integrated Sachs-Wolfe effect (Scranton et al., 2003) or the cosmic microwave background radiation (Komatsu et al., 2010), and a dark energy component is part of today's standard model of the Universe. But does this mean that cosmology as a science is done?

On the contrary. Even though the so-called flat ΛCDM model (cold dark matter with a cosmological constant Λ) with only six free parameters3 agrees remarkably well with all observations, most physicists are still not satisfied with it, for a number of reasons. First of all, the value of the cosmological constant is incredibly small: when we express Λ in natural units, i.e. units in which the Planck length equals unity, we get a value of order $10^{-122}$. Even worse, when we interpret the cosmological constant simply as the vacuum energy predicted by quantum field theory, the value diverges. Because it is widely believed that new physics is needed at the Planck scale, where neither quantum effects nor gravity are negligible, namely a theory of quantum gravity (Carroll et al., 1992), one can renormalize the integral over all modes of a massive scalar field (e.g. a spinless boson) with zero point energy by introducing a cutoff at the Planck length. This generates a finite value, but one that is still 120 orders of magnitude too large, making it arguably the worst prediction in the history of physics. Finding a mechanism that would cause fields to cancel out each other's contributions to the vacuum energy density would be a challenge, but a mechanism that leaves a tiny, positive value seems completely out of reach for now. The most promising resolution might lie in supersymmetry, supergravity, or superstring theory. This is commonly referred to as the cosmological constant problem (Weinberg, 1989).

The second problem is the coincidence problem, which describes the curious fact that we live exactly in a comparatively short era in which matter surrenders its dominance to dark energy. This can be visualized if we consider the time dependence of the dimensionless dark energy density4:

$$\Omega_\Lambda(a) = \frac{\rho_\Lambda}{\rho_\Lambda + \rho_m(a)} = \left[1 + \frac{1 - \Omega_{\Lambda 0}}{\Omega_{\Lambda 0}}\left(\frac{a}{a_0}\right)^{-3}\right]^{-1},$$

because $\rho_m$ scales as $a^{-3}$ while $\rho_\Lambda$ stays constant, where $a_0$ is the scale factor today, usually normalized to unity. Plotting this with its derivative on a logarithmic scale (see fig. 2.2) shows the coincidence.

Figure 2.2: Visualization of the coincidence problem. Shown is the dimensionless dark energy density as a function of the scale factor, together with its derivative, on a logarithmic scale. Highlighted with dashed lines are the times of big bang nucleosynthesis (BBN), recombination, and today. Dark energy started to dominate right when structures emerged in the Universe.
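The quantity plotted in fig. 2.2 takes only a few lines to compute. This is a minimal sketch assuming a flat Universe with an illustrative $\Omega_{\Lambda 0} = 0.7$:

```python
import numpy as np

Omega_L0 = 0.7  # illustrative fiducial value

def Omega_L(a):
    """Dimensionless dark energy density: rho_L is constant, rho_m scales as a^-3."""
    Omega_m0 = 1.0 - Omega_L0              # flat Universe assumed
    return Omega_L0 / (Omega_L0 + Omega_m0 * a**-3)

# Omega_L stays negligible for almost the entire expansion history and then
# rises steeply to its present value within roughly one decade in a.
a = np.logspace(-9, 0, 500)
w = Omega_L(a)
```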

Until now, there is no satisfactory solution to these problems. String theorists argue that we might live in one of some $10^{500}$ realizations of the Universe, corresponding to the number of false vacua that are allowed by string theory (Douglas, 2003). This approach comes down to an anthropic argument, stating that it is no surprise that the cosmological constant has the value that it has, since any other value might be realized in a different Universe, but would not lead to the growth of structure required for sentient life that can ponder the value of natural constants (Susskind, 2005). This argument is controversial amongst physicists. As Paul Steinhardt, one of the leading cosmologists, put it in a Nature article (Brumfiel, 2007): "Anthropics and randomness don't explain anything."

Alternative theories include the postulation of quintessence, a new kind of energy in the form of a scalar field with tracker solutions mimicking a cosmological constant (Zlatev et al., 1999), or a modification of Einstein's theory of general relativity where, in the most popular case, the Ricci scalar $R$ is replaced as the Lagrangian by some function $f(R)$ (De Felice and Tsujikawa, 2010). Unfortunately, current observational constraints are insufficient when it comes to discriminating between competing theories.

2.2 Weak lensing

When light passes through a gravitational potential, its trajectory is bent in a way described by general relativity. This process, dubbed "lensing", can alter the shape, size, and brightness of the original image. We usually distinguish between strong, weak, and micro lensing. Micro lensing relies on small, faint objects acting as the lens, such as brown or white dwarfs, neutron stars, planets and so on, that transit a bright source and temporarily increase the source's brightness on time scales of several seconds to several years (Paczynski, 1996). The other two cases are static on those time scales, since more massive lenses like galaxies or clusters are involved, which means that the distances involved are several orders of magnitude larger. Strong lensing occurs when multiple images of the same galaxy behind a supermassive object can be observed, as seen in the bottom left corner of the left panel in fig. 2.3.

Figure 2.3: Left panel: Simulation of a gravitational lens with a case of strong lensing in the lower left corner at an average redshift of one. Right panel: Enlargement of the upper right section of the left panel, depicting lensing in the weak regime. The contours are ellipses derived from the quadrupole moment of each galaxy image. The two bars in the upper right corner show the average shear, where the lower one represents the expected shear and the upper one the actual shear. The deviation stems from shot noise due to the intrinsic ellipticity and random orientation of the galaxies. Taken from Mellier (1998).

While strong and micro lensing are relatively rare, weak lensing is an effect that can be observed over large areas of the sky (see the upper right corner of the right panel in fig. 2.3). Its power lies in statistical analysis, since weakly lensed objects are only slightly distorted and impossible to detect individually. Only the average distortion field reveals the cosmological information that we are looking for. As it turns out, weak lensing changes the ellipticity of galaxies (in a first order approximation), but the intrinsic ellipticity always dominates the shape. We need to gather large enough samples and then subtract the noise, which is relatively simple if the magnitude and direction of the intrinsic ellipticity are uncorrelated with the signal. Due to tidal effects (intrinsic alignment), this is unfortunately not entirely the case, and it is only one of the many challenges that weak lensing harbors. The most obvious one is illustrated in fig. 2.4. Others include photometric redshift errors, calibration errors, and uncertainties in power spectrum theory. A lot of these systematic errors can be accounted for by introducing several nuisance parameters (Bernstein, 2009). The trade-off is that a high number of nuisance parameters diminishes the merit of the Fisher matrix formalism, as degeneracies are to be expected, so we will only account for the photometric redshift error and the intrinsic ellipticity in this thesis.


Figure 2.4: A schematic illustration of the process of weak lensing detection, demonstrating its difficulty. a) The original source galaxy. b) Weak lensing distorts the shape and adds a shear to the image. c) The image is convolved with the telescope's point spread function and, in the case of ground based surveys, the atmosphere. d) The effect of the finite resolution of the detector. e) Additional noise is applied to the image. The challenge now is to infer b) from e). (Image of M31 by courtesy of Robert Gendler.)

An expression describing the shear can be obtained by perturbing the Friedmann-Lemaître-Robertson-Walker metric in the Newtonian gauge,

$$ds^2 = a^2(\tau)\left[-(1 + 2\Psi)\,d\tau^2 + (1 - 2\Phi)\,d\boldsymbol{x}^2\right],$$
where $\Psi$ and $\Phi$ are the scalar gravitational potentials. We can then solve the geodesic equation for a light ray,

$$\frac{d^2 x^\mu}{d\lambda^2} + \Gamma^\mu_{\alpha\beta}\,\frac{dx^\alpha}{d\lambda}\,\frac{dx^\beta}{d\lambda} = 0,$$
which can be rewritten in our case as

$$\frac{d^2\boldsymbol{x}_\perp}{d\chi^2} = -\boldsymbol{\nabla}_\perp\,(\Phi + \Psi).$$
Thus, we obtain the lensing equation

$$\boldsymbol{\theta}_S = \boldsymbol{\theta} - \boldsymbol{\nabla}_\theta\,\phi$$

with the lensing potential

$$\phi(\boldsymbol{\theta}, \chi) = \int_0^\chi d\chi'\;\frac{\chi - \chi'}{\chi\,\chi'}\,\left[\Phi(\chi'\boldsymbol{\theta}, \chi') + \Psi(\chi'\boldsymbol{\theta}, \chi')\right].$$

Here $\chi$ is the radial comoving coordinate, $\boldsymbol{\theta}$ is the angle of the light ray with respect to the $\chi$-axis, and $x_1$, $x_2$ are displacement coordinates perpendicular to the $\chi$-axis.



Figure 2.5: Sketch of the lensing configuration, with the lens plane and the source plane. Note that comoving distances cannot be added trivially.

The distortion of a source image at distance $\chi$ is, to first order, a linear transformation described by the symmetric matrix

$$A_{ij} = \delta_{ij} - \phi_{,ij} = \begin{pmatrix} 1 - \kappa - \gamma_1 & -\gamma_2 \\ -\gamma_2 & 1 - \kappa + \gamma_1 \end{pmatrix}.$$
Here we introduced the convergence

$$\kappa = \frac{1}{2}\,(\phi_{,11} + \phi_{,22}),$$
which is a measure for the magnification of the image, and the shear

$$\gamma \equiv \gamma_1 + i\gamma_2 = \frac{1}{2}\,(\phi_{,11} - \phi_{,22}) + i\,\phi_{,12},$$
which describes the distortion. These quantities will become important when we want to describe the cosmic shear by statistical means, in particular its power spectrum.

We can measure the ellipticity of real astronomical images by computing their quadrupole moment, which is defined as

$$q_{ij} = \frac{\int d^2\theta\; I(\boldsymbol{\theta})\,(\theta_i - \bar\theta_i)(\theta_j - \bar\theta_j)}{\int d^2\theta\; I(\boldsymbol{\theta})},$$

where $I(\boldsymbol{\theta})$ is the luminous intensity of the galaxy image with its center at $\bar{\boldsymbol{\theta}}$. The complex ellipticity is then given by

$$\epsilon = \frac{q_{11} - q_{22} + 2i\,q_{12}}{q_{11} + q_{22} + 2\sqrt{q_{11}\,q_{22} - q_{12}^2}}.$$
It is straightforward to show that, according to this definition, the complex ellipticity of an ellipse with semiaxes $a$ and $b$, rotated by an angle $\varphi$, is

$$\epsilon = \frac{a - b}{a + b}\,e^{2i\varphi}.$$
To first order, a weak lens distorts a spherical object into an ellipse, with a simple relation between the shear and the complex ellipticity, namely

$$\epsilon = \frac{\gamma}{1 - \kappa} \approx \gamma.$$
We can use this to compute the power spectra $P(\ell)$, which describe the auto-correlation of the shear field at the multipole $\ell$, the adjoint variable to the angular position $\boldsymbol{\theta}$. It can be shown that the convergence power spectrum is related to the matter power spectrum and can be written to first order as a linear combination of the shear power spectra; another linear combination must vanish. These are usually referred to as the electric and the magnetic part of the shear field. Thus, the convergence power spectrum contains all the information, while the magnetic part provides a good check for systematics.
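The quadrupole-moment measurement can be tried out on a synthetic image. The sketch below uses an illustrative elliptical Gaussian "galaxy" and one common ellipticity convention, $(a-b)/(a+b)\,e^{2i\varphi}$ (conventions vary in the literature); all numbers are illustrative assumptions:

```python
import numpy as np

# Synthetic galaxy image: elliptical Gaussian with semiaxes a = 2, b = 1 pixels,
# aligned with the x-axis; the grid is large enough to contain the light.
a_ax, b_ax = 2.0, 1.0
x, y = np.meshgrid(np.arange(-20, 21), np.arange(-20, 21), indexing="ij")
I = np.exp(-0.5 * (x**2 / a_ax**2 + y**2 / b_ax**2))

def quadrupole(I, x, y):
    """Intensity-weighted second moments about the image centroid."""
    norm = I.sum()
    xc, yc = (I * x).sum() / norm, (I * y).sum() / norm
    q11 = (I * (x - xc) ** 2).sum() / norm
    q22 = (I * (y - yc) ** 2).sum() / norm
    q12 = (I * (x - xc) * (y - yc)).sum() / norm
    return q11, q22, q12

q11, q22, q12 = quadrupole(I, x, y)

# Complex ellipticity in the (a-b)/(a+b) convention (an assumption here):
eps = (q11 - q22 + 2j * q12) / (q11 + q22 + 2 * np.sqrt(q11 * q22 - q12**2))
# For this axis-aligned ellipse, eps should be close to (2-1)/(2+1) = 1/3, real.
```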

But, as mentioned before, not all galaxies appear circular, even if their intrinsic shape were a perfectly circular disc and there were no gravitational lensing distorting our view of the sky, as they are randomly oriented and thus, viewed from the side, look like ellipses. This intrinsic ellipticity, denoted by $\gamma_{int}$, can be reflected in a noise term that is added to the power spectrum, if we assume that the noise is uncorrelated with the weak lensing signal5:

$$P^\kappa_{ij}(\ell) \;\rightarrow\; P^\kappa_{ij}(\ell) + \delta_{ij}\,\frac{\gamma_{int}^2}{\bar n_i},$$

where $\bar n_i$ is the average galaxy density in the $i$-th bin (A&T, ch. 14.4).
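The shot-noise correction is a one-liner once the per-bin galaxy density is expressed per steradian. The survey numbers below (intrinsic ellipticity rms, total galaxy density, bin count) are illustrative assumptions, not specifications quoted in the text:

```python
import numpy as np

# Illustrative survey numbers (assumptions):
gamma_int = 0.22            # rms intrinsic ellipticity
n_total = 30.0              # galaxies per arcmin^2 over the whole survey
N_bins = 5                  # equally populated redshift bins

arcmin2_per_sr = (180.0 * 60.0 / np.pi) ** 2   # arcmin^2 per steradian
n_bin = n_total / N_bins * arcmin2_per_sr      # galaxies per steradian per bin

def add_noise(P_signal):
    """Add the uncorrelated shot-noise term to an N_bins x N_bins spectrum matrix.

    The noise only affects the auto-correlations (Kronecker delta), not the
    cross-spectra between different bins."""
    return P_signal + np.eye(N_bins) * gamma_int**2 / n_bin

P_obs = add_noise(np.zeros((N_bins, N_bins)))  # placeholder signal at one multipole
```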

The shear field is a vector field of dimension two. To access the full power of weak lensing, we must also include the third dimension, which can be done by considering the redshift of each source galaxy. Weak lensing works linearly, so we can add up the transformation matrices of all galaxies and find for the full transformation matrix

$$\bar A_{ij}(\boldsymbol{\theta}) = \int_0^\infty d\chi\; n(\chi)\,A_{ij}(\boldsymbol{\theta}, \chi),$$

where $n(\chi)\,d\chi$ is the number of galaxies in a shell with width $d\chi$ and the galaxy distribution function $n(\chi)$ is normalized to unity. Making use of the fact that for any real, integrable functions $g$ and $h$ the identity

$$\int_0^\infty d\chi\; g(\chi)\int_0^\chi d\chi'\; h(\chi, \chi') = \int_0^\infty d\chi'\int_{\chi'}^\infty d\chi\; g(\chi)\,h(\chi, \chi')$$

holds, we can rewrite eq. (2.22) to get

$$\bar\phi(\boldsymbol{\theta}) = \int_0^\infty d\chi\;\frac{W(\chi)}{\chi}\,\left[\Phi(\chi\boldsymbol{\theta}, \chi) + \Psi(\chi\boldsymbol{\theta}, \chi)\right].$$
Here we abbreviated the weight function as

$$W(\chi) = \int_\chi^\infty d\chi'\; n(\chi')\,\frac{\chi' - \chi}{\chi'}.$$
Thus, the total convergence field from eq. (2.14) now reads

$$\kappa(\boldsymbol{\theta}) = \frac{1}{2}\,\nabla_\theta^2\,\bar\phi(\boldsymbol{\theta}) = \frac{1}{2}\int_0^\infty d\chi\;\frac{W(\chi)}{\chi}\,\nabla_\theta^2\left[\Phi + \Psi\right].$$
By grouping all observed galaxies into redshift bins, we are able to extract more information out of the given data by calculating the power spectrum in each redshift bin as well as their cross correlation. This is called power spectrum tomography (Hu, 1999). We shall cover this in more detail in section 3.1.
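Choosing tomographic bins such that each contains the same number of galaxies can be sketched by inverting the cumulative galaxy distribution. The distribution $n(z) \propto z^2 \exp[-(z/z_0)^{1.5}]$ and the median-redshift relation used below are common assumptions in forecast papers, not values taken from this chapter:

```python
import numpy as np

# Assumed galaxy distribution: n(z) ~ z^2 exp(-(z/z0)^1.5); for this shape the
# median redshift is roughly 1.412 * z0 (illustrative z_median = 0.9).
z0 = 0.9 / 1.412
z = np.linspace(0.0, 5.0, 5001)
n = z**2 * np.exp(-(z / z0) ** 1.5)

# Cumulative distribution, normalized to unity.
cdf = np.cumsum(n)
cdf /= cdf[-1]

# Bin edges chosen so that every redshift bin contains the same galaxy count:
# invert the CDF at equally spaced quantiles.
N_bins = 5
edges = np.interp(np.linspace(0.0, 1.0, N_bins + 1), cdf, z)
```

Equal-population binning keeps the shot noise comparable across bins, which is one common design choice for tomography.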

2.3 Power spectra

Almost every book and every lecture on modern cosmology begins with the cosmological principle: the assumption that the Universe is, simply put, homogeneous and isotropic. Now obviously the average density, say, below the University Square in Heidelberg is quite different from the average density above it, and it is still quite different at the midpoint between the Sun and Alpha Centauri; we say that the Universe has structure. But if we look at the Universe at sufficiently large scales of order $100\,\mathrm{Mpc}$, the overall success of the standard model confirms our initial assumption (Komatsu et al., 2010). While there are research groups who investigate the possibility of genuine large scale inhomogeneities (Barausse et al., 2005; Kolb et al., 2005), we remain confident that there is good reason to believe that the cosmological principle applies to the Universe as a whole.

The matter distribution in our Universe consists mostly of structures at different scales, which can be roughly classified as voids, superclusters, clusters, groups, galaxies, solar systems, stars, planets, moons, asteroids, and so on. This "lumpiness" is inherently random, but like all randomness it can be characterized by an underlying set of well-defined rules by using statistics. The matter distribution in our Universe is one sample drawn from an imaginary ensemble of Universes with an according probability distribution function (PDF), and we are now facing the challenge of determining that PDF.

We require our models to make precise predictions regarding the structure of matter. Naturally, no model will be able to predict the distance between the Milky Way and Andromeda, for instance, but it should be able to predict how likely it is for two objects of that size to have the separation that they have. To quantify and measure the statistics of the irregularities in the matter distribution in the cosmos, we define the two-point correlation function as the average of the relative excess number of pairs found at a given distance. That is, if $n(\boldsymbol{x})\,dV$ denotes the number of point-like objects (nucleons, or galaxies, or galaxy clusters) found in some volume element $dV$ at position $\boldsymbol{x}$, we can write the average number of pairs found in $dV_1$ and $dV_2$ as

$$\langle n(\boldsymbol{x}_1)\,n(\boldsymbol{x}_2)\rangle\,dV_1\,dV_2 = \bar n^2\,\left[1 + \xi(\boldsymbol{x}_1, \boldsymbol{x}_2)\right]dV_1\,dV_2,$$

where $\bar n$ is the mean number density. If the matter distribution were truly random, then any two volume elements would be uncorrelated, which means that $\xi$ would vanish and the average number of pairs would simply be the product of the average number of objects in each volume element. If $\xi$ is positive (negative), then we say those two volume elements are correlated (anti-correlated).

Assuming a statistically homogeneous Universe, $\xi$ can only depend on the difference vector $\boldsymbol{r} = \boldsymbol{x}_2 - \boldsymbol{x}_1$. Further assuming statistical isotropy allows $\xi$ to depend only on the distance $r$ between $\boldsymbol{x}_1$ and $\boldsymbol{x}_2$. Hence, we denote the two-point correlation function as $\xi(r)$. Solving eq. (2.27) leads to

$$\xi(r) = \frac{\langle n(\boldsymbol{x}_1)\,n(\boldsymbol{x}_2)\rangle}{\bar n^2} - 1,$$
which can be easily checked by plugging in the definition of the density contrast

$$\delta(\boldsymbol{x}) = \frac{\rho(\boldsymbol{x}) - \bar\rho}{\bar\rho}.$$
Thus, the correlation function is often written as the average over all possible positions,

$$\xi(r) = \langle\delta(\boldsymbol{x})\,\delta(\boldsymbol{x} + \boldsymbol{r})\rangle.$$
Another important tool that will later prove to be invaluable is the power spectrum, which is, in a cosmological context, the square of the Fourier transform of a perturbation variable (up to some normalization constant). For instance, the matter power spectrum is defined by

$$\langle\tilde\delta(\boldsymbol{k})\,\tilde\delta^*(\boldsymbol{k}')\rangle = (2\pi)^3\,\delta_D(\boldsymbol{k} - \boldsymbol{k}')\,P(k).$$
Rewriting the norm yields

$$\langle\tilde\delta(\boldsymbol{k})\,\tilde\delta^*(\boldsymbol{k}')\rangle = \int d^3x\,d^3x'\;\langle\delta(\boldsymbol{x})\,\delta(\boldsymbol{x}')\rangle\;e^{-i\boldsymbol{k}\cdot\boldsymbol{x} + i\boldsymbol{k}'\cdot\boldsymbol{x}'},$$

or, if we substitute $\boldsymbol{x}' = \boldsymbol{x} + \boldsymbol{r}$,

$$\langle\tilde\delta(\boldsymbol{k})\,\tilde\delta^*(\boldsymbol{k}')\rangle = (2\pi)^3\,\delta_D(\boldsymbol{k} - \boldsymbol{k}')\int d^3r\;\xi(r)\,e^{i\boldsymbol{k}\cdot\boldsymbol{r}},$$

where we used eq. (2.30) in the last step. Hence the power spectrum,

$$P(k) = \int d^3r\;\xi(r)\,e^{i\boldsymbol{k}\cdot\boldsymbol{r}},$$

is the Fourier transform of the two-point correlation function.6
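This Fourier-pair relation between the power spectrum and the two-point correlation function can be verified numerically on a periodic 1-D grid, where the discrete version of the identity holds exactly for any field:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 1-D "density contrast" field on a periodic grid.
delta = rng.standard_normal(256)
delta -= delta.mean()

# Power spectrum: squared modulus of the Fourier transform (up to normalization).
P = np.abs(np.fft.fft(delta)) ** 2 / delta.size

# Two-point correlation of the same field, with periodic boundary conditions.
xi = np.array([np.mean(delta * np.roll(delta, r)) for r in range(delta.size)])

# Wiener-Khinchin: the Fourier transform of xi reproduces the power spectrum.
P_from_xi = np.fft.fft(xi).real
```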

The power spectrum is among the cosmologist's favorite tools to describe our Universe in a meaningful way. It is often applied, for instance, to the matter distribution, the anisotropies of the cosmic microwave background radiation, or the shear field of weak lensing.

The convergence power spectrum describing the cosmic shear of galaxy images can be expressed in terms of the matter power spectrum. Since the matter distribution is 3-dimensional, but the convergence is a function of the 2-dimensional sky, we need Limber’s theorem to relate those two. It states that the power spectrum for a projection

$s(\boldsymbol{\theta}) = \int_0^\infty d\chi\, W(\chi)\, f(\chi\boldsymbol{\theta}, \chi),$
where $W(\chi)$ is a weight function (normalized to unity), turns out to be

$P_s(\ell) = \int_0^\infty \frac{d\chi}{\chi^2}\, W^2(\chi)\, P_f\!\left(\frac{\ell}{\chi}\right),$
where $P_f$ is the power spectrum of $f$. This theorem is directly applicable to the total convergence field in eq. (2.26), so that its power spectrum becomes




with $W$ being the window function. All we need to know now is the power spectrum of $\nabla^2 \Phi$, which is simply

$P_{\nabla^2\Phi}(k) = k^4\, P_{\Phi}(k),$
because the Fourier transform of $\nabla^2\Phi$ is $-k^2\tilde{\Phi}$. Then we can plug in the Poisson equation in Fourier space, which is

$-k^2\tilde{\Phi} = 4\pi G\, a^2\, \bar{\rho}_m\, \tilde{\delta} = \frac{3}{2}\, \Omega_{m,0} H_0^2\, \frac{\tilde{\delta}}{a}$
(in the absence of anisotropic stress, i.e. $\Phi = \Psi$) to obtain


Putting it all together, and replacing $k$ with the multipole $\ell$ via $k = \ell/\chi$, finally yields the power spectrum for the convergence,


More details can be found in A&T (ch. 4.11, 14.4) or Hu and Jain (2004).

The top panel of fig. 2.6 shows the matter power spectrum derived from linear perturbation theory as well as with non-linear corrections. The bottom panel shows the convergence power spectrum, derived from the linear and non-linear matter power spectrum according to eq. (2.42).

Figure 2.6: Top panel: Comparison between a matter power spectrum with (purple) and without (blue) non-linear corrections in arbitrary units. The fitting formulae were taken from Eisenstein and Hu (1999) and Smith et al. (2003). Non-linear corrections become important only at wave numbers . Note that is not in units of in this case. Bottom panel: Comparison between a convergence power spectrum from eq. (2.42) based on the linear and non-linear matter power spectrum (no redshift binning).

2.4 Fisher matrix formalism

The last tool we will need is the powerful theory of Bayesian statistics. In physics we define a theory by a set of parameters encapsulated in the vector $\boldsymbol{\theta}$. The ultimate goal is to deduce the value of $\boldsymbol{\theta}$ by performing a series of experiments that yield the data $\mathbf{x}$. If we were given the values of $\boldsymbol{\theta}$, we could calculate the probability density function (PDF) and thus predict the probability of getting a realization of the data given the theory, usually denoted by $P(\mathbf{x}\,|\,\boldsymbol{\theta})$. But since we are rarely given the exact theory, all we can do is draw samples from an unknown PDF by conducting experiments in order to estimate the parameters $\boldsymbol{\theta}$. In a frequentist approach, we would take as the true parameters those that maximize the so-called likelihood function

$L(\boldsymbol{\theta}) \equiv P(\mathbf{x}\,|\,\boldsymbol{\theta}),$
which is nothing but the joint PDF, with $\boldsymbol{\theta}$ now being interpreted as a variable and $\mathbf{x}$ as a fixed parameter. But in cosmology, we often have prior knowledge about the parameters from other observations or theories that we would like to account for. For this, we need Bayes’ theorem:

$P(\boldsymbol{\theta}\,|\,\mathbf{x}, I) = \frac{P(\mathbf{x}\,|\,\boldsymbol{\theta}, I)\, P(\boldsymbol{\theta}\,|\,I)}{P(\mathbf{x}\,|\,I)}. \qquad (2.44)$
In this Bayesian approach, we turn the situation around so that we are asking the question: What is the probability of the theory, given the observed data? Note that we included the background information in every term even though it is hardly relevant to the derivation. It is to remind ourselves that we always assume some sort of prior knowledge, even when we do not realize it. For instance, we might implicitly assume that the Copernican principle holds true, or that the Universe is described by the Einstein field equations.

Here the prior knowledge (or just prior) is given by $P(\boldsymbol{\theta}\,|\,I)$, and $P(\mathbf{x}\,|\,I)$ is the marginal probability of the data, which acts as a normalization factor, requiring that $\int P(\boldsymbol{\theta}\,|\,\mathbf{x}, I)\, d\boldsymbol{\theta} = 1$. $P(\mathbf{x}\,|\,\boldsymbol{\theta}, I)$ is identical to the likelihood, which is given by the model. This could be for instance a Gaussian PDF

$P(\mathbf{x}\,|\,\boldsymbol{\theta}, I) = \frac{1}{(2\pi)^{n/2}\sqrt{\det C}}\, \exp\!\left[-\frac{1}{2}\, (\mathbf{x} - \boldsymbol{\mu})^T C^{-1} (\mathbf{x} - \boldsymbol{\mu})\right],$
The left hand side of eq. (2.44) is then called the posterior (Hobson et al., 2009, p. 42). We now have all the ingredients to calculate the posterior probability of the parameters given the data. This quantity is important for computing the so-called evidence, which is defined by the integral over the numerator in eq. (2.44) and needed for model selection.
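A minimal numerical sketch of eq. (2.44) on a one-dimensional parameter grid may make this concrete; all numbers here are illustrative, not taken from the thesis.

```python
import numpy as np

# Illustrative one-parameter example: Gaussian likelihood for data x given theta,
# combined with a Gaussian prior on theta (all numbers are made up).
theta = np.linspace(-5, 5, 2001)
dtheta = theta[1] - theta[0]

x_obs, sigma = 1.0, 0.5                      # a single measurement and its error
likelihood = np.exp(-0.5 * ((x_obs - theta) / sigma) ** 2)
prior = np.exp(-0.5 * (theta / 2.0) ** 2)    # prior knowledge: theta ~ N(0, 2)

# Bayes' theorem: posterior = likelihood * prior / evidence
evidence = np.sum(likelihood * prior) * dtheta
posterior = likelihood * prior / evidence

print(np.isclose(np.sum(posterior) * dtheta, 1.0))  # True: properly normalized
print(theta[np.argmax(posterior)])  # posterior peak pulled toward the prior mean
```

The evidence is exactly the integral over the numerator of eq. (2.44), computed here as a Riemann sum; the posterior peak sits between the measurement and the prior mean, weighted by their precisions.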

Unfortunately, the likelihood is often not a simple Gaussian. Finding the values of $\boldsymbol{\theta}$ that maximize the PDF given by eq. (2.44) can be computationally demanding, since a (naïve) algorithm for finding the extremum scales exponentially with the number of dimensions of the parameter space. This becomes a problem when considering 30 parameters, as we will later in this thesis. It is not unusual for Bayesian statistics to suffer from the “curse of dimensionality” (Bellman, 2003). One way to overcome this curse is by using Markov chain Monte Carlo methods. Another is to assume that the likelihood, even if it is not a multivariate Gaussian, can at least be approximated by one. This is not an unreasonable assumption, since the logarithm of the likelihood can always be Taylor expanded up to second order around its maximum, denoted by $\boldsymbol{\theta}_{\mathrm{ML}}$:

$\ln L(\boldsymbol{\theta}) \approx \ln L(\boldsymbol{\theta}_{\mathrm{ML}}) + \frac{1}{2} \sum_{ij} (\theta_i - \theta_{\mathrm{ML},i}) \left. \frac{\partial^2 \ln L}{\partial \theta_i\, \partial \theta_j} \right|_{\boldsymbol{\theta}_{\mathrm{ML}}} (\theta_j - \theta_{\mathrm{ML},j}).$
No first order terms appear since they vanish at a maximum by definition. It is now quite useful to define the Fisher matrix as the negative of the Hessian of the log-likelihood,

$F_{ij} \equiv -\left\langle \frac{\partial^2 \ln L}{\partial \theta_i\, \partial \theta_j} \right\rangle_{\boldsymbol{\theta} = \boldsymbol{\theta}_{\mathrm{ML}}}. \qquad (2.47)$
This definition would not be very useful if we still had to find the maximum of a multi-dimensional function and then compute the derivatives numerically, since many expensive evaluations are what we wanted to avoid in the first place. However, since this thesis deals with constraints from future surveys, we can use a fiducial model for which we already know the expected outcome, and thus the peak of the likelihood. This is why the Fisher matrix is a perfect tool for assessing the merit of future cosmological surveys.

Let us consider a survey in which we measure some set of observables $\mu_k$ (along with their respective standard errors $\sigma_k$) whose theoretical values depend on our model. These observables could be the same quantity at different redshifts, or different quantities altogether, or a mixture of both. Assuming a Gaussian PDF, we can write down the likelihood as

$L \propto \exp\!\left[-\frac{1}{2} \sum_k \frac{(x_k - \mu_k(\boldsymbol{\theta}))^2}{\sigma_k^2}\right].$
Using the definition in eq. (2.47) we get for the Fisher matrix

$F_{ij} = \sum_k \frac{1}{\sigma_k^2}\, \frac{\partial \mu_k}{\partial \theta_i}\, \frac{\partial \mu_k}{\partial \theta_j}.$
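This sum over observables is straightforward to evaluate numerically with finite differences at the fiducial parameters. A minimal sketch follows; the linear toy model and all numbers are illustrative, not the thesis' observables.

```python
import numpy as np

def fisher_matrix(mu, theta_fid, sigma, eps=1e-6):
    """F_ij = sum_k (1/sigma_k^2) dmu_k/dtheta_i dmu_k/dtheta_j,
    with derivatives taken at the fiducial parameters by central differences."""
    p = len(theta_fid)
    grads = []
    for i in range(p):
        tp, tm = np.array(theta_fid, float), np.array(theta_fid, float)
        tp[i] += eps
        tm[i] -= eps
        grads.append((mu(tp) - mu(tm)) / (2 * eps))
    F = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            F[i, j] = np.sum(grads[i] * grads[j] / sigma**2)
    return F

# Illustrative toy model: observables mu_k = theta_0 + theta_1 * z_k
z = np.array([0.5, 1.0, 1.5, 2.0])
sigma = np.full_like(z, 0.1)
F = fisher_matrix(lambda t: t[0] + t[1] * z, [1.0, 0.3], sigma)
print(F)  # symmetric and positive definite
```

For a model linear in its parameters, as here, the central differences are exact and the Fisher matrix is independent of the fiducial values.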
2.4.1 Calculation rules

We will not go into a full derivation of the rules, as they can be found in A&T (ch. 13.3) and are quite straightforward, but rather simply state them here.

Fixing a parameter. If we want to know what the Fisher matrix would be given that we knew one particular parameter $\theta_i$ precisely, we simply remove the $i$-th row and column of the Fisher matrix.

Marginalizing over a parameter. If, on the other hand, we want to disregard a particular parameter $\theta_i$, we remove the $i$-th row and column from the inverse of the Fisher matrix (the covariance matrix) and invert again afterwards. If we are only interested in exactly one parameter $\theta_i$, then we cross out all other rows and columns until the covariance matrix has only one entry left. Thus, we arrive at the important result

$\sigma_i^2 = \left(F^{-1}\right)_{ii}.$
This requires the Fisher matrix to be positive definite, as it must be as the negative Hessian at a maximum.

Combination of Fisher matrices. Including priors in our analysis is extremely simple in the Fisher matrix formalism, since all we need to do is add the Fisher matrices:

$F = F_1 + F_2.$
This only holds if both matrices have been calculated with the same underlying fiducial model, i.e. the maximum likelihood is the same. The only difficulty lies in ensuring that the parameters of the model belonging to each matrix are identical and line up properly. If one matrix covers additional parameters, then the other matrix must be extended with rows and columns of zeros accordingly.

Parameter transformation. Often a particular parameterization of a model is not unique and there exists a transformation of parameters

$\boldsymbol{\theta} \to \boldsymbol{\theta}'(\boldsymbol{\theta}),$
which might happen when combining Fisher matrices from different sources. Then the Fisher matrix transforms like a tensor, i.e.

$F'_{ij} = \sum_{kl} J_{ki}\, F_{kl}\, J_{lj}, \qquad \text{i.e.} \qquad F' = J^T F J,$
where $J$ is the Jacobian matrix

$J_{ki} = \frac{\partial \theta_k}{\partial \theta'_i},$
which does not necessarily need to be a square matrix. If it is not, however, note that the new Fisher matrix will be degenerate and using it only makes sense when combining it with at least one matrix from a different source.
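The tensor transformation rule, including the degenerate non-square case, can be sketched as follows; both Jacobians here are hypothetical examples.

```python
import numpy as np

# Transform a Fisher matrix under a reparameterization theta -> theta'(theta),
# F' = J^T F J, with J_ki = d theta_k / d theta'_i.
F = np.array([[400.0, 500.0],
              [500.0, 750.0]])
J = np.array([[1.0],
              [2.0]])   # two old parameters collapsed onto one new one: J is 2x1

F_new = J.T @ F @ J
print(F_new)  # a 1x1 Fisher matrix for the single new parameter

# Going the other way (1 old parameter -> 2 new parameters), the transformed
# matrix is necessarily degenerate:
J2 = np.array([[1.0, 2.0]])             # 1x2 Jacobian
F_deg = J2.T @ np.array([[4.0]]) @ J2
print(np.linalg.matrix_rank(F_deg))     # 1, i.e. singular for a 2x2 matrix
```

The singular case illustrates the remark above: such a matrix only becomes useful once a matrix from another source, with independent information on the new parameters, is added to it.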

Chapter 3 Constraints on cosmological parameters by future dark energy surveys

With the knowledge of the observational specifications of future weak lensing surveys, we can use the Fisher matrix formalism to forecast the errors that said surveys will place on cosmological parameters. In particular, we want to expand the list of those parameters with values of the Hubble parameter and growth function at certain redshifts. An invaluable resource for this chapter is the book by A&T (ch. 14).

3.1 Power spectrum tomography

It has been shown that dividing up the distribution of lensed galaxies into redshift bins and measuring the convergence power spectrum in each of these bins as well as their cross-correlation can increase the amount of information extracted from weak lensing surveys (Hu, 1999; Huterer, 2002). This means on one hand that we need additional redshift information on the lensed galaxies, but on the other that we are possibly rewarded with knowledge about the evolution of dark energy parameters. Note that the purpose of the redshift bins in this thesis is two-fold, and power spectrum tomography is only one of them. The other is the linear interpolation of $H(z)$ and the growth function, where the centers of the redshift bins act as supporting points, making our analysis independent of assumptions about the growth function by any particular model. We choose to divide the redshift space into bins such that each redshift bin contains roughly the same number of galaxies according to the galaxy density function $n(z)$, i.e.

$\int_{z_i}^{z_{i+1}} n(z)\, dz = \frac{1}{N} \int_0^\infty n(z)\, dz$
for each bin $i$. We infer the values of the bin boundaries $z_i$ via a series of successive numerical integrations (see fig. 3.1). A common parametrization of $n(z)$ is

$n(z) \propto z^\alpha \exp\!\left[-\left(\frac{z}{z_0}\right)^\beta\right]$
with $\alpha = 2$ and $\beta = 3/2$. Here $z_0$ is related to the median redshift of the survey (Amara and Refregier, 2007).
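The equal-count binning can be sketched numerically via the inverse of the cumulative distribution; the values of $z_0$ and the number of bins below are illustrative choices, not the survey specifications used later.

```python
import numpy as np

# Sketch: split redshift space into bins of equal galaxy counts, assuming
# n(z) propto z^2 exp(-(z/z0)^1.5); z0 and the bin count are illustrative.
z0, n_bins = 0.64, 5
z = np.linspace(0.0, 5.0, 50001)          # effectively covers the full support
n_z = z**2 * np.exp(-((z / z0) ** 1.5))

cdf = np.cumsum(n_z)
cdf /= cdf[-1]                            # normalized cumulative distribution

# Bin boundaries are where the CDF crosses i / n_bins
edges = np.interp(np.linspace(0, 1, n_bins + 1), cdf, z)
print(np.round(edges, 3))

# Check: each bin holds the same fraction of galaxies
counts = np.diff(np.interp(edges, z, cdf))
print(np.allclose(counts, 1.0 / n_bins, atol=1e-3))  # True
```

Inverting the tabulated CDF with `np.interp` replaces the successive numerical integrations mentioned in the text with a single vectorized lookup.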

Figure 3.1: The area under the curve of the galaxy density function is the same in each redshift bin for .

We need a galaxy density function for each bin, and the naïve choice would be

$n_i(z) = \begin{cases} n(z) & z_i < z \leq z_{i+1} \\ 0 & \text{otherwise.} \end{cases}$
But to account for redshift measurement uncertainties, we will convolve $n_i$ with the probability distribution $p(z_{\mathrm{ph}}\,|\,z)$ of the measured redshift $z_{\mathrm{ph}}$ given the real value $z$:

$n_i(z) = n(z) \int_{z_i}^{z_{i+1}} p(z_{\mathrm{ph}}\,|\,z)\, dz_{\mathrm{ph}}.$
Note that the integrand vanishes if $z_{\mathrm{ph}}$ lies outside the $i$-th bin, so the limits can be adjusted to the corresponding finite values. This convolution reflects the fact that we cannot assign a galaxy to a particular bin with certainty when measuring the photometric redshift with finite precision. The probability distribution is here modeled by a Gaussian, i.e.

$p(z_{\mathrm{ph}}\,|\,z) = \frac{1}{\sqrt{2\pi}\,\sigma_z} \exp\!\left[-\frac{(z - z_{\mathrm{ph}})^2}{2\sigma_z^2}\right]$
with a redshift-dependent width $\sigma_z$ (see Ma et al. (2006) for details). Finally, we normalize each function to unity, such that

$\int_0^\infty n_i(z)\, dz = 1.$
The resulting functions can be visualized as in fig. 3.2.

Figure 3.2: Normalized galaxy density functions for each bin with , convolved with a Gaussian to account for photometric redshift measurement errors.

3.2 The matter power spectrum

Future weak lensing surveys will supply us with the convergence power spectrum in which the cosmological information we are seeking is imprinted. However, for now we will have to rely on simulated data. As we will see in section 3.3, the convergence power spectrum depends on the matter power spectrum, which can be simulated by a fitting formula derived by Eisenstein and Hu (1999). Non-linear corrections need to be accounted for, since we are assuming an angular galaxy density of , which corresponds to a multipole up to an order of magnitude of


As we saw in fig. 2.6, non-linear corrections are required for , and for this we need to use the results by Smith et al. (2003). Here we will give a brief overview of the essential results found in the two papers cited above, using our notation and some simplifications (no massive neutrinos, flat cosmology).

3.2.1 Fitting formulas

It should be stressed that the growth function in Eisenstein’s paper differs from ours by a factor of , so . A precise definition of the growth function will be given in section 3.6.2. They also define . Only in this section will we differentiate between the linear and the non-linear power spectrum. Later on, the expression “power spectrum” or will always imply that non-linear corrections have been applied.

Using the definition of the growth function, the matter power spectrum in the linear regime can be written as (cf. eq. (3.70))

$P_L(k, z) = A\, k^{n_s}\, T^2(k)\, G^2(z),$
where $n_s$ is called the scalar spectral index. The transfer function $T(k)$ can now be fitted by a series of functions as follows.


with (we assume that there are no massive neutrinos, in which case equals unity)






Here, we used the abbreviations




By definition, the power spectrum needs to be normalized such that

$\sigma_8^2 = \int_0^\infty \frac{k^2\, dk}{2\pi^2}\, P(k)\, W^2(kR),$
where $W$ is the Fourier transform of the real-space window function, in this case a spherical top hat of radius $R$ (in Mpc), i.e.

$W(x) = \frac{3}{x^3}\left(\sin x - x \cos x\right), \qquad x = kR.$
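As a small sketch, this window function is easy to evaluate numerically; the Taylor fallback near $x = 0$ is an implementation detail to avoid catastrophic cancellation, not part of the original text.

```python
import numpy as np

def top_hat_window(x):
    """Fourier transform of a spherical top hat, W(x) = 3 (sin x - x cos x) / x^3,
    with a Taylor-series fallback near x = 0 to avoid cancellation."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-3
    out = np.empty_like(x)
    xs = x[~small]
    out[~small] = 3.0 * (np.sin(xs) - xs * np.cos(xs)) / xs**3
    out[small] = 1.0 - x[small] ** 2 / 10.0   # leading terms of the series
    return out

print(top_hat_window(np.array([1e-6])))  # ~1: smoothing is transparent on large scales
print(top_hat_window(np.array([10.0])))  # small: modes well inside R average out
```

The limits match the physical picture: for $kR \ll 1$ the top hat passes the mode unchanged, while for $kR \gg 1$ fluctuations inside the smoothing sphere cancel.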
3.2.2 Non-linear corrections

When applying non-linear corrections, we use the dimensionless power spectrum, which is defined by

$\Delta^2(k) \equiv \frac{k^3}{2\pi^2}\, P(k),$
and similarly for other indices. It turns out that the non-linear power spectrum can be decomposed into a quasi-linear term and contributions from self-correlations

$\Delta^2_{\mathrm{NL}}(k) = \Delta^2_Q(k) + \Delta^2_H(k),$
which are given by


with and


In these equations, is defined via




while the effective index is


and the spectral curvature is


The best fit yielded the following values for the coefficients:


Also, the functions in a flat universe are given by


3.2.3 Parameterized post-Newtonian formalism

We shall allow another degree of freedom in our equations that stems from scalar theories in more than four dimensions. The Gauss–Bonnet theorem in differential geometry connects the Euler characteristic of a two-dimensional surface with the integral over its curvature. In general relativity, it gives rise to a term with unique properties: it is the most general term that, when added to the Einstein–Hilbert action in more than four dimensions, leaves the field equations second-order differential equations. In four dimensions, the equation of motion does not change at all under this generalization, unless we are working in the context of a scalar-tensor theory, in which case the Gauss–Bonnet term couples to the scalar part and modifies the equation of motion. For our purposes, this will effectively make Newton’s gravitational constant a variable, denoted by $G_{\mathrm{eff}}$ (for details, see Amendola et al. (2006) and references therein). This is called the parameterized post-Newtonian (PPN) formalism.

By defining

$Q(k, a) \equiv \frac{G_{\mathrm{eff}}(k, a)}{G},$
one can write the Poisson equation in Fourier space as

$-k^2\Phi = 4\pi G\, Q\, a^2\, \rho_m\, \Delta_m,$
where $\Delta_m$ accounts for comoving density perturbations of matter. If we admit anisotropic stress, then the two scalar gravitational potentials do not satisfy $\Phi = \Psi$, and we parameterize this by

$\eta \equiv \frac{\Psi}{\Phi} - 1.$
With weak lensing, only the combination

$\Sigma \equiv Q \left(1 + \frac{\eta}{2}\right)$
appears in the convergence power spectrum, and it does so in a very simple way: we can just replace the matter power spectrum $P(k)$ by $\Sigma^2 P(k)$. Obviously, in the standard $\Lambda$CDM model we have $\Sigma = 1$, so we allow it to vary in time and parameterize it as

$\Sigma(a) = 1 + \Sigma_0\, a,$
such that initially it looks like the standard model and then gradually diverges from there (Amendola et al., 2008). Finally, we add $\Sigma_0$ to our list of cosmological parameters with a fiducial value of 0.

3.3 The Fisher matrix for the convergence power spectrum

Building on the methods outlined in chapter 2, we can now compute the weak lensing Fisher matrix. A full derivation of its expression can be found in A&T, ch. 14.4 or Hu (1999), with the final result being