# Monte Carlo models of dust coagulation

## Abstract

The thesis deals with the first stage of planet formation, namely dust coagulation from micron to millimeter sizes in circumstellar disks. For the first time, we collect and compile the recent laboratory experiments on dust aggregates into a collision model that can be implemented into dust coagulation models. We put this model into a Monte Carlo code that uses representative particles to simulate dust evolution. Simulations are performed using three different disk models in a local box (0D) located at 1 AU distance from the central star. We find that the dust evolution does not follow the previously assumed growth-fragmentation cycle; instead, growth is halted by bouncing before the fragmentation regime is reached. We call this the bouncing barrier, an additional obstacle in the already complex formation process of planetesimals. The absence of the growth-fragmentation cycle and the halted growth have two important consequences for planet formation. 1) Disk atmospheres are observed to be dusty throughout their lifetime. Previous models concluded that small, continuously produced fragments keep the disk atmospheres dusty. We show, however, that small fragments are not produced, because bouncing prevents fragmentation. 2) As particles do not reach the fragmentation barrier, their sizes are smaller than the sizes reached in previous dust models. Forming planetesimals from such tiny aggregates is a challenging task. We investigate point 1) in more detail. A vertical column of a disk (1D) is modeled, including the sedimentation of the particles. We find that already intermediate levels of turbulence can prevent particles from settling to the midplane. We also find that, due to bouncing, the particle size distribution is narrow and homogeneous as a function of height in the disk. This finding has important implications for observations.
If it is reasonable to assume that the turbulence is constant as a function of height, the particles observed in the disk atmospheres have the same properties as those at the midplane.

Dissertation

submitted to the

Combined Faculties for the Natural Sciences and for Mathematics

of the Ruperto-Carola University of Heidelberg (Germany)

for the degree of

Doctor of Natural Sciences

presented by

András Zsom

born in Püspökladány (Hungary)

Oral examination: 18 October 2010


## Zusammenfassung

Diese Arbeit befasst sich mit der frühesten Phase der Planetenentstehung, nämlich der Koagulation von mikrometer- hin zu millimetergroßen Staubpartikeln in zirkumstellaren Scheiben. Als erste Studie dieser Art simulieren wir die Staubentwicklung in ‘representative particle’ Monte-Carlo-Simulationen unter Verwendung eines Kollisionsmodells, das die neuesten Laborexperimente berücksichtigt. Die Simulationen verwenden drei verschiedene Scheibenmodelle in einer lokalen Box (0D) in einem Abstand von 1 AU vom Zentralstern. Unsere Ergebnisse zeigen, dass die Staubentwicklung nicht dem bislang angenommenen Wachstums-Fragmentations-Zyklus folgt, sondern dass das Wachstum von abprallenden Stößen aufgehalten wird, bevor es das Fragmentationsregime erreicht. Wir bezeichnen dies als ‘bouncing barrier’, ein weiteres Hindernis im ohnehin schon komplexen Entstehungsprozess von Planetesimalen. Die Abwesenheit des Wachstums-Fragmentations-Zyklus und das unterbundene Teilchenwachstum haben zwei wichtige Konsequenzen für die Entstehung von Planeten: 1) Beobachtungen zeigen, dass die Atmosphären von Scheiben während ihrer gesamten Lebenszeit staubig sind. Bisherige Modelle folgerten, dass kontinuierliche Fragmentation diese kleinen Staubteilchen produziert und dadurch die Scheibenatmosphäre “staubig” hält. Unsere Ergebnisse zeigen jedoch, dass kleine Fragmente gar nicht erst produziert werden, weil die Fragmentationsgrenze nicht erreicht wird. 2) Da Teilchen die Fragmentationsbarriere nicht erreichen, bleiben sie kleiner als in bisherigen Modellen. Die Entstehung von Planetesimalen aus solch kleinen Staubaggregaten ist eine herausfordernde Aufgabe. Wir haben uns mit Punkt 1) näher befasst. Hierzu modellieren wir einen vertikalen Schnitt (1D) durch die Scheibe unter Berücksichtigung von Staubsedimentation. Unsere Ergebnisse zeigen, dass schon eine moderat ausgeprägte Turbulenz die Sedimentation zur Mittelebene unterbinden kann.
Des Weiteren fanden wir heraus, dass die Verteilung der Teilchengröße schmal und eine homogene Funktion der Höhe über der Mittelebene ist. Dies hat wichtige Auswirkungen für Beobachtungen: Unter der Annahme, dass die Turbulenz höhenunabhängig ist, haben die in der Scheibenatmosphäre beobachteten Teilchen dieselben Eigenschaften wie diejenigen in der Mittelebene.


## Contents

- 1 Introduction
- 2 A representative particle approach to the coagulation of dust particles
- A Introduction
- B The method
- C Discussion of the method
- D Standard tests and results
- E Applications to protoplanetary disks
- F Conclusions and outlook
- 3 Mapping the zoo of laboratory experiments
- G Introduction
- H Collision experiments with relevance to planetesimal formation
- I Classification of the laboratory experiments
- J Collision regimes
- K Porosity evolution of the aggregates
- L Discussion
- 4 Introducing the bouncing barrier
- M Introduction
- N The nebulae model
- O Collision model and implementation
- P Results
- Q Discussion
- R Summary
- 5 Sedimentation driven coagulation within the snow line
- S Introduction
- T Numerical method
- U Results
- V Discussion and future work
- W Summary
- 6 Conclusions and Outlook


## Chapter 1: Introduction

Perhaps the most important astronomical discovery of the second half of the last century was the detection of planets orbiting nearby Sun-like stars. This discovery, and the vastly growing number of exoplanet detections since then, further triggers our interest in their formation, their atmospheres, and their chemical and geophysical evolution.

The theory of planet formation needs to explain all the different types of exoplanets, ranging from hot Jupiters, which revolve around their star with periods of days, to presumably rocky super-Earths, to giant planets orbiting at more than 100 AU from the central star. This variety, and the complex processes involved, make the field fascinating.

The formation of stars starts with the gravitational collapse of a dense molecular cloud. Due to the initial angular momentum of the cloud, most of the infalling matter does not fall directly onto the protostar but forms a disk around it. The matter in the dense cloud comes in two forms: roughly 99% (in mass) is gas, and the rest consists of solid, sub-micron sized dust particles. This solid matter serves as the building block of planets. We see that the way from sub-micron sized particles to a planet of several 1000’s of kilometers in radius is a long one (see Sec. 1 for more details). We do not attempt to go the whole way in this thesis, but rather concentrate on the initial stages of growth up to millimeter-centimeter sizes.

To understand planet formation and dust evolution, we first need to know the typical physical conditions of the environment where they happen, i.e., the properties of protoplanetary disks. Our solar system provides only vague clues regarding its formation history, as the solar nebula has long dissipated; nevertheless, the distribution and properties of the planets, minor bodies, satellites and meteorites all contribute to our understanding. These results are reviewed in Sec. 2.1. Observations of existing protoplanetary disks at all wavelengths from UV to millimeter have revealed the basic properties of planet-forming disks, such as their typical masses, lifetimes, physical conditions (density and temperature structure), and important processes such as accretion, photoevaporation, etc. We review these processes in Sec. 2.2.

Once the typical physical properties of the disk are known, we need to understand the driving force of particle growth. The solid particles in the gas disk feel the drag from the gas and have relative velocities. As a rule of thumb, the relative velocity increases with mass until the particles reach a meter in size. Due to these (random and systematic) relative velocities, the particles can collide and grow; it is therefore crucial to understand which sources of relative velocity exist. One should also keep in mind that low velocity collisions produce self-similar, so-called fractal structures, while intermediate and high velocities lead to deformation (restructuring and/or destruction) of the aggregates. Such deformation changes the aerodynamical properties of the particles and thus, in turn, the relative velocity (see Chapters N.2 and T.3).
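As a concrete illustration of one such velocity source (a sketch with assumed values, not taken from the thesis models), the Brownian-motion relative velocity between two grains follows directly from the thermal energy of the gas:

```python
import math

def brownian_dv(m1, m2, T):
    """Brownian relative velocity [cm/s] of two grains of mass m1, m2 [g]
    suspended in gas of temperature T [K]."""
    k_B = 1.380649e-16  # Boltzmann constant [erg/K]
    return math.sqrt(8.0 * k_B * T * (m1 + m2) / (math.pi * m1 * m2))

# Two micron-sized silicate monomers (assumed rho = 3 g/cm^3, a = 1 micron)
rho, a = 3.0, 1.0e-4                 # cgs units
m = 4.0 / 3.0 * math.pi * rho * a**3
dv = brownian_dv(m, m, 200.0)        # assumed gas temperature ~200 K near 1 AU
print(dv)                            # roughly 0.1 cm/s, i.e. of order mm/s
```

Because the masses enter in the denominator, Brownian motion dominates only for the smallest grains; turbulence and systematic drift take over as the aggregates grow, consistent with the rule of thumb above.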

We have to know the outcome of each collision: will the aggregates stick, bounce or fragment? During the last fifteen years, a huge amount of laboratory data has been collected on the collisions of dust aggregates. We know that the monomers (sub-micron sized particles) stick together due to surface forces (van der Waals attraction for silicates and molecular dipole interaction for ices). We can measure the strength of these surface forces; furthermore, it is possible to produce fractal aggregates and to perform experiments with porous but non-fractal structures over a wide velocity range (from mm/s to several 10’s of m/s; see Sec. 3).

The laboratory experiments are crucial for our understanding of the microphysics of collisions, as we cannot gain this information in any other way. ‘Molecular dynamics’ models can also be used to simulate collisions of aggregates; however, these models also rely on measured quantities, such as the surface energy, Young’s modulus, etc. The difficulty with the experiments is that their results are rather complex and not always easily usable in theoretical modeling. One of the main goals of this thesis is to collect all the available laboratory data and compile it into a collision model (Chapter 3).

Once we know what happens during individual collisions, we have to calculate how the entire population of dust particles evolves. Traditional methods integrate the Smoluchowski equation with only one particle property, the particle mass (the equation and such solvers are introduced and described in Chapter D). Although such methods are fast, so that the whole disk can be modeled, they have difficulty including any additional dust property. We therefore use a Monte Carlo method (described in Chapter 2), which is flexible, can follow the porosity of the aggregates as well (a dust parameter in the experiment-based collision model), and makes it straightforward to include a collision model of arbitrary complexity. The price we pay for this flexibility is the speed of the code: so far we can only follow dust evolution in a local box (see Chapter 4) or in a 1D vertical column (Chapter 5).
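The idea behind such a Monte Carlo approach can be illustrated with a toy direct-simulation coagulation code (a minimal Gillespie-type sketch with a constant kernel and perfect sticking; the thesis code additionally uses representative particles, porosity, and the laboratory-based collision model):

```python
import random

def mc_coagulate(masses, kernel, t_end, volume=1.0, seed=1):
    """Toy direct-simulation Monte Carlo coagulation (Gillespie scheme).

    masses : list of particle masses; kernel(m_i, m_j) gives the collision
    rate coefficient. Two particles are picked per event and merged
    (perfect sticking); bouncing/fragmentation branches would go here
    in a full collision model."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_end and len(masses) > 1:
        n = len(masses)
        # total collision rate; O(n^2) per event, fine for a toy model
        rates = [(i, j, kernel(masses[i], masses[j]) / volume)
                 for i in range(n) for j in range(i + 1, n)]
        r_tot = sum(r for _, _, r in rates)
        t += rng.expovariate(r_tot)        # waiting time to next collision
        if t > t_end:
            break
        # pick a pair with probability proportional to its rate
        x = rng.random() * r_tot
        for i, j, r in rates:
            x -= r
            if x <= 0.0:
                break
        masses[i] += masses[j]             # perfect sticking
        masses.pop(j)
    return masses

m0 = [1.0] * 200                           # 200 equal-mass particles
out = mc_coagulate(list(m0), lambda a, b: 1e-3, t_end=5.0)
print(len(out), sum(out))                  # fewer, bigger particles; total mass conserved
```

Replacing the `masses[i] += masses[j]` line with a branch over sticking, bouncing, and fragmentation outcomes is exactly where an experiment-based collision model of arbitrary complexity plugs in.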

One should also realize how entangled dust coagulation is with the other ongoing processes in disks. A truly self-consistent model should take into account the gas and dust evolution, how the surfaces of dust grains influence the chemistry and ionization fraction of the gas, how this ionization fraction regulates the coupling of the gas to magnetic fields, how this coupling determines the strength of turbulence, and, finally closing the loop, how the strength of turbulence determines the dust evolution. The opacity of the dust influences the observed properties of disks; the dust also sets the temperature of the gas in the main body of the disk (except in the low-density upper layers); and the dust particles (if sufficiently concentrated) can influence the gas dynamics as well. Although such a complex model, taking all these processes self-consistently into account, does not exist yet, the effects of the individual processes are at least approximately known (see Sec. 4).

### 1 Planet formation in a nutshell

Star formation mostly happens in star clusters, in which roughly half of the stars are part of binary or small multiple systems. The formation of an isolated star is easier to understand (no initial fragmentation of the parent cloud into multiple objects, and no environmental effects, such as stellar encounters and strong external radiation, have to be treated); therefore isolated star formation is better studied and understood than star formation in an active environment.

Star formation starts with the collapse of the parent cloud. This cloud rotates and has too much angular momentum to collapse directly onto the protostar, so a disk forms around the central object. It is observed that matter is accreted from the disk onto the central star (Hartmann et al., 1998). For accretion to proceed, angular momentum has to be redistributed within the disk (viscous spreading – Shakura & Sunyaev (1973); Lynden-Bell & Pringle (1974)) or lost (e.g., by disk winds – Blandford & Payne (1982); Lubow et al. (1994)). These processes happen on a much longer timescale than the orbital period; therefore, in many applications it is a reasonable assumption that disks are in quasi-equilibrium. How angular momentum is lost or redistributed is one of the most actively studied areas of star formation and disk physics (see Sec. 4 for more details).
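The separation of timescales invoked here can be checked with order-of-magnitude numbers (an illustrative α-disk estimate with assumed parameters, not a result of this chapter): with viscosity ν = α c_s H, the viscous timescale is t_visc ~ r²/ν = (r/H)²/(α Ω).

```python
import math

# cgs constants
G, M_sun, AU, yr = 6.674e-8, 1.989e33, 1.496e13, 3.156e7

r = 1.0 * AU
Omega = math.sqrt(G * M_sun / r**3)    # Keplerian angular frequency [1/s]
alpha, aspect = 1e-2, 0.05             # assumed turbulence strength and H/r

t_orb = 2 * math.pi / Omega / yr       # orbital period [yr]
t_visc = (1.0 / aspect)**2 / (alpha * Omega) / yr
print(t_orb, t_visc)                   # ~1 yr orbit vs ~6e3 yr viscous time
```

Even at 1 AU the viscous timescale exceeds the orbital period by several orders of magnitude, which is why the quasi-equilibrium assumption is usually safe.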

The parent cloud also contains dust particles of sub-micrometer size (Mathis et al., 1977). It is usually assumed that the dust-to-gas ratio of the interstellar matter is 1:100 by mass. This ratio is inherited by the disk, and this solid material (iron, silicates and ices) provides the ingredients of terrestrial planets and of the cores of some giant planets.

Currently there are two leading giant planet formation models: disk instability (Boss, 1997) and core accretion. These models are often thought of as competing with each other, but one can imagine that the two mechanisms operate simultaneously in different parts of the disk.

The core accretion model (Mizuno, 1980; Pollack et al., 1996) starts with the assembly of the protoplanetary core. This process converts the sub-micron sized dust particles into a core some thousands of kilometers in size. The assembly of the core happens in three steps. 1.) The dust particles grow by two-body interactions in which the gravitational interaction between the bodies is negligible; in other words, dust coagulation. Coagulation can easily produce aggregates of millimeter size. 2.) How these aggregates are then converted into planetesimals is one of the key problems in planet formation. This step faces many problems (the radial drift barrier, the fragmentation barrier, the bouncing barrier), which are discussed in detail in the upcoming chapters of this thesis. Planetesimals could form via coagulation if the environment is quiet, meaning that turbulence and radial drift are low (Brauer et al., 2008b). Alternatively, millimeter-decimeter sized particles could be concentrated in turbulent eddies and/or in vortices, where these concentrations are further enhanced by the self-gravity of the particles. These particles could then directly coalesce into kilometer sized planetesimals via many-body interactions (see the discussion in Chapter Q.3). Such a process cannot be modeled by the methods of this thesis, as we can only follow two-body interactions. 3.) Once planetesimals have formed, they grow further by gravitational agglomeration.

Once the core is formed, and while gas is still present in the disk, an atmosphere gathers around the core. Initially the envelope around the core is in hydrostatic equilibrium: the energy gained from impacting planetesimals and the gravitational potential energy released by the contraction of the atmosphere are balanced by the energy lost through convection and radiative diffusion. When a critical mass is exceeded, however, runaway gas accretion starts. This runaway accretion ends when no more gas is available around the planet, due to the opening of a gap or to disk dissipation. After this stage, the planet goes through a Kelvin-Helmholtz contraction.

The timescales involved are a serious uncertainty of the core accretion model. A typical disk lifetime is a few Myr (e.g., Haisch et al. (2001) and Sicilia-Aguilar et al. (2006)), during which the core has to be assembled and a sufficient amount of gas has to be accreted.

Once the planets are formed, they can migrate in the gas disk (Masset, 2008; Papaloizou et al., 2007) and interact gravitationally through resonances (Lecar et al., 2001). Once the gas disk has dispersed, e.g., by photoevaporation (due to external radiation – Adams et al. (2004) – or due to radiation from the central star – Alexander et al. (2006); Gorti et al. (2009)), a so-called debris disk can still be present around the young star.

### 2 Observations of planet-forming systems

The planetary system that we can examine in the greatest detail is our own solar system. We know the positions and physical properties of the eight planets; we are able to observe the asteroid belt, minor bodies, and satellites in the solar system; furthermore, we can directly examine meteorites in the laboratory and determine their absolute and relative ages. All this information contains hints about the formation history of the solar system.

We can also observe other planet-forming systems at various wavelengths ranging from millimeter to UV, and we can use molecular spectral lines to trace different molecular species. These observations put constraints on the physical properties of disks.

#### 2.1 The solar system

We do not know the exact mass of the dust and gas from which the planets of the solar system formed. However, it is possible to derive a lower mass limit using the masses and positions of the solar system planets. This nebula model is called the minimum mass solar nebula (MMSN) model, introduced by Weidenschilling (1977). In this model, the heavy elements of the planets are mixed with hydrogen and helium to reach solar composition. The solar system is divided into concentric annuli, each centered on a planet and extending halfway to the next planet. The matter is then spread homogeneously within these annuli to obtain the gas surface density at the location of each planet. Performing these steps, one finds that the surface density scales as $r^{-3/2}$. The most commonly used normalization is (Hayashi, 1981)

$$\Sigma(r) = 1700\left(\frac{r}{1\,\mathrm{AU}}\right)^{-3/2}\,\mathrm{g\,cm^{-2}}. \qquad (1)$$

This is a lower limit on the mass of the solar nebula because it assumes that all the solids present in the disk were incorporated into the planets. The model also assumes that the planets formed at their current locations, which might not be true due to the effects of migration. There are attempts to update this model by taking the migration of planets into account. Such calculations by Desch (2007) found a significantly steeper surface density profile, although this model is debated in the literature. However, based on millimeter observations of protoplanetary disks, Andrews & Williams (2007) found a much shallower profile. Their findings are probably more representative of the outer parts of the disks than of the inner parts. We conclude that the mass distribution in the early solar system is still very much debated.
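For orientation, integrating the commonly used MMSN profile Σ = 1700 (r/AU)^(−3/2) g cm⁻² over plausible disk limits (the limits themselves are an assumption here) recovers the often-quoted minimum mass of roughly a percent of a solar mass:

```python
import math

AU, M_sun = 1.496e13, 1.989e33          # cgs
sigma0 = 1700.0                          # Hayashi (1981) normalization [g/cm^2]
r_in, r_out = 0.35, 36.0                 # assumed integration limits [AU]

# M = int 2*pi*r*Sigma dr = 4*pi*sigma0*AU^2 * (sqrt(r_out) - sqrt(r_in))
M = 4 * math.pi * sigma0 * AU**2 * (math.sqrt(r_out) - math.sqrt(r_in))
m_mmsn = M / M_sun
print(m_mmsn)                            # ~0.013 solar masses
```

The analytic integral works out so simply because the r^(−3/2) profile makes the integrand scale as r^(−1/2).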

The regular satellites of Jupiter (the Galilean satellites) have similar masses and tight prograde orbits which lie close to the equatorial plane of the planet, and they are trapped in mean motion resonances. This suggests that these satellites formed in a disk which surrounded the planet, similarly to the primordial solar nebula surrounding the early Sun. On the other hand, Saturn has only one big regular satellite, Titan. The difference between the Jovian and Saturnian systems can be explained by a quick truncation of the infalling matter in the Jovian system, caused by Jupiter opening a gap in the solar nebula. Due to the lower mass of Saturn, the Saturnian disk had a longer lifetime, so a single big moon could assemble (Sasaki et al., 2010).

Other, irregular satellites orbit at larger distances from their host planet, sometimes on retrograde orbits. Such satellites were probably captured by the planet.

The solar system also contains many minor bodies. In the inner solar system the asteroid belt is prominent. The distribution of the semi-major axes of the asteroid belt objects is not homogeneous; instead, gaps can be observed (the Kirkwood gaps), and the asteroid belt can be divided into three regions: the inner belt (inside 2.5 AU, the 3:1 resonance with Jupiter), the central belt (between 2.5 and 3.28 AU, the 2:1 resonance), and the outer belt (beyond 3.28 AU). These asteroids were heavily perturbed by Jupiter, and therefore could not assemble into a planet (Petit et al., 2001). Their mass is also greatly depleted compared to the primordial mass (Weidenschilling, 1977), and the different asteroid types are radially mixed. It seems that, in order to explain all these properties of the asteroid belt, a combination of sweeping secular resonances from the migrating Jupiter and Saturn and embedded planetary embryos that excite and scatter one another is needed (O’Brien et al., 2007).

Perhaps more interesting in the context of the primordial solar nebula are the Kuiper Belt Objects in the outer solar system (KBOs; for a review see Luu & Jewitt (2002)). These objects fall into several classes: the classical KBOs (low eccentricity), the resonant KBOs (in 4:3, 3:2 and 2:1 resonance with Neptune; intermediate eccentricity), and the scattered KBOs (eccentricities around 0.5). As we can see, the Kuiper belt is dynamically excited by Neptune. Two other properties of the Kuiper belt are important. 1.) The Kuiper belt has a mass of 0.1 Earth masses, which is surprisingly low: accretion models predict that a mass of 10 Earth masses must have existed to explain the growth of the objects we see now (see e.g. Kenyon & Luu (1998)). 2.) The Kuiper belt ends near 50 AU. Gomes et al. (2004) examined two scenarios that could explain the low total mass of the KBOs: that the vast majority of KBOs crossed orbits with Neptune and were scattered, or that fragmentation into dust removed most of the mass of the Kuiper belt. They concluded that neither scenario is likely; instead they proposed that the protoplanetary disk possessed an edge at about 30 AU. This edge is responsible for stopping the outward migration of Neptune, and during this migration Neptune could have pushed the KBOs outwards (Levison et al., 2008).

We have the unique possibility to measure the ages of solar system rocks accurately by means of radiometric dating. Although the accuracy of dating in the solar system is unmatched by any astronomical observation, one must keep in mind that this accuracy is achieved for a single (and rather special) system. One can calculate absolute and relative ages of meteorites using long-lived and short-lived radionuclides, respectively (for a review see Russell et al. (2006); Wadhwa et al. (2007)).

Primitive meteorites (chondrites) were never differentiated and are relatively unaltered since their time of formation; therefore they preserve the early history of the solar system. Most primitive meteorites contain small inclusions that were heated to high temperatures, during which the amorphous dust became crystalline. Spectral signatures of crystalline dust in protoplanetary disks are common; such a crystallization process was observed ‘in action’ after an outburst of the Sun-like young star EX Lupi (Ábrahám et al., 2009).

Chondrules are the most abundant type of inclusion: more than 70% of the volume of primitive meteorites consists of chondrules. CAIs (calcium-aluminum-rich inclusions) are rarer, but they were subject to a more extreme heating event. The formation, and specifically the heating mechanism, of chondrules and CAIs is the subject of active research (Connolly et al., 2006). The three main hypotheses are: 1.) heating near the young Sun (the X-wind model) with strong outward transport; 2.) shock waves in the gas disk; 3.) collisions between planetesimals and/or protoplanets.

CAIs are the oldest objects in the solar system, with an age of 4567.11 ± 0.16 Myr as determined from Pb-Pb dating (Russell et al., 2006). This age pinpoints a specific event, namely the solidification of the CAIs. One has to keep in mind that other events, like the collapse of the Sun’s parent cloud, happened earlier, but such events cannot be dated precisely. Absolute age determination has only recently become accurate enough to reliably determine the time interval between the formation of CAIs and chondrules. Such measurements suggest that some 1–2 Myr passed between the formation of the oldest CAIs and the formation of chondrules. Absolute age determination methods assume that the abundance of parent and daughter isotopes is only altered via radioactive decay.

The relative ages of chondrules and CAIs can be determined with another method, using short-lived radionuclides such as ²⁶Al. This method also assumes that the abundance of parent and daughter isotopes is altered only via radioactive decay. Furthermore, it is assumed that ²⁶Al was uniformly distributed in the disk (isotopic equilibration); otherwise, differences in the original ²⁶Al/²⁷Al ratio can indicate different local formation environments. The ²⁶Al method also yields a relative age of 1–2 Myr between the CAIs and chondrules.
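The logic of the ²⁶Al chronometer can be sketched with textbook numbers (the canonical initial ratio of ~5×10⁻⁵ and a chondrule ratio of ~1×10⁻⁵ are assumed here purely for illustration):

```python
import math

t_half = 0.717                    # half-life of 26Al [Myr]
ratio_cai = 5e-5                  # assumed canonical 26Al/27Al in CAIs
ratio_chon = 1e-5                 # assumed inferred ratio for chondrules

# radioactive decay: ratio(t) = ratio(0) * 2^(-t / t_half)
dt = t_half / math.log(2) * math.log(ratio_cai / ratio_chon)
print(dt)                         # ~1.7 Myr between CAI and chondrule formation
```

With these inputs the decay law reproduces the 1–2 Myr age gap quoted above; note that the result only holds if the differing ratios reflect age and not local formation environment.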

This age spread fits within the observed lifetime of disks, and (as chondrules and CAIs coexist in meteorites) it suggests that the formation of planetesimals was either delayed or ongoing for several Myr. Rapid planetesimal formation, in less than 1 Myr, is not supported by the meteoritic data (Russell et al., 2006).

#### 2.2 Observations of protoplanetary systems

Most of the information about protoplanetary disks is obtained through infrared (IR) measurements. Disks show an IR excess due to the presence of dust particles, which scatter or absorb the radiation of the central star and re-emit thermal radiation in the IR. Near-IR (NIR) wavelengths map the warm dust close to the star (on the order of 1 AU or less), which originates from the upper layers of the disk. Far-IR (FIR) radiation maps colder dust further away from the star. The spectral energy distribution (SED) can be used to fit the disk parameters, such as the disk geometry, dust composition, and the temperature and density structure (for a 2D radiative transfer model see Dullemond & Dominik (2004); a review of disk modeling can be found in Dullemond et al. (2007)).

The IR excess of disks can be used to determine the typical lifetime of disks (e.g., Haisch et al. (2001) and Sicilia-Aguilar et al. (2006)). One can measure the disk fraction of young stars (i.e., what percentage of stars shows an IR excess) in different young clusters and correlate this disk fraction with the estimated age of the cluster. Following this procedure, it was concluded that only half of the young stars still have disks in a cluster of 2 Myr age.
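A simple way to see what such a disk fraction implies (a toy exponential-dispersal model, assumed here for illustration and not the actual fitting procedure of the cited surveys):

```python
import math

# If disk dispersal were roughly exponential, f(t) = exp(-t / tau),
# then a 50% disk fraction at 2 Myr fixes the e-folding time:
tau = 2.0 / math.log(2.0)          # ~2.9 Myr
frac6 = math.exp(-6.0 / tau)       # surviving disk fraction at 6 Myr
print(frac6)                       # ~0.12, i.e. only ~12% of disks left
```

Under this toy model, disks become rare by ~6 Myr, which is qualitatively consistent with the few-Myr lifetimes quoted throughout this chapter.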

A prominent feature of IR spectra is the 10 micron feature, which originates from small silicate particles (of order sub-micron in size) in the upper layers of disks. The shape of the 10 micron feature contains valuable information about the composition of these grains (van Boekel et al., 2006), although some caution is required when interpreting the data (Juhász et al., 2009). The most interesting aspect of the 10 micron feature for the topic of this thesis is that it can provide evidence for grain growth (van Boekel et al., 2003). Some sources have a flatter and broader spectral feature, which suggests that in these sources the particles have grown to a few microns in size in the upper layers. Theoretical models including particle settling, coagulation, and radiative transfer cannot yet convincingly reproduce this aspect of the 10 micron feature (Dullemond & Dominik, 2008). Sedimentation-driven coagulation is the topic of Chapter 5, where these physical processes are discussed in detail.

Optical and NIR scattered-light images contain a lot of information about disks. The observational technique used differs with disk inclination: coronagraphic measurements are used for high and intermediate inclinations; observing optically thick edge-on disks requires high spatial resolution but not high contrast, so adaptive optics systems are advantageous; and the Orion nebula provides a unique opportunity to observe silhouette disks (the disks appear dark against the bright background of the nebula). The information these images provide also depends on the inclination. From face-on disks we can determine the ellipticity of the disk, the dependence of the surface brightness on radius, and the ratio of scattered to unscattered light (the relative brightness of the disk and the star). At intermediate inclinations (when the central star is still visible), one can determine the relative brightness of the disk and the star, and the outer radius of the disk (if both the upper and lower parts of the nebula are visible). In the case of edge-on disks, the inclination and the effective scale height of dust in the outer disk can be determined very precisely. A detailed discussion of scattered-light images can be found in Watson et al. (2007).

Millimeter observations are a useful tool to estimate the mass of the solid material in the disk, and they provide evidence for grain growth in the outer disk. At millimeter and sub-millimeter wavelengths (assuming the Rayleigh-Jeans limit and optically thin emission), the observed flux is

$$F_\nu = \frac{M_{\rm dust}\,\kappa_\nu\,B_\nu(T)}{d^2}, \qquad (2)$$

where $\kappa_\nu$ and $T$ are the opacity and temperature of the dust, $M_{\rm dust}$ is the mass of the dust in the disk, and $d$ is the distance to the source. If $T$ can be obtained otherwise and $\kappa_\nu$ is known, the mass of the dust disk can be calculated. If the opacity has a power-law dependence ($\kappa_\nu \propto \nu^{\beta}$), then the flux is $F_\nu \propto \nu^{\alpha}$ with $\alpha = 2 + \beta$. Using multi-wavelength measurements, we can obtain $\beta$. The $\beta$ parameter of the interstellar matter is $\approx 1.7$, but mm-observations of disks (most recently by Ricci et al. (2010)) show that $\beta$ is smaller than the ISM value, around 0.5–1. There can be several reasons why $\beta$ is reduced in disks (Draine, 2006). Some of the emission might come from optically thick regions, so the assumption used in deriving Eq. 2 is not valid. The chemical composition of the dust in disks can be very different from that of ISM dust. Grain growth can change the size distribution of the particles (Birnstiel et al., 2010). Finally, the opacity model might not be correct; e.g., the opacity of fractal structures can be quite different from that of non-fractal, spherical particles.
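A hedged numerical example of Eq. 2 (all input values — a 100 mJy flux at 1.3 mm, κ = 2 cm² g⁻¹ of dust, T = 20 K, d = 140 pc — are assumed, typical literature choices rather than thesis results):

```python
import math

def planck(nu, T):
    """Planck function B_nu(T) in cgs [erg s^-1 cm^-2 Hz^-1 sr^-1]."""
    h, k_B, c = 6.626e-27, 1.381e-16, 2.998e10
    return 2 * h * nu**3 / c**2 / (math.exp(h * nu / (k_B * T)) - 1.0)

pc, M_sun = 3.086e18, 1.989e33
nu = 2.998e10 / 0.13              # 1.3 mm converted to Hz
F_nu = 100e-26                    # 100 mJy in erg s^-1 cm^-2 Hz^-1
d, kappa, T = 140 * pc, 2.0, 20.0 # assumed distance, opacity, dust temperature

M_dust = F_nu * d**2 / (kappa * planck(nu, T))
msun = M_dust / M_sun
print(msun)                       # of order 1e-4 solar masses of dust
```

Using the full Planck function instead of the Rayleigh-Jeans form keeps the sketch valid even where hν/kT is not small; with a 1:100 dust-to-gas ratio this example would imply a gas disk of a few percent of a solar mass.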

Millimeter and sub-millimeter observations, similarly to IR observations, can be used to infer disk lifetimes. Using such measurements, Andrews & Williams (2005) obtained the same disk lifetime as inferred from IR measurements.

UV excess and magnetospheric emission lines can be used to determine the accretion rates of disks (Calvet et al., 2000). The typical accretion rate for young stars of around 1 Myr age is of order $10^{-8}\,M_\odot$ yr$^{-1}$. As the accretion rate decreases in time, one can infer from UV observations that the disk lifetime is also a few Myr (Calvet et al., 2000).

These disk lifetime results are compelling. The IR excess measures the ‘survival time’ of small dust grains around 1 AU, the sub-mm measurements are based on the presence of dust at large radii, while the accretion signature means that gas is still being transported from the gas disk onto the surface of the star. These three methods estimate the disk lifetime from entirely different disk material, yet they obtain the same characteristic timescale. These observations imply that disk dispersal happens across a wide range of radii in a relatively short time.

Line emission of molecular species provides a unique way to determine the physical conditions in disks. Using CO observations, the radial and vertical temperature profile of disks can be determined. The observation of different CO isotopologues revealed that the outer disk radius is smaller for $^{13}$CO and C$^{18}$O than for $^{12}$CO, which suggests that photodissociation is at work in the outer disk. The condensation of CO onto grain surfaces can also be observed. These results and the prospects of the Atacama Large Millimeter Array (ALMA) are summarized in Guilloteau & Dutrey (2008).

### 3 Dust experiments in the laboratory

An extensive overview of the available laboratory experiments is given in Chapter H and in Blum & Wurm (2008). Here we briefly review the general properties of these experiments, concentrating on the microphysics of aggregates.

Monomers are solid bodies that serve as building blocks of aggregates. Often these monomers are represented as spheres for simplicity, but in general they can have any shape, e.g., ellipsoids or irregular structures. If we assume that the monomers are electrically neutral and non-magnetic, they stick together via surface forces (dielectric van der Waals forces for silicates and molecular dipole interactions for ices). The strength of the bond between two monomers can be characterized by the contact force, which can be measured in the lab (Heim et al., 1999) and is given by

$F_{\rm c} = 3\pi \gamma R$ (3)

where $\gamma$ is the specific surface energy and $R$ is the local radius of surface curvature (for monomers of different sizes it is the reduced radius $R = r_1 r_2/(r_1 + r_2)$, where $r_1$ and $r_2$ are the radii of the monomers). Poppe et al. (2000) measured the maximum velocity for sticking by shooting monomers of different sizes onto a smooth surface at different velocities, and found that this threshold velocity is around 1 m/s for micron-sized silicates, and that it decreases with increasing monomer size.
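As an illustration, the contact force of Eq. (3) can be evaluated numerically. The sketch below uses an assumed, order-of-magnitude specific surface energy for silica and hypothetical monomer radii; it is meant to show the scale of the force, not to reproduce a specific measurement:

```python
import math

def contact_force(gamma, r1, r2):
    """Pull-off (contact) force F_c = 3*pi*gamma*R, with the reduced
    radius R = r1*r2/(r1 + r2) for two spheres of radii r1 and r2."""
    R = r1 * r2 / (r1 + r2)
    return 3.0 * math.pi * gamma * R

# Assumed values: gamma ~ 0.025 J/m^2 for silica, 0.5-micron monomers.
gamma_silica = 0.025                  # J/m^2 (assumed surface energy)
F_c = contact_force(gamma_silica, 0.5e-6, 0.5e-6)
print(f"contact force ~ {F_c * 1e9:.0f} nN")   # a few tens of nN
```

The resulting force, tens of nano-Newtons, is enormous compared to the weight of a micron-sized grain, which is why small aggregates are so hard to break apart once monomers are in contact.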

At intermediate relative velocities, the collision energy of the aggregates can be dissipated via restructuring and the two aggregates stick together. The most effective channel for restructuring is the rolling of two monomers (Dominik & Tielens, 1997; Wada et al., 2007, 2008; Paszun & Dominik, 2009). Heim et al. (1999) also measured the rolling energy between two monomers, which is the energy needed to roll a monomer by 90 degrees, and found that it linearly depends on the local surface curvature of the monomers.

If we want to break a contact by pulling the monomers apart, we have to pull the two monomers with a force that is higher than the contact force. The interesting feature of the contact area is that it can be stretched further apart such that the distance of the center of mass coordinates of the monomers can actually be larger than the sum of the radii. The elastic energy stored in this ‘neck’ is then transformed into elastic waves and slowly dissipates. Two monomers can also twist and slide, but these motions seem to be less important in collisions (Dominik & Tielens, 1997).

Most of these results were obtained using silicate spheres that are smaller than 1 micron in size, thus a simple shape and a mono-disperse size distribution were used. The picture is more complicated if we assume a size distribution of monomers (Dominik & Tielens, 1997) or irregular monomer shapes (Blum, 2004). Although we have a qualitative picture of how these effects change the particle properties, no exhaustive and general investigations have been made. There is also uncertainty regarding the properties of different monomer materials. It has not been studied experimentally how micrometer-sized ice monomers or monomers with organic mantles behave. Kouchi et al. (2002) performed experiments with millimeter-sized grains with organic mantles and found that the sticking threshold velocity was several orders of magnitude higher than for silicate grains of the same size. Frost mantles also increase the sticking threshold velocity, but to a lesser extent than organic mantles (Hatzes et al., 1991).

### 4 The importance of dust - theoretical considerations

The growth of the particles is governed by the Smoluchowski equation, which is introduced and described in Chapter D. As discussed earlier in this chapter, the particles collide and grow because they have a relative velocity in the gas disk. The strength of the relative velocity depends on the aerodynamical properties (the stopping time) of the aggregates. These properties, as well as the different sources of relative velocities are discussed in Chapters N.2 and T.3.

It is clear from the previous section that the observations of disks are strongly influenced by the properties of the dust. In this section we review what role the dust plays in various aspects of theoretical modeling of disks and planet formation.

Dust is the main source of opacity where the temperature is below the evaporation temperature of the dust (below 1500 K). For an individual dust particle the cross-section to radiation at a given wavelength will depend on the particle size, structure (fractal or non-fractal, spherical or more complex shape) and composition. The total opacity at a given location also depends on the particle size distribution. Semenov et al. (2003) describes how to calculate the Rosseland mean opacities for particles as a function of temperature, assuming that the particles follow a modified MRN size distribution (Pollack et al., 1985) and that these particles are homogeneous and spherical. It is also possible to calculate opacities of arbitrarily complex aggregates (see e.g. Shen et al. (2008); Min et al. (2007)). However, usually the uncertainty coming from the amount and the size of the dust particles is higher than the uncertainty in opacities. Therefore, in most cases approximate opacity formulae are useful.

The dust also affects the ionization state of the protoplanetary disk. The sources of ionization can be thermal (from the central star) and non-thermal (X-rays, cosmic rays, $^{26}$Al decay). Thermal ionization is effective only in the hot, innermost regions of the disk. X-rays and cosmic rays can ionize a surface layer of roughly 100 g cm$^{-2}$ column density on both sides of the disk. The decay of $^{26}$Al provides an ionization source that is present at every location in the disk. Dust affects the ionization in two ways: it acts as a charge carrier, and it provides surface where electrons and ions can recombine (Okuzumi, 2009).

Although the ionization fraction is almost negligible, it has a very important effect on the angular momentum redistribution in disks. To simulate viscous spreading, usually the description by Shakura & Sunyaev (1973) is used, in which the turbulent viscosity is parameterized. Although this model is widely used, it does not explain what the physical source of the turbulent viscosity is. Currently the best candidate mechanism is the magneto-rotational instability (MRI), first discussed in the context of disks by Balbus & Hawley (1991). The basis of the instability is that even low ionization fractions are enough to couple the magnetic field dynamically to the gas. If two fluid parcels orbit at different radii, the magnetic field acts as a spring between them. This force causes the inner parcel to be slowed down by the outer one, and the outer one to be sped up by the inner one. As the inner parcel loses energy its orbit shrinks, while the outer one gains energy and moves outward. During this process the distance, and thus the force, between the parcels increases further and the process runs away.

There can be a region in the disk (typically between 0.2 and 4 AU – D’Alessio et al. (1998)), where the ionization fraction is smaller than the minimum value required for the MRI. Therefore, Gammie (1996) proposed a layered accretion disk model, in which a ‘dead zone’ is present at the midplane of the disk. The dead zone has suitable properties for planet formation. A pressure bump is present at the edges of the dead zone, where dust particles can accumulate and planetesimals can form (e.g. Lyra et al. (2008); Dzyurkevich et al. (2010)).

Shear box simulations of MRI-driven turbulence showed that dust particles of decimeter in size can be efficiently concentrated in the turbulent eddies. This way planetesimals can be formed directly from decimeter sized bodies avoiding all size regimes in between (Johansen et al., 2007).

We restricted the discussion to the MRI as a possible driving mechanism for angular momentum redistribution within disks, but there are several other candidates, like the baroclinic instability (Klahr & Bodenheimer, 2003), shear instability (Dubrulle et al., 2005), gravitational spiral waves (Pickett et al., 2003), or global magnetic fields threading the disk (Stehle & Spruit, 2001).

Dust particles are also important for the chemistry in disks. From the chemical point of view, disks can be divided into three regions according to the height above the midplane: the photon-dominated layer, the warm molecular layer, and the midplane layer (Bergin, 2009). Different types of chemistry dominate in these layers: gas-phase chemistry in the hot and highly ionized photon-dominated layer, and gas-grain and grain-surface chemistry in the colder regions of the disk. We also have to distinguish between the inner and outer disk (Semenov et al., 2010). In the inner disk, chemical equilibrium is reached quickly. The material in the outer disk, however, can have much longer reaction timescales due to the low densities and temperatures, thus kinetic chemistry must be used to obtain the abundances of the different chemical species.

### 5 The outline of the thesis

This thesis investigates the initial growth of dust particles using a Monte Carlo (MC) model which includes a collision model that is based on the currently known laboratory experiments on dust aggregates. Chapter \thechapter describes the basic numerical method which is used in this thesis. We test the method by comparing the results against analytical solutions and highlight the strengths and weaknesses of the MC method.

Chapter \thechapter describes the laboratory based collision model. The assumptions, fitting formulas and parameters, and extrapolations that enter the model are discussed in detail.

Chapter \thechapter combines the results of the previous chapters by incorporating the collision model into the MC code. Local box simulations are performed at the midplane of three different disk models at 1 AU distance from a solar-mass star. We show the importance of bouncing collisions and how dust evolution is altered compared to previous, simpler collision models.

Chapter \thechapter is the extension of the previous 0D models. In this chapter we investigate the settling and growth of dust particles in a 1D vertical column of disks.

Finally, we review the future prospects of this field in Chapter \thechapter.

## Chapter \thechapter A representative particle approach to the coagulation of dust particles

Based on ‘A representative particle approach to coagulation and fragmentation of dust aggregates and fluid droplets’ by A. Zsom & C. P. Dullemond published in A&A, 489, 931.

## Appendix A Introduction

Dust particle aggregation is a very common process in various astrophysical settings. In protoplanetary disks the aggregation of dust particles forms the very initial step of planet formation (see e.g. Dominik et al. 2007). It also modifies the optical properties of the disk, and it influences the chemistry and free electron abundance in the disk (Sano et al. 2000; Semenov et al. 2004; Ilgner & Nelson 2006). The appearance and evolution of a protoplanetary disk is therefore critically affected by the dust aggregation process. In sub-stellar and planetary atmospheres the aggregation of dust particles and the coagulation of fluid droplets can affect the structure of cloud layers. It can therefore strongly affect the spectrum of these objects and influence the local conditions within these atmospheres. The process of aggregation/coagulation and the reverse process of fragmentation or cratering are therefore important to understand, but at the same time they are extremely complex.

Traditional methods solve the Smoluchowski equation for the particle mass distribution function $f(m)$, where $f(m)$ is defined such that $f(m)\,dm$ denotes the number of particles per cubic centimeter with masses in the interval $[m, m+dm]$. This kind of method has been used in many papers on dust coagulation before (e.g. Nakagawa et al. 1981; Weidenschilling 1984, 1997; Schmitt et al. 1997; Suttner & Yorke 1999; Tanaka et al. 2005; Dullemond & Dominik 2005; Nomura & Nakagawa 2006). Methods of this kind are efficient, but have many known problems. First of all, a coarse sampling of the particle mass leads to systematic errors such as the artificial acceleration of growth (Ohtsuki et al. 1990). High resolution is therefore required, which may make certain problems computationally expensive. Moreover, if one wishes to include additional properties of a particle, such as porosity, charge, composition etc., then each of these properties adds another dimension to the problem. If each of these dimensions is sampled properly, the problem quickly becomes prohibitively expensive computationally. Finally, the traditional methods are less well suited for modeling stochastic behavior of particles, unless this stochastic behavior can be treated in an averaged way. For instance, in protoplanetary disks, if the stopping time of a particle is roughly equal to the turbulent eddy turn-over time, then the velocity of the particle with respect to the gas is stochastic: at the same location there can exist two particles with identical properties which nevertheless have different velocities, because they entered the eddy from different directions (see e.g. the simulations by Johansen et al. 2006).
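For reference, the Smoluchowski coagulation equation for $f(m,t)$ has the following standard form (a textbook statement with $K$ the coagulation kernel, not copied from this thesis):

```latex
\frac{\partial f(m,t)}{\partial t}
  = \frac{1}{2}\int_0^{m} K(m',\, m-m')\, f(m',t)\, f(m-m',t)\,\mathrm{d}m'
  \;-\; f(m,t)\int_0^{\infty} K(m, m')\, f(m',t)\,\mathrm{d}m'
```

The first term counts coagulation events that produce a particle of mass $m$ (the factor $1/2$ avoids double counting), while the second counts events that remove particles of mass $m$ by merging them with others.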

To circumvent problems of this kind, Ormel et al. (2007) have presented a Monte Carlo approach to coagulation. In this approach the particles are treated as computational particles in a volume which is representative of a much larger volume. The simulation follows the life of particles as they collide and stick or fragment. The collision rates among these particles are computed, and by use of random numbers it is then determined which particle collides with which. The outcome of the collision is then determined depending on the properties of the two colliding particles and their relative impact velocity. This method, under ideal conditions, provides a true simulation of the process, except that random numbers are used in combination with collision rates to determine the next collision event. This method has many advantages over the traditional methods. It is nearly trivial to add any number of particle properties to each particle. There is less worry of systematic errors because it is so close to a true simulation of the system, and it is easy to implement. A disadvantage is that the number of computational particles goes down as the particles coagulate. Ormel et al. solve this problem by enlarging the volume of the simulation and hence adding new particles, but this means that the method is not very well suited for modeling coagulation within a spatially resolved setting, such as a hydrodynamic simulation or a model of a protoplanetary disk.

It is the purpose of this chapter to present an alternative Monte Carlo method which can quite naturally deal with extremely large numbers of particles, which keeps the number of computational particles constant throughout the simulation and which can be used in spatially resolved models.

## Appendix B The method

#### b.1 Fundamentals of the method

The fundamental principle underlying the method we present here is to follow the behavior of a limited number of representative particles, whose behavior is assumed to be a good representation of all particles. In this approach the number of physical particles $N$ can be arbitrarily large. In fact, it should be very much larger than the number of representative particles $n$, so that the chance that one representative particle collides with another representative particle is negligible compared to the chance of a collision between a representative particle and a non-representative particle. In other words, if $n \ll N$, we only need to consider collisions between a representative particle and a non-representative particle. The number of collisions among representative particles is too small to be significant, and the collisions among non-representative particles are not considered because we focus only on the behavior of the representative particles.

Suppose we have a cloud of dust with $N$ physical particles, with a specific size distribution, for instance MRN (Mathis, Rumpl & Nordsieck 1977). Let the total mass of all these particles together be $M_{\rm tot}$ and the volume be $V$. We randomly pick $n$ particles out of this pool, where $n \ll N$ is a number that can be handled by a computer. Each representative particle $i$ has its own mass $m_i$ and possibly other properties such as porosity or charge assigned to it. We now follow the life of each of these particles. To know if representative particle $i$ collides with some other object, we need to know the distribution function of all physical particles with which it can collide. However, in the computer we only have information about the representative particles. We therefore have to make the assumption that the distribution function set up by the representative particles is representative of that of the physical particles. We therefore assume that there exist

$n_k = \dfrac{M_{\rm tot}}{n\, m_k\, V}$ (4)

physical particles per cubic cm with mass $m_k$, porosity, charge etc., and the same for each value of $k$, including $k = i$. In this way, by assumption, we know the distribution of the physical particles from our limited set of representative particles. One could say that each representative particle represents a swarm of physical particles with identical properties as the representative one. One could also say that the true distribution of particles is, by assumption, that of the representative ones. The rate of collisions that representative particle $i$ has with a physical particle of swarm $k$ is then:

$r_{ik} = \sigma(m_i, m_k)\, \Delta v_{ik}\, n_k$ (5)

where $\sigma(m_i, m_k)$ is the cross-section for the collision between particles with properties $i$ and $k$, and $\Delta v_{ik}$ is the average relative velocity between these particles. The total rate of collisions that representative particle $i$ has with any particle is then:

$r_i = \sum_{k=1}^{n} r_{ik}$ (6)

and the total rate of collisions of any representative particle is

$r_{\rm tot} = \sum_{i=1}^{n} r_i$ (7)
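The quantities in Eqs. (4)–(7) map directly onto arrays. The sketch below assumes equal-mass swarms (each representative particle carries a swarm of total mass $M_{\rm tot}/n$) and uses a hypothetical kernel: geometric cross-sections of compact spheres and a constant relative velocity, chosen only for illustration:

```python
import numpy as np

def rate_matrix(m, M_tot, V, dv=1.0, rho=1000.0):
    """r[i, k]: rate at which representative particle i collides with
    physical particles of swarm k (Eq. 5), assuming compact spheres of
    material density rho and a constant relative velocity dv (assumed)."""
    n = len(m)
    n_k = M_tot / (n * m * V)                          # Eq. (4): swarm number densities
    a = (3.0 * m / (4.0 * np.pi * rho)) ** (1.0 / 3.0) # particle radii
    sigma = np.pi * (a[:, None] + a[None, :]) ** 2     # geometric cross-sections
    return sigma * dv * n_k[None, :]                   # Eq. (5)

m = np.array([1e-12, 1e-9, 1e-6])      # representative masses [kg] (toy values)
r = rate_matrix(m, M_tot=1.0, V=1.0)
r_i = r.sum(axis=1)                    # Eq. (6): per-particle total rates
r_tot = r_i.sum()                      # Eq. (7): grand total rate
```

Note the asymmetry of the matrix: `r[i, k]` differs from `r[k, i]` because the number densities $n_k$ of the two swarms differ, even though cross-section and relative velocity are symmetric.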

The time-evolution of the system is now computed as follows. Let $t$ be the current time. We randomly choose a time step $\delta t$ according to:

$\delta t = -\dfrac{\ln(\mathrm{ran(seed)})}{r_{\rm tot}}$ (8)

where ran(seed) is a random number uniformly distributed between 0 and 1. This means that a collision event happens with one of the representative particles at time $t + \delta t$. The chance that the event happens to representative particle $i$ is:

$P_i = \dfrac{r_i}{r_{\rm tot}}$ (9)

So we can choose, using again a random number, which representative particle $i$ has undergone the collision event. We now need to determine with what kind of physical particle it has collided. Since the distribution of physical particles mirrors that of the representative ones, the chance that particle $i$ has collided with a physical particle of swarm $k$ is:

$P_{k|i} = \dfrac{r_{ik}}{r_i}$ (10)

With another random number we can thus determine which $k$ is involved in the collision. Note that $k$ can be $i$ as well, i.e. the representative particle can collide with a physical particle with the same properties; in other words: a representative particle can collide with a particle of its own swarm of physical particles.
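Eqs. (8)–(10) amount to standard inverse-transform sampling of a Poisson process (a Gillespie-type scheme). A minimal sketch, assuming a precomputed rate matrix `r` with one row per representative particle:

```python
import numpy as np

rng = np.random.default_rng(42)

def next_event(r, rng):
    """Sample the waiting time to the next collision (Eq. 8), the
    representative particle i that it happens to (Eq. 9), and the
    swarm index k of its collision partner (Eq. 10)."""
    r_i = r.sum(axis=1)
    r_tot = r_i.sum()
    dt = -np.log(rng.random()) / r_tot            # Eq. (8)
    i = rng.choice(len(r_i), p=r_i / r_tot)       # Eq. (9)
    k = rng.choice(r.shape[1], p=r[i] / r_i[i])   # Eq. (10); k == i is allowed
    return dt, i, k

r = np.array([[0.1, 0.5],        # toy 2-swarm rate matrix [events / time]
              [0.5, 2.0]])
dt, i, k = next_event(r, rng)
```

Because the waiting times are exponentially distributed, the events form a Poisson process; this memorylessness is what later allows the implementation to discard the overshooting event at output times.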

Now that we know what kind of collision has happened, we need to determine its outcome. The most fundamental part of our algorithm is the fact that only representative particle $i$ will change its properties in this collision. Physical particle $k$ would in principle also do so (or in fact becomes part of the new representative particle), but since we do not follow the evolution of the physical particles, the collision will only modify the properties of representative particle $i$. By assumption this will then automatically also change the properties of all physical particles associated with representative particle $i$. Statistically, the fact that the particles of swarm $k$ are not modified is “corrected for” by the fact that at some later point representative particle $k$ will have a collision with a physical particle of swarm $i$, in which case the properties of particle $k$ will be modified and not those of $i$. This (at least in a statistical sense) restores the “symmetry” of the interactions between $i$ and $k$. If the collision leads to sticking, the resulting particle will have mass $m_i + m_k$. This means that representative particle $i$ will from now on have mass $m_i + m_k$. Representative particle $k$ is left unaffected, as it is not involved in the collision. The interesting thing is now that, because by assumption the representative particle distribution mirrors the real particle distribution, the swarm of physical particles belonging to the modified representative particle now contains fewer physical particles, because the total dust mass of the swarm remains constant.

If a collision results in fragmentation, then the outcome of the collision is a distribution of debris particles. This distribution function $f_{\rm deb}(m)$ can be written as a function of the debris particle mass $m$, normalized such that the total debris mass equals the colliding mass:

$\displaystyle\int_0^{m_i + m_k} f_{\rm deb}(m)\, m\, \mathrm{d}m = m_i + m_k$ (11)

and the function $f_{\rm deb}(m)$ has to be determined by laboratory experiments or detailed computer simulations of individual particle collisions (see Dominik et al. 2007 for a review). The new mass $m_i$ of the representative particle is now randomly chosen according to this distribution function by solving the equation

$\displaystyle\int_0^{m_{\rm new}} f_{\rm deb}(m)\, m\, \mathrm{d}m = \mathrm{ran(seed)}\,(m_i + m_k)$ (12)

for $m_{\rm new}$ and assigning $m_i = m_{\rm new}$. In other words: we randomly choose a particle mass from the debris mass distribution function, i.e. the choice is weighed by fragment mass, not by fragment number. This can be understood by assuming that the true representative particle before the collision is in fact just a monomer inside a larger aggregate. When this aggregate breaks apart into, for instance, one big and one small fragment, it is more likely that this representative monomer resides in the bigger chunk than in the smaller one.
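For simple debris distributions the inversion in Eq. (12) can be done in closed form; the sketch below instead uses a generic numerical inversion on a discretized mass grid. The power-law fragment spectrum is an assumed example, not a value taken from this thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_fragment_mass(m_grid, f_deb, rng):
    """Pick the new representative mass from the debris distribution,
    weighted by fragment MASS (Eq. 12), not by fragment number:
    invert the cumulative mass distribution at a uniform random number."""
    w = f_deb * m_grid              # mass per grid bin ~ m * f_deb(m)
    cdf = np.cumsum(w)
    cdf = cdf / cdf[-1]             # normalized cumulative mass fraction
    return m_grid[np.searchsorted(cdf, rng.random())]

m_tot = 1.0                          # total colliding mass m_i + m_k (toy unit)
m_grid = np.logspace(-6, 0, 200) * m_tot
f_deb = m_grid ** -1.83              # assumed power-law fragment spectrum
m_new = sample_fragment_mass(m_grid, f_deb, rng)
```

Sampling by mass rather than by number is the crucial choice: a number-weighted draw would almost always land on the smallest fragments and the representative particles would no longer carry the bulk of the mass.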

After a fragmenting collision the new mass $m_i$ will generally be smaller than before the collision. This means that the number of physical particles belonging to representative particle $i$ increases accordingly. Note that although the collision has perhaps produced millions of debris particles out of the two colliding objects, our method only picks one of these debris particles as the new representative particle and forgets all the rest. Clearly, if only one such destructive collision happens, the representative particle is not a good representation of the entire cloud of debris products. But if hundreds of such collisions happen, and are treated in the way described here, then the statistical nature of Eq. (12) ensures that the debris products are well represented by the representative particles.

The relative velocity can be taken to be the average relative velocity in case of random motions, or a systematic relative velocity in case of systematic drift. For instance, for Brownian motion there will be an average relative velocity depending on the masses of both particles, but differential sedimentation in a protoplanetary disk or planetary atmosphere generates a systematic relative velocity. Also, for the Brownian motion or turbulent relative velocity one can, instead of using an average relative velocity, choose randomly from the full distribution of possible relative velocities if this is known. This would allow a consistent treatment of fluctuations of the relative velocities which could under some circumstances become important (see e.g. Kostinski & Shaw 2005).

#### b.2 Computer implementation of the method

We implemented this method in the following way. For each of the $n$ representative particles we store the mass and all other properties such as porosity, charge, composition etc. Before the start of the Monte Carlo procedure we compute the full collision rate matrix $r_{ik}$, as well as the $r_i$ and $r_{\rm tot}$. For these collision pairs we have to determine the cross-sections of the particles as well as their systematic relative velocity, such as different drift speeds, and their random relative velocity, such as Brownian or turbulent motion. The random motions can be drawn with a random number from the relative velocity probability distribution function if that is known. If it is not known in sufficient detail, one can also take the average relative velocity, for which analytic formulae more often exist in the literature.

We determine beforehand at which times we want to write the resulting distribution function and other parameters to a file. The simulation is done in a subroutine with a do-while loop. We determine $\delta t$ using a random number (see Eq. 8), and check whether $t + \delta t < t_{\rm next}$, where $t_{\rm next}$ is the next time at which the results have to be stored. If $t + \delta t < t_{\rm next}$, then a collision event occurred before $t_{\rm next}$. We handle this event according to the procedure described below, set $t = t + \delta t$, and return to the point where a new $\delta t$ is randomly determined. If, on the other hand, $t + \delta t \ge t_{\rm next}$, then we stop the procedure, return to the main program and set $t = t_{\rm next}$. The main program can then write data to file and re-call the subroutine for the next time interval, or stop the simulation altogether. Note that when the subroutine is called again for the next time interval, it does not need to know the time of the previously determined event which exceeded $t_{\rm next}$. Of course, one could memorize this time and take it as the time of the next event in the next interval, but since the events follow a Poisson process, which is memoryless, we do not need to know what happened before to randomly determine the time of the next event.
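The loop structure described above can be sketched as follows. The event handler is a stand-in for the event-selection and collision steps; discarding the overshooting event is legitimate precisely because the process is memoryless:

```python
import numpy as np

rng = np.random.default_rng(0)

def advance_until(t, t_next, r_tot, handle_event, rng):
    """Advance the Monte Carlo clock to the next output time t_next,
    handling every collision event that falls before it.  The event
    that overshoots t_next is simply discarded: the exponential
    waiting times are memoryless, so no bias is introduced."""
    while True:
        dt = -np.log(rng.random()) / r_tot   # Eq. (8)
        if t + dt < t_next:
            t = t + dt
            handle_event()                   # choose i and k, do the collision
        else:
            return t_next                    # overshoot: stop at the output time

events = []
t_end = advance_until(0.0, 10.0, r_tot=1.0,
                      handle_event=lambda: events.append(1), rng=rng)
# with r_tot = 1 over 10 time units we expect of order 10 events
```

In a real run `r_tot` would be refreshed inside `handle_event` after every collision; it is held fixed here only to keep the sketch short.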

Now let us turn to what happens if a collision event occurs, i.e. $t + \delta t$ lies between time $t$ and $t_{\rm next}$. We then first determine which representative particle $i$ is hit, which is done by generating a random number and choosing $i$ from the probability distribution of collision rates, as described in Section B.1. Similarly we determine the non-representative particle with which it collides, or in other words: we determine the index $k$ of the “swarm” in which this non-representative particle resides. Finally, we must determine the impact parameter of the collision, or assume some average impact parameter.

Now we employ a model for the outcome of the collision. This is the collision subroutine of our Monte Carlo method. It is here where the results of laboratory experiments come in, and the translation of such experiments into a coagulation kernel is a major challenge which we do not cover here. The collision model must be a quick formula or subroutine that roughly represents the outcomes of the detailed laboratory collision experiments or detailed numerical collision models. It gives a probability distribution for the resulting particle masses and properties. From this distribution we pick one particle, and from this point on our representative particle $i$ attains this mass and these properties. The collision partner does not change, because it was a non-representative particle from swarm $k$ that was involved in the collision, and we do not follow the life of the non-representative particles. We therefore ignore any changes to that particle.

We now must update $r_{ik}$ for all $k$ with fixed $i$, and $r_{ki}$ for all $k$ with fixed $i$: we update a row and a column of the matrix. Having done this, we must also update $r_j$ for all $j$. Done naively, this would be an $\mathcal{O}(n^2)$ process, which is slow. But in updating the matrix element $r_{ji}$ we know the difference between the previous and the new value, and we can simply add this difference to $r_j$ for each $j$. Only for $j = i$ must we recompute the full sum $r_i$ again, because there all elements of that row have been modified. Using this procedure we assure that we limit the computational effort to only the required updates.
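The incremental bookkeeping above can be sketched as follows, with `r` the rate matrix and `r_i` the vector of per-particle totals as defined earlier (the new row and column are assumed to have been computed from the updated particle properties):

```python
import numpy as np

def update_rates(r, r_i, i, new_row, new_col):
    """After representative particle i has changed its properties,
    refresh row i and column i of the rate matrix and patch the
    per-particle totals r_j incrementally instead of re-summing
    the whole matrix (O(n) instead of O(n^2))."""
    diff_col = new_col - r[:, i]   # change of element (j, i) for every j
    r[i, :] = new_row
    r[:, i] = new_col
    r_i += diff_col                # each r_j changes only through r[j, i] ...
    r_i[i] = r[i, :].sum()         # ... except r_i itself: recompute fully
    return r, r_i

r = np.array([[1.0, 2.0],
              [3.0, 4.0]])
r_i = r.sum(axis=1)
# particle 0 changed; element (0, 0) must agree in new_row and new_col
r, r_i = update_rates(r, r_i, 0,
                      new_row=np.array([5.0, 6.0]),
                      new_col=np.array([5.0, 7.0]))
```

After the call, `r_i` again equals the row sums of `r`, which is exactly the invariant the incremental update is designed to preserve.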

#### b.3 Acceleration of the algorithm for wide size distributions

One of the main drawbacks of the basic algorithm described above is that it can be very slow for wide size distributions. Consider a swarm of micron-sized dust grains that are motionless and hence do not coagulate among each other. Then a swarm of meter-sized boulders moves through the dust swarm at a given speed, sweeping up the dust. Let us assume that the boulders are not colliding among each other either. The only mode of growth is the meter-sized boulders sweeping up the micron-sized dust. For a boulder to grow a factor of 2 in mass, it has to sweep up about $10^{18}$ micron-sized dust particles. Each impact contributes to the growth of the boulder, but one needs $\sim 10^{18}$ such hits to grow the boulder a factor of 2 in mass. The problem with the basic algorithm described above is that it is forced to explicitly model each one of these impacts. This is obviously prohibitively expensive.

The solution to this problem lies in grouping many collisions into one. Each impact of a dust grain on a boulder increases the boulder mass only by a minuscule fraction. For the growth of the boulder it would be equally fine to lower the chance of an impact by some factor $N_{\rm group}$, but then let $N_{\rm group}$ particles impact onto the boulder at once. Statistically this gives the same growth curve, and it accelerates the method by a huge factor. However, it introduces a fine-tuning parameter: we must specify the minimum relative increase of mass per (grouped) coagulation event. If we set this parameter to, for instance, 10%, then we may expect that the outcome also has errors of the order of 10%. This error arises because, by increasing the mass of the bigger body in steps of 10%, we ignore the fact that the mass at some time should in fact be somewhere in between, which cannot be resolved with this method. This is, however, not a cumulative error. While the mass of the bigger body may sometimes be too low compared to the real one, it can equally probably be too large. On average, by the Poisson nature of the collision events, this averages out. It is clear that the smaller we take this parameter, the more accurate the method becomes – but also the slower. It is therefore always a delicate matter to choose this parameter, but for problems with a wide size distribution this acceleration is of vital importance for the usability of the method.

#### b.4 Including additional particle properties

We mentioned briefly the possibility of adding more particle properties to each representative particle. This is very easy to do, and it is one of the main advantages of a Monte Carlo method over methods that directly solve the integral equations for coagulation. One of the main properties of interest to planet formation is porosity or fractal structure of the aggregate. Two aggregates with the same mass can have vastly different behavior upon a collision if they have different compactness. A fluffy aggregate may break apart already at low impact velocities while a compact aggregate may simply bounce. Upon collisions these properties may in fact also change. Ormel et al. (2007) studied the effect of porosity and how it changes over time, and they also used a Monte Carlo approach for it.

If one wishes to include particle properties in a traditional method which solves the integral equations of coagulation (the Smoluchowski equation), then one increases the dimensionality of the problem by one for each property one adds. With only particle mass one has a distribution function $f(m,t)$, while adding two particle properties $p$ and $q$ means we get a distribution function $f(m,p,q,t)$, making it a 4-dimensional problem. Methods of this kind must treat the complete phase space spanned by $(m,p,q)$. This is of course possible, but computationally it is a very challenging task (see Ossenkopf 1993). In contrast, a Monte Carlo method only sparsely samples phase space, and it samples it only where a significant portion of the total dust mass is. A Monte Carlo method automatically focuses its computational effort where the action is. The drawback is that if one is interested in knowing the distribution function where only very little mass resides, the method is inaccurate. For instance, in a protoplanetary disk it could very well be that most of the dust mass is locked up in big bodies (larger than 1 meter) which are not observable, and only a per-mille fraction of the dust is in small grains, but these small grains determine the infrared appearance of the disk because they have most of the solid surface area and hence most of the opacity. In such a case a Monte Carlo method, by focusing on where most of the mass is, will have very bad statistics for those dust grains that determine the appearance of the disk. For such goals it is better to use the traditional methods. But if we are interested in following the evolution of the dominant portion of the dust, then Monte Carlo methods naturally focus on the interesting parts of phase space.

## Appendix C Discussion of the method

#### c.1 Conservation of particle number

There are a few peculiarities of the method described here that may, at first sight, appear inconsistent, but are statistically correct. For instance, if we return to our example of a swarm of tiny particles and a swarm of boulders, i.e. with representative particle 1 being a micron-sized particle and representative particle 2 being a meter-sized particle, then we encounter an apparent paradox. We again assume that collisions only take place between 1 and 2, but not between 1 and 1 or 2 and 2. The chance that representative particle 1 hits a meter-sized particle is much smaller than the chance that representative particle 2 hits a micron-sized particle. What will happen is that representative particle 2 will have very many collision events with small micron-sized grains, and thereby slowly and gradually grows bigger, while representative particle 1 will only have a collision with a big particle after a quite long time and then immediately jumps to that big size. While representative particle 2 grows in mass, the number of big physical particles it represents decreases in order to conserve mass. This may seem wrong, because in reality the number of big boulders stays constant, and these boulders simply grow by sweeping up the small dust. The solution to this paradox is that the average time before representative particle 1 hits a big particle is of the same order as the time it takes for representative particle 2 to grow to twice its mass by collecting small particles. So, very roughly, by the time the big particle has doubled its mass, and therefore the number of physical particles belonging to representative particle 2 has been reduced by 50%, representative particle 1 has turned into a big particle, corresponding, statistically, to the other 50% of big particles that was missing. To be more precise, the statistics do not add up exactly in this way if we have only one swarm of small and one swarm of big bodies.
If, however, one has N swarms of small and N swarms of big bodies, and we again assume that only the big bodies can sweep up the smaller ones, then for N → ∞ the statistics add up perfectly: one finds that after all the growth has taken place, the average mass of the bodies is twice that of the original big bodies. In Figure 1 we do precisely this experiment, and the left panel shows that the mass distribution of the big bodies averages to the right value, albeit with a spread of 10% FWHM, while in reality this spread should be 0. The right panel shows how the average final mass depends on N. For small N the statistics clearly do not add up, but for large N they do, and produce the right value (a final mass twice the initial mass of the big bodies). So statistically the number of big particles is restored to the correct value, but there is then unfortunately still a large statistical noise on it. The particle number is therefore not exactly conserved in our method, but statistically it is.
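The sweep-up experiment described above can be sketched in a few lines (a toy version, not the production code): N swarms of small and N swarms of big bodies, all swarms of equal total mass, with collisions allowed only between small and big. The representative-particle bookkeeping is exactly as in the text: a big representative grows gradually by eating small physical particles, while a small representative jumps to a big mass in one event. For large N the mean final physical mass should come out close to twice the initial big-body mass.

```python
import random

random.seed(42)

def sweep_up_experiment(n_swarms, m_small=1.0, m_big=64.0):
    """N swarms of small and N swarms of big bodies, each swarm of equal
    total mass M; only small-big collisions are allowed. The collision
    rate with swarm k is proportional to its particle count n_k = M/m_k.
    Returns the mean final physical mass in units of the initial big mass."""
    M = 1.0
    masses = [m_small] * n_swarms + [m_big] * n_swarms
    thresh = m_big / 2.0                       # above this a body counts as "big"
    while any(m <= thresh for m in masses):
        while True:                            # rejection-sample a small-big pair
            i = random.randrange(len(masses))  # representative particle
            k = random.choices(range(len(masses)),
                               weights=[M / m for m in masses])[0]
            if (masses[i] <= thresh) != (masses[k] <= thresh):
                break
        masses[i] += masses[k]  # rep. i swallows one physical particle of swarm k
    n_physical = sum(M / m for m in masses)
    return (len(masses) * M / n_physical) / m_big

print(sweep_up_experiment(128))  # -> close to 2 for large N, with some scatter
```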

#### c.2 The number of representative particles

It is obvious that for a high number of representative particles N we will get better results than for a low N. But there are two issues here. First of all, the higher N, the better the representative particles represent the true physical distribution of particles. For problems that result in wide size distributions this is all the more crucial. An inaccurate representation of the true size distribution could lead to systematic errors. But another reason for taking a high N is simply that we want our end result to have as little noise as possible. If the result is too noisy, then it is useless. Taking N too big, however, makes the code slow, because more representative particles have to be followed, and for each of these particles we must check a larger number of possible collision partners. The problem therefore scales as N². If the expected size distributions are not too wide, one can use an intermediately large number of representative particles n < N for the simulation, but redo the simulation k = N/n times and average the results of all simulations. This approach was also used by Ormel et al. (2007). This gives the same amount of noise on the end result, but scales as k n² = N²/k, which is k times faster than the N² scaling. This works, however, only if the coagulation/fragmentation kernel is not too sensitive to the exact distribution of collision partners.
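The run-and-average trick relies on a standard statistical fact: averaging k independent estimates, each from n = N/k particles, has the same variance as one estimate from N particles, while the per-run n² cost drops by a factor k². A toy demonstration (with a stand-in random "run", not a real coagulation simulation):

```python
import random
import statistics

random.seed(1)

def noisy_estimate(n):
    """Toy stand-in for one Monte Carlo run with n representative particles:
    estimate the mean of an exponential distribution from n samples."""
    return statistics.fmean(random.expovariate(1.0) for _ in range(n))

N, k = 10_000, 10
n = N // k
one_big_run = [noisy_estimate(N) for _ in range(200)]
k_small_runs = [statistics.fmean(noisy_estimate(n) for _ in range(k))
                for _ in range(200)]
# the scatter of the two estimators is statistically the same
print(statistics.stdev(one_big_run), statistics.stdev(k_small_runs))
```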

Interestingly, if the kernel is very insensitive to the exact distribution of collision partners, then, in principle, one could run the model with only a single representative particle, because the collision partner of a representative particle could be a member of its own swarm. Of course, a single representative particle means that we assume that all physical particles have the same size, or in other words: that we have an infinitely narrow size distribution.

To decide on a sufficient number of representative particles, one has to compare the results of the MC code with the analytical solutions of the three test kernels (see Section D). At any given time of the simulation, the mean mass and the shape of the distribution function must be followed accurately for all three test kernels. It is especially important to reproduce the linear and product kernels accurately, as the realistic kernels of dust particles are similar to these.

Of course the progression from a ’not sufficient’ to a ’sufficient’ number of representative particles is smooth, and in general the more representative particles we use, the more accurate the result will be. The sufficient numbers of representative particles given in Section D are only suggestions; the errors of the distribution functions were not quantified.

#### c.3 Limitations of the method

One of the fundamental limitations of the method described here is that we assume the number of physical particles to be much larger than the number of representative particles. We can model the growth of particles by coagulation in a protoplanetary disk or in a cloud in a planetary atmosphere, but we cannot follow the growth to the point where individual large bodies start to dominate their surroundings. For instance, if we wish to follow the growth of dust in a protoplanetary disk all the way to small planets, then the method breaks down, because the number of physical particles is then no longer much bigger than the number of representative particles, and interactions among representative particles become likely. Also, for the same reason, run-away growth problems such as electrostatic gelation (Mokler 2007) cannot be modeled with this method.

Another limitation is encountered when modeling problems with strong growth and fragmentation happening at the same time. This leads to very wide size distributions, and the typical interval between events is then dominated by the smallest particles, whereas we may be interested primarily in the growth of the biggest particles. In such a situation a semi-steady state can be reached in which particles coagulate and fragment thousands of times over the lifetime of a disk. The Monte Carlo method has to follow each of these thousands of cycles of growth and destruction, which makes the problem very “stiff”. Methods using the integral form of the equations, i.e. the Smoluchowski equation, can be programmed using implicit integration in time, so that time steps can be taken which are much larger than the typical timescale of one growth-fragmentation cycle without loss of accuracy (Brauer et al. 2008). This is not possible with a Monte Carlo method.

## Appendix D Standard tests and results

In this section we test our coagulation model with kernels that have analytical solutions. Furthermore, we show the first results of applying this model to protoplanetary disks, introducing Brownian motion and turbulence-induced relative velocities as well as a new property of dust particles, namely the porosity (or enlargement factor, see Ormel et al. 2007), and a simple fragmentation model.

To follow dust coagulation and fragmentation, one has to follow the time evolution of the particle distribution function f(x, t) at a given location in the disk, where x contains the modeled properties of the dust grains; in our case these will be the mass (m) and the enlargement factor (Ψ), i.e. x = (m, Ψ).

In most coagulation models so far, the only dust property used was the particle mass. Then one can use the so-called Smoluchowski equation (Smoluchowski 1916) to describe the time evolution of f(m):

$\frac{\partial f(m)}{\partial t} = -f(m)\int_0^{\infty} K(m,m')\,f(m')\,\mathrm{d}m' + \frac{1}{2}\int_0^{m} K(m',m-m')\,f(m')\,f(m-m')\,\mathrm{d}m'$ (13)

The first term on the right-hand side represents the loss of dust from mass bin m by coagulation of a particle of mass m with a particle of mass m′. The second term represents the gain of dust matter in mass bin m by coagulation of two grains of masses m′ and m − m′. K(m, m′) is the coagulation kernel; it can be written as

$K(m_1, m_2) = \sigma(m_1, m_2)\,\Delta v(m_1, m_2),$ (14)

the product of the cross-section of the two particles and their relative velocity. We consider all three kernels for which analytical solutions exist: the constant kernel (K = const.), the linear kernel (K ∝ m₁ + m₂) and the product kernel (K ∝ m₁m₂). The analytical solutions are described e.g. in Ohtsuki et al. (1990) and Wetherill (1990).

We test our method against these three kernels, leaving the enlargement factor unchanged, always unity. Further important properties of the dust particles, such as material density and volume density, are also always unity. The (dimensionless) time evolution of the swarms is followed, and at given times the particles are binned by mass so that we can produce the distribution function. In Figures 2 and 3 the y axis shows the mass density per bin. The analytical solutions, taken from Ohtsuki et al. (1990) and Wetherill (1990), are overplotted with dashed lines. Altogether 1000 representative particles were used in the model, except for the product kernel, where more representative particles were used to achieve better results.

In the case of the constant kernel (Figure 2), we started our simulation with an MRN size distribution (n(a) ∝ a⁻³·⁵; Mathis et al. 1977), and the results were saved at several (dimensionless) times. It is interesting to note that this kernel is not sensitive to the initial size distribution: as the system evolves, it forgets the initial conditions. Another interesting property of this kernel is that our model can reproduce the analytical solution even with a very limited number of representative particles, though of course with higher noise. It is even possible to use only one representative particle, which means that the representative particle collides with particles from its own swarm, which basically results in pure CCA growth (Cluster-Cluster Aggregation). Interestingly, the mean mass of the distribution function is then still followed correctly, but the shape of the function changes: additional spikes appear on it.
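As an independent illustration of such a test (a direct stochastic, Gillespie-type coagulation run, not the representative-particle scheme itself), the constant kernel can be simulated and compared with its analytic behavior: with K = 2/n₀ the expected particle number is N(t) = n₀/(1 + t), so the mean mass grows linearly in time.

```python
import random

random.seed(3)

def constant_kernel_run(n0=2000, t_end=9.0):
    """Direct stochastic coagulation with the constant kernel K = 2/n0,
    for which the expected particle number is N(t) = n0 / (1 + t)."""
    K = 2.0 / n0
    masses = [1.0] * n0
    t = 0.0
    while len(masses) > 1:
        n = len(masses)
        # total coagulation rate of n(n-1)/2 pairs, exponential waiting time
        t += random.expovariate(K * n * (n - 1) / 2.0)
        if t > t_end:
            break
        i, j = random.sample(range(n), 2)  # pick a random pair, merge it
        masses[i] += masses[j]
        del masses[j]
    return masses

m = constant_kernel_run()
print(len(m), sum(m) / len(m))  # ~200 particles left, mean mass ~10
```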

The linear kernel is known to be more problematic because the mean mass of the particles grows exponentially with time. Our model, however, reproduces this kernel very well too, as can be seen in Figure 3. We note that using a low number of representative particles also works relatively well with this kernel, although the minimum number of swarms needed to reproduce the exponential time evolution of the mean mass is larger than for the constant kernel. This shows that for the linear kernel collisions between particles of unequal mass contribute significantly to the growth, whereas for the constant kernel the growth is dominated by collisions between roughly equal-sized particles. Using too few swarms results in a distorted distribution function: neither the mean mass nor the actual shape of the distribution function is correct.

The product kernel is the hardest to reproduce. The peculiarity of this kernel is the following: using dimensionless units, a ’run-away’ particle is produced at a finite time, which collects all the other particles present in the simulation (Wetherill 1990). The difficulty arises in our Monte Carlo code when the mass of the representative ’run-away’ particle reaches the mass of its swarm; in other words, when the number of physical particles belonging to the representative ’run-away’ particle is close to unity. In this case the original assumption of our method (that we only need to consider collisions between a representative particle and a physical particle) is not valid anymore. However, as Figure 4 shows, we can reproduce this kernel relatively well before the run-away particle forms, although it requires more representative particles than the other two kernels.

The required CPU time for these test cases is very low, only a few seconds.

We conclude that our Monte Carlo method reproduces the constant and linear test kernels without any problem, even with a low number of representative particles. On the other hand, the method has difficulties with the product kernel, but before the formation of the ’run-away’ particle we can reproduce this kernel as well. The relatively low number of representative particles needed to sufficiently reproduce the test kernels is very important for future applications, where whole-disk simulations will be performed and there will likely be regions containing low numbers of particles.

## Appendix E Applications to protoplanetary disks

We use the Monte Carlo code to follow the coagulation and fragmentation of dust particles in the midplane of a protoplanetary disk at 1 AU from the central star. Our disk model is identical to the one used by Brauer et al. (2007). We proceed step by step. First, relative velocities induced by Brownian motion and turbulence are included, without the effects of porosity (Sec. E.1).

The next step is to include a fragmentation model (Sec. E.2).

In the final step porosity is included (Sec. E.3). We use the porosity model described in Ormel et al. (2007). At this point we compare and check again our code with Ormel et al. (2007) using their input parameters but not including the rain out of particles.

#### e.1 Relative velocities

We include two processes in calculating the relative velocities: Brownian motion and turbulence.

Brownian motion strongly depends on the masses of the two colliding particles: the smaller their masses, the more they are influenced by random collisions with the gas molecules/atoms. One can calculate an average relative velocity given by

$\Delta v_B = \sqrt{\frac{8 k_B T (m_1 + m_2)}{\pi m_1 m_2}}.$ (15)

For micron-sized particles, this relative velocity can be of the order of 1 cm/s, but for cm-sized particles it drops by many orders of magnitude. If growth is only governed by Brownian motion, it leads to very slow coagulation, a narrow size distribution and fluffy dust particles, so-called cluster-cluster aggregates (CCA).
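These orders of magnitude are easy to verify from Eq. 15. The numbers below assume a material density of 2 g/cm³ and T = 200 K; both are illustrative choices, not the exact simulation parameters.

```python
import math

K_B = 1.380649e-16  # Boltzmann constant [erg/K]

def brownian_dv(a1, a2, T=200.0, rho=2.0):
    """Mean Brownian relative velocity [cm/s] of two spheres with radii
    a1, a2 [cm]; T in K, material density rho in g/cm^3 (assumed values)."""
    m1 = 4.0 / 3.0 * math.pi * a1**3 * rho
    m2 = 4.0 / 3.0 * math.pi * a2**3 * rho
    return math.sqrt(8.0 * K_B * T * (m1 + m2) / (math.pi * m1 * m2))

print(brownian_dv(1e-4, 1e-4))  # micron-sized: a fraction of a cm/s
print(brownian_dv(1.0, 1.0))    # cm-sized: vanishingly small
```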

The gas in the circumstellar disk is turbulent, thus the dust particles experience acceleration from eddies with different sizes and turnover times. This process is very complex, but Ormel and Cuzzi (2007) provided limiting closed-form expressions for the average relative turbulent velocity between two dust particles. Their results are also valid for particles with high Stokes numbers. They distinguished three regimes: a.) the stopping times of both dust particles are smaller than the smallest eddy turnover time (tightly coupled particles); b.) the stopping time lies between the smallest and largest turnover times (intermediate regime); c.) the stopping time is bigger than the largest turnover time (heavy particles). For details see Ormel and Cuzzi (2007). The value of the turbulence parameter α used is specified below.

To illustrate the relative velocities of the dust particles without the effects of porosity, we provide Figure 5. This contour plot includes Brownian motion and turbulent relative velocities. Brownian motion is negligible for the larger particles.

#### e.2 Fragmentation model

The collision energy of the particles is

$E = \frac{1}{2}\mu (\Delta v)^2,$ (16)

where μ = m₁m₂/(m₁ + m₂) is the reduced mass. We need to define some quantities of the dust particles. E_roll is the rolling energy of two monomers. For monomers of the same size it is given by (Dominik & Tielens 1997; Blum & Wurm 2000)

$E_{\mathrm{roll}} = \frac{\pi}{2}\, a_0 F_{\mathrm{roll}},$ (17)

where a₀ is the monomer radius and F_roll is the rolling force measured by Heim et al. (1999). Its value is (8.5 ± 1.6) × 10⁻⁵ dyn for SiO₂ spheres.

The fragmentation energy is then defined as follows:

$E_{\mathrm{frag}} = N_c\, E_{\mathrm{br}},$ (18)

where N_c is the total number of contact surfaces between monomers (for simplicity it is taken to be 3N, where N is the number of monomers in the particle), and E_br is the energy needed to break the bond between two monomers (its order of magnitude is similar to that of E_roll for these parameters).

If the collision energy of two particles is higher than the corresponding fragmentation energy, the aggregates are destroyed and monomers are produced. Note that although we assume complete destruction of the colliding dust particles, we are actually interested in the critical energy at which the first fragmentation event happens. This is the reason why the fragmentation energy is assumed to be lower than the energy needed for catastrophic fragmentation. It is a simplification of the model to assume that the debris particles are monomers. This very simplified fragmentation model was used previously by Dullemond & Dominik (2005); a more realistic model would be the one used by Brauer et al. (2007).

We show the fragmentation barrier in Figure 5 with black lines. Collisions that happen in the regime between these two lines result in fragmentation.

**Results** A simulation was made including these effects at a specified location of the disk. We choose the location to be at 1 AU distance from the central solar-type star. Using the disk model of Brauer et al. (2007), the temperature at this distance is approximately 200 K and the gas-to-dust ratio is 100; the gas density, the turbulence parameter α and the Reynolds number are taken accordingly (based on Ormel & Cuzzi 2007). The dust monomers have a fixed monomer radius a₀ and material density ρ. With the parameters used, the fragmentation velocity is of the order of meters per second, though it is somewhat larger for equal-sized agglomerates. It is important to note that this value is very sensitive to the monomer radius (a₀) and material density (ρ), because smaller/lighter monomers mean more contact surfaces (a higher N for the same mass) and therefore a higher fragmentation energy.
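This sensitivity can be made explicit with a back-of-the-envelope estimate. For two equal aggregates of N monomers each, setting the collision energy equal to the fragmentation energy gives a threshold velocity that is independent of the aggregate mass and scales as 1/a₀. The sketch below assumes E_br ≈ E_roll with E_roll = (π/2)a₀F_roll; the monomer radius and density in the calls are illustrative, not the exact values used in the simulation.

```python
import math

def v_frag(a0, rho, f_roll=8.5e-5):
    """Fragmentation velocity [cm/s] for two equal aggregates, assuming
    E_frag = 3N * E_br with E_br ~ E_roll = (pi/2) a0 F_roll (illustrative).
    a0: monomer radius [cm], rho: material density [g/cm^3],
    f_roll: rolling force [dyn]."""
    m0 = 4.0 / 3.0 * math.pi * a0**3 * rho  # monomer mass
    e_br = math.pi / 2.0 * a0 * f_roll      # bond-breaking ~ rolling energy
    # (1/4) m v^2 = 3 (m/m0) E_br  ->  v^2 = 12 E_br / m0, independent of m
    return math.sqrt(12.0 * e_br / m0)

print(v_frag(0.5e-4, 2.65))   # of order a few m/s (value in cm/s)
print(v_frag(0.25e-4, 2.65))  # halving a0 doubles the threshold
```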

Using these input parameters, we simulated the evolution of the dust particles long enough to reach an equilibrium between coagulation and fragmentation. Figure 6 shows the resulting normalized size distributions at several subsequent times, obtained by averaging over several realizations. The required CPU time to perform this simulation is approximately 1.5 hours. The numerical parameter introduced in Section B.3 is set to 0.001 from now on in every simulation; we note that giving it a higher value would decrease the CPU time.

One can see that coagulation happens due to Brownian motion at the beginning of the simulation, but after that turbulence takes over, and the first fragmentation event happens after roughly 1000 years. After this event the ”recycled” monomers start to grow again, but as we see in Figure 5, particles cannot reach sizes bigger than 0.07 cm.

We would like to draw attention to the sudden decrease in the number of particles around 0.002 cm in Figure 6. This is a result of the turbulent relative velocity model used here (discussed in Sec. E.1). At this point the particles leave the ’tightly coupled particles’ regime and enter the ’intermediate’ regime. The transition in relative velocity between these regimes is not smooth: there is a jump in relative velocity from 20 cm/s to 60 cm/s. As a result, particles suddenly coagulate faster and leave this part of the size distribution rapidly. Similar ’valleys’ can be seen in the following figures with porosity, but the feature is less distinct there, as the stopping times can differ for particles with the same mass.

#### e.3 Porosity

To be able to quantitatively discuss the effect of porosity, we have to define the enlargement parameter, following the discussion of Ormel et al. (2007). If V is the extended volume of the grain and V* is the compact volume, then one can define the enlargement parameter (Ψ) as

$\Psi = \frac{V}{V^*}.$ (19)

The compact volume is the volume occupied by the monomers without taking into account the free space between the monomer spheres. One can think of it as melting all the monomers into a single sphere; the volume of this sphere is the compact volume. We use the compact radius later on, which is the radius of this sphere. In the previous section the mass/volume ratio was constant for the particles, so we could automatically calculate the mass of a particle if its radius was known, or vice versa. From now on, however, a particle of given mass can have a wide range of effective radii, depending on its enlargement parameter.
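The conversion between mass, compact radius and porous radius follows directly from the definition of Ψ. A small helper (with an assumed material density; illustrative, not the thesis code):

```python
import math

def radii(m, psi, rho=2.65):
    """Compact radius and extended (porous) radius [cm] of a grain of mass
    m [g] with enlargement parameter psi >= 1; rho [g/cm^3] is an assumed
    material density. Since V = psi * V*, the radii obey a = a* psi^(1/3)."""
    a_compact = (3.0 * m / (4.0 * math.pi * rho)) ** (1.0 / 3.0)
    return a_compact, a_compact * psi ** (1.0 / 3.0)

m0 = 4.0 / 3.0 * math.pi * 0.1**3 * 2.65  # mass of a compact 0.1 cm sphere
print(radii(m0, 8.0))  # compact radius 0.1 cm, extended radius 0.2 cm
```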

It is essential to know how the enlargement parameter changes upon collisions. We have to refine our fragmentation model and introduce two more regimes with respect to collision energy. We use the model of Ormel et al. (2007) and only summarize it here.

The first regime is the low collision energy regime, where the collision energy is smaller than the restructuring energy (E < E_restr, where E_restr = 5E_roll), meaning that the particles stick where they meet and the internal structure of the grains does not change.

The recipe for the resulting enlargement factor after the collision of two particles, assuming that m₁ ≥ m₂, is then

$\Psi = \langle\Psi\rangle_m \left(\frac{m_1 + m_2}{m_1}\right)^{3/\delta_{\mathrm{CCA}} - 1} + \Psi_{\mathrm{add}},$ (20)

where ⟨Ψ⟩_m is the mass-averaged enlargement factor of the colliding particles:

$\langle\Psi\rangle_m = \frac{m_1\Psi_1 + m_2\Psi_2}{m_1 + m_2}.$ (21)

Furthermore, δ_CCA is the CCA characteristic exponent calculated by detailed numerical studies such as Paszun & Dominik (2006), and Ψ_add is a necessary additional factor for the enlargement factor; its explicit form, which involves the monomer mass m₀, is given in Ormel et al. (2007).

The second regime is the regime of compaction: the arrangement of the monomers inside the particle changes, which decreases the porous volume. If the collision energy exceeds the restructuring energy (E > E_restr), we talk about compaction. In this case the enlargement factor after the collision becomes

$\Psi = \langle\Psi\rangle_m \left(1 - f_C\right),$ (23)

where f_C is the relative compaction. One can see that f_C has to be smaller than unity, otherwise Ψ in Eq. 23 becomes less than unity. It can, however, theoretically happen that f_C > 1. In this case, as long as the total collision energy remains below the fragmentation threshold, we assume that after compaction the excess energy goes back into the kinetic energy of the two colliding aggregates. The two aggregates therefore compactify and bounce, without exchanging mass or being destroyed. Bouncing is therefore included in this model, albeit in a crude way.

The third regime is fragmentation, as discussed in the previous section (E > E_frag). We use the same fragmentation model as before, so the result of a fragmenting collision is a set of monomers.
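The three regimes can be summarized as a simple decision rule. The sketch below follows the thresholds used in the text (a restructuring threshold of 5E_roll, following Dominik & Tielens 1997, and E_frag = N_c E_br); it is an illustration of the logic, not the thesis code.

```python
def collision_outcome(e_coll, e_roll, n_contacts, e_br):
    """Classify a collision by energy (sketch of the regime logic):
    e_coll: collision energy, e_roll: rolling energy,
    n_contacts: number of monomer contacts N_c, e_br: bond-breaking energy."""
    e_restr = 5.0 * e_roll       # restructuring threshold
    e_frag = n_contacts * e_br   # fragmentation threshold
    if e_coll < e_restr:
        return "hit-and-stick"          # internal structure unchanged
    if e_coll < e_frag:
        return "compaction/bouncing"    # porosity decreases; excess energy bounces
    return "fragmentation"              # aggregate destroyed into monomers

print(collision_outcome(1.0, 1.0, 30, 1.0))   # hit-and-stick
print(collision_outcome(10.0, 1.0, 30, 1.0))  # compaction/bouncing
print(collision_outcome(50.0, 1.0, 30, 1.0))  # fragmentation
```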

**Results** We performed a simulation with exactly the same initial conditions as in the last section, but we included porosity as an additional dust property in the model. The result can be seen in Figure 7 (the required CPU time here is also approximately 1.5 hours). One can immediately see that including porosity increases the maximum particle mass by two orders of magnitude (particles five times larger in radius). This was already expected based on the work of Ormel et al. (2007), although, due to the rain-out of bigger particles, they did not simulate particles bigger than 0.1 cm.

We provide Figure 8 to give an impression of how the porosity of the agglomerates changes during the simulation. The x axis is the compact radius of the particles, the y axis is the ratio between the compact and the porous radii; this quantity is basically equal to Ψ⁻¹ᐟ³. Fractal growth is important for small particles, creating fluffy agglomerates; for larger particles the relative velocities become high enough that compaction becomes important. Before the particles reach a fully compacted stage they fragment into monomers, and a new cycle of growth starts. It is important to note that the porosity of the aggregates before the first fragmentation event is usually higher than the porosity values after equilibrium is reached. This can be seen in Figure 8 (grains after 400 years and 3000 years). The reason is that before the first fragmentation event, the particles involved in collisions are typically of equal size, and such particles produce fluffy structures. However, when the distribution function relaxes into equilibrium, there are collisions between smaller and bigger aggregates as well, which results in somewhat compacted aggregates.

**Model comparison with Ormel et al. (2007)** We compare our Monte Carlo code with the one developed by Ormel et al. (2007). They use the Minimum Mass Solar Nebula (MMSN) disk model and somewhat different dust parameters, which we changed accordingly (distance from the central star = 1 AU, temperature = 280 K, gas density = 8.5 × 10⁻¹⁰ g/cm³, gas-to-dust ratio = 240, α = 10⁻⁴; monomer radius = 0.1 μm, monomer material density = 3 g/cm³, surface energy density of the monomers = 25 erg/cm²).

They follow particle coagulation at one pressure scale height above the midplane of the disk. Because of this, if the particles reach a critical stopping time (given by t_s Ω = 1, where Ω is the Kepler frequency), they rain out, meaning that these particles leave the volume of the simulation; the distribution function of the dust particles then collapses, as can be seen in their figures (Figs. 10 and 11 in Ormel et al. 2007).

We do not include this effect in our model, but we stop the simulation at the first rain-out event and compare our distribution functions up to this point. We use the same number of representative particles during the simulation as in the previous runs.

The comparison can be seen in Figure 9. The reader is advised to examine this figure together with Fig. 10c of Ormel et al. (2007), because that is the figure we reproduce here. Furthermore, we would like to point out that the scale of the y axis is different in the two figures: our figure shows two orders of magnitude of the normalized distribution functions, whereas their figure covers more than 10 orders of magnitude of the real distribution function.

Keeping these in mind, one can compare the results of the two Monte Carlo codes.

The continuous lines in Figure 9 in this chapter show the distribution functions at t = 10 years (thin line), 100 years (thicker line) and 1000 years (thickest line). The dotted line shows the distribution function at the time of the first compaction event (t = 1510 years), and the dashed line shows the distribution function at the first rain-out event (t = 2900 years). The same notation is used in Fig. 10c of Ormel et al. (2007).

We compared the position of the peaks of the distribution functions and the approximate shape of the curves. We can conclude that our code reproduces the results of Ormel et al. (2007) very well.

The required CPU time to perform this simulation is only 10 minutes. One might ask why the CPU time is almost ten times smaller now: why do the previous simulations, which used the same number of representative particles and simulated approximately the same time interval (3000 years), take so much longer? The required CPU time does not scale linearly with the number of particles used; it scales linearly with the number of collisions simulated. The difference between this run and the previous two simulations is fragmentation. In the simulations of Ormel et al. (2007) no fragmentation happens, because the growth timescales are longer. With our initial parameters, the first fragmentation event happens around 1000 years, and the population of small particles is never completely depleted after this time. As small particles are thereafter always present, the number of collisions is much higher than before.

#### e.4 Monomer size distribution

An interesting question which can easily be answered with our method is: how does a mixture of different-sized monomers change the maximum agglomerate size that can be reached? As we can see from Equations 17 and 18, the rolling energy is lower for smaller monomers, and of course the number of monomers in an agglomerate is much higher if the same agglomerate is built up of lighter monomers. This means a higher fragmentation energy, and one would therefore expect the particles to be harder to fragment, resulting in bigger grains.

We performed a simplified simulation to answer this question. Only two different monomer sizes are considered here, with the big monomer having four times the radius (and thus 64 times the mass) of the small one, and we assume that half of the mass (or representative particles) belongs to the small monomers and the other half to the big monomers.

One problem arises here with the rolling energy. The rolling energy changes with monomer size, and as our method cannot follow exactly the number of contacts in an aggregate and which kinds of monomers are connected, we are forced to use an averaged rolling energy. One has to consider carefully what the average rolling energy should be. In our case, the big monomer is 64 times heavier than the small monomer. If 50% of the mass of an aggregate is built up from small monomers, then, comparing the numbers of the different monomers, the small monomers are 64 times more numerous than the big ones. This means that the contribution of the small monomers to the average rolling energy (⟨E_roll⟩) should be correspondingly higher. This can be achieved by using the following weighting:

$\langle E_{\mathrm{roll}}\rangle = \frac{64\, E_{\mathrm{roll}}^{s} + E_{\mathrm{roll}}^{b}}{65},$ (24)

where E_roll^s is the rolling energy between two small monomers and E_roll^b is the rolling energy between two big monomers.
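For a 50/50 mass mixture this weighting is just a number-weighted mean, dominated by the far more numerous small monomers. A one-line helper (the 64:1 number ratio follows from the 64:1 mass ratio of the monomers; the energy values in the call are illustrative):

```python
def mean_rolling_energy(e_roll_small, e_roll_big, mass_ratio=64.0):
    """Number-weighted average rolling energy for a 50/50 mass mixture of
    two monomer species whose masses differ by mass_ratio (sketch of the
    weighting argued for in the text)."""
    n_small, n_big = mass_ratio, 1.0  # equal mass -> 64x more small monomers
    return (n_small * e_roll_small + n_big * e_roll_big) / (n_small + n_big)

print(mean_rolling_energy(1.0, 4.0))  # close to the small-monomer value
```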

## Appendix F Conclusions and outlook

We have shown that our representative particle method for aggregation of particles in astrophysical settings works well for standard kernels. It has the usual advantages of Monte Carlo methods that one can add particle properties easily and without loss of computational speed. Moreover, it naturally conserves the number of computational elements, so there is no need to “add” or “remove” particles. Each representative particle represents a fixed portion of the total mass of solids.

Our method may have various possible interesting extensions and applications. Here we speculate on a few of these. For instance, the fact that each representative particle corresponds to a fixed amount of solid mass makes the method ideal for implementation into spatially resolved models such as hydrodynamic simulations of planetary atmospheres or protoplanetary disks. We can then follow the exact motion of each representative particle through the possibly turbulent environment, and thereby automatically treat the stochastic nature and deviation from a Boltzmann distribution of the motion of particles with stopping times of the same order as the turbulent eddy turnover time. It is necessary, however, to ensure that a sufficiently large number of representative particles is present in each grid cell of the hydrodynamic simulation. For large-scale hydrodynamic simulations this may lead to a very large computational demand for the coagulation computation, as well as for tracking the exact motion of these particles. If strong clumping of the particles happens, however, much of the “action” happens in these “clumps” anyway, and it may then not be too critical that other grid cells are not sufficiently populated by representative particles. This, however, is something that has to be tested by experiment.

Our representative particle method can in principle also be used to model the sublimation and condensation of dust grains. If a particle sublimates then the representative particle becomes simply an atom or molecule of the vapor of this process. It will then follow the gas motion until the temperature becomes low enough that it can condense again. Other representative particles which are still in the solid phase may represent physical particles that can act as a condensation nucleus. Finally, in our method the properties of the particle can not only change due to collisions, but we can easily implement other environmental factors in the alteration of particle properties.

There are two main drawbacks of the method. First, it only works for large particle numbers, i.e. it cannot treat problems in which individual particles start dominating their immediate environment. Ormel’s method and its expected extension do not have this problem. Secondly, the method cannot be accelerated using implicit integration, while Brauer’s method can.

All in all we believe that this method may have interesting applications in the field of dust aggregation and droplet coagulation in protoplanetary disks and planetary atmospheres.


## Mapping the zoo of laboratory experiments

Based on ‘The outcome of protoplanetary dust growth: pebbles, boulders, or planetesimals? I. Mapping the zoo of laboratory collision experiments’ by C. Güttler, J. Blum, A. Zsom, C. W. Ormel & C. P. Dullemond published in A&A, 513, 56.

## Appendix G Introduction

The first stage of protoplanetary growth is still not fully understood. Although our empirical knowledge on the collisional properties of dust aggregates has considerably widened over the past years (Blum & Wurm, 2008), there is no self-consistent model for the growth of macroscopic dust aggregates in protoplanetary disks (PPDs). A reason for such a lack of understanding is the complexity in the collisional physics of dust aggregates. Earlier assumptions of perfect sticking have been experimentally proven false for most of the size and velocity ranges under consideration. Recent work also showed that fragmentation and porosity play important roles in mutual collisions between protoplanetary dust aggregates. In their review paper, Blum & Wurm (2008) show the complex diversity that is inherent to the collisional interaction of dust aggregates consisting of micrometer-sized (silicate) particles. This complexity is the reason why the outcome of the collisional evolution in PPDs is still unclear and why no ‘grand’ theory on the formation of planetesimals, based on firm physical principles, has so far been developed.

The theoretical understanding of the physics of dust-aggregate collisions has seen major progress in recent decades. The behavior of aggregate collisions at low collisional energies – where the aggregates show a fractal nature – is theoretically described by the molecular dynamics simulations of Dominik & Tielens (1997). The predictions of this model – concerning aggregate sticking, compaction, and catastrophic disruption – could be quantitatively confirmed by the laboratory collision experiments of Blum & Wurm (2000). Also, the collision behavior of macroscopic dust aggregates was successfully modeled by a smoothed particle hydrodynamics method, calibrated by laboratory experiments (Güttler et al., 2009; Geretshauser et al., 2009). These simulations were able to reproduce bouncing collisions, which were observed in many laboratory experiments (Blum & Wurm, 2008).

As laboratory experiments have shown, collisions between dust aggregates at intermediate energies and sizes are characterized by a plethora of outcomes: ranging from (partial) sticking, bouncing, and mass transfer to catastrophic fragmentation (see Blum & Wurm, 2008). From this complexity, it is clear that the construction of a simple theoretical model which agrees with all these observational constraints is very challenging. But in order to understand the formation of planetesimals, it is imperative to describe the entire phase-space of interest, i.e., to consider a wide range of aggregate masses, aggregate porosities, and collision velocities. Likewise, the collisional outcome is a key ingredient of any model that computes the time evolution of the dust size distribution. These collisional outcomes are mainly determined by the collision velocities of the dust aggregates, and these depend on the disk model, i.e. the gas and material density in the disk and the degree of turbulence. Thus, the choice of the disk model (including its evolution) is another major ingredient for dust evolution models.

These concerns lie behind the approach we adopt in this and subsequent chapters. That is, instead of first ‘funneling’ the experimental results through a (perhaps ill-conceived) theoretical collision model and then calculating the collisional evolution, we will directly use the experimental results as input for the collisional evolution model. The drawback of such an approach is of course that experiments on dust-aggregate collisions do not cover the whole parameter space and therefore need to be extrapolated by orders of magnitude, based on simple physical models whose accuracy might be challenged. We still feel that this drawback is more than justified by the prospects of our new approach: through a direct mapping of the laboratory experiments, collisional evolution models can increase enormously in their level of realism.

In this chapter, we will classify all existing dust-aggregate collision experiments for silicate dust, including three additional original experiments not published before, according to the above parameters (Sect. H). We will show that we have to distinguish between nine different kinds of collisional outcomes, which we physically describe in Sect. I. For later use in a growth model, we will sort these into a mass-velocity parameter space and find that we have to distinguish between eight regimes of porous and compact dust-aggregate projectiles and targets. We will present our collision model in Sect. J and the consequences for the porosities of the dust aggregates in Sect. K. In Sect. L, we conclude our work and give a critical review of our model and the necessary simplifications and extrapolations involved.

In the next chapter (Zsom et al., 2009) we will then, based upon the results presented here, follow the dust evolution using a recently developed Monte Carlo approach (Zsom & Dullemond, 2008) for three different disk models. This is the first fully self-consistent growth simulation for PPDs. The results presented there represent state-of-the-art modeling and will give us important insight into questions such as whether the meter-size barrier can be overcome and what the maximum dust-aggregate size in PPDs is, i.e. whether pebbles, boulders, or planetesimals can be formed.

## Appendix H Collision experiments with relevance to planetesimal formation

Table 1: Dust-aggregate collision experiments with relevance to planetesimal formation.

| | projectile mass [g] | collision velocity [cm s⁻¹] | micro-gravity | collisional outcome (see Fig. 11) | reference |
|---|---|---|---|---|---|
| Exp 1 | – | 0.1 – 1 | yes | S1 | Blum et al. (1998, 2002), Wurm & Blum (1998) |
| Exp 2 | – | 10 – 50 | yes | S1 | Wurm & Blum (1998) |
| Exp 3 | – | 0.02 – 0.17 | yes | S1 | Blum et al. (2000) |
| | – | 0.04 – 0.46 | yes | S1 | Krause & Blum (2004) |
| Exp 4 | – | 7 – | yes | S2 | Blum & Wurm (2000) |
| Exp 5 | – | 15 – 390 | yes | B1, F1 | Blum & Münch (1993) |
| | – | 15 – 390 | yes | B1, F1 | |
| Exp 6 | – | 10 – 170 | yes | S2, S3 | Langkowski et al. (2008) |
| | – | 50 – 200 | yes | B2, S2, S3 | |
| | – | 200 – 300 | yes | S3 | |
| Exp 7 | – | 20 – 300 | yes | S3 | Blum & Wurm (2008) |
| Exp 8 | – | 16 – 89 | no | S3 | Güttler et al. (2009) |
| Exp 9 | – | 10 – 40 | yes | B1 | D. Heißelmann et al. (in prep.) |
| | – | 5 – 20 | yes | B1 | |
| Exp 10 | – | 1 – 30 | no | B1 | Weidling et al. (2009) |
| Exp 11 | – | 320 – 570 | yes | F1 | Lammel (2008) |
| Exp 12 | – | – | no | F2 | R. Schräpler & J. Blum (in prep.) |
| Exp 13 | 0.2 – 0.3 | – | no | F2 | Wurm et al. (2005a) |
| Exp 14 | 0.2 – 0.3 | 350 – | yes | F2 | Paraskov et al. (2007) |
| Exp 15 | 0.39 | 600 – | no | S4 | Wurm et al. (2005b) |
| Exp 16 | – | 700 – 850 | no | S4 | Teiser & Wurm (2009a) |
| Exp 17 | – | 100 – | no | S4 | Sect. H.2.1 |
| Exp 18 | – | 10 – | no | B1, S2, S4 | Sect. H.2.2 |
| Exp 19 | – | 200 – 700 | yes | S4, F3 | Sect. H.2.3 |

In the past years, numerous laboratory and space experiments on the collisional evolution of protoplanetary dust have been performed (Blum & Wurm, 2008). Here, we concentrate on the dust evolution around a distance of 1 AU from the solar-type central star, where the ambient temperature is such that the dominating material class is silicates. This choice of 1 AU reflects the kind of laboratory experiments included in this chapter, which were all performed with SiO₂ grains or other refractory materials. The solid material in the outer solar nebula is dominated by ices, which possibly have very different material properties than silicates, but only a small fraction of laboratory experiments have dealt with these colder materials (ices, organic materials) or with warmer regions (oxides). In Sect. L.2, we will discuss the effect that another choice of material might have, but as we are far from even basically comprehending the collisional behavior of aggregates consisting of these materials, we concentrate in this study on the conditions relevant in the inner solar nebula around 1 AU.

Table 1 lists all relevant experiments that address collisions between dust aggregates of different masses, mass ratios, and porosities, consisting of micrometer-sized silicate dust grains, in the relevant range of collision velocities. Experiments 1 – 16 are taken from the literature (cited in Table 1), whereas experiments 17 – 19 are new and have not been published before. In the following two subsections we will first review the previously published experiments (Sect. H.1) and then introduce the experimental setup and results of the new experiments, which were performed to explore some regions of interest (Sect. H.2). These collisions show a diversity of outcomes, which we classify into nine types of collisional outcome, displayed in Fig. 11. Details on these collisional outcomes are presented in Sect. I.

#### h.1 A short review on collision experiments

We briefly review published results of dust-collision experiments here, since these determine the collisional mapping in Sects. I and J. The interested reader is referred to the review by Blum & Wurm (2008) for more information. All experiments are compiled and referenced in Table 1, where we also list the collision velocities and projectile masses, as these will be used in Sect. J. Most of the experiments in Table 1 (exception: Exp 10) were performed under low gas pressure conditions to match the situation in PPDs, and most of the experiments were carried out in the absence of gravity (i.e. with free falling aggregates or in micro-gravity facilities), see Col. 4 of Table 1. For the majority of the experiments, spherical monodisperse SiO₂ monomers with diameters between 1.0 and 1.9 μm were used; some experiments used irregular SiO₂ grains with a wider size distribution, and Exp 5 used irregular ZrSiO₄ grains.

*Exp 1 – 4:* A well-known growth mechanism for small dust aggregates is hit-and-stick growth, in which the aggregates collide with such a low kinetic energy that they stick to each other upon first contact without any restructuring. The first experiments to unambiguously show that the hit-and-stick process is relevant to protoplanetary dust aggregation were those by Wurm & Blum (1998), Blum et al. (1998, 2000, 2002) and Krause & Blum (2004). These proved that, as long as the collision velocities for small dust aggregates stay well below 100 cm s⁻¹, sticking collisions lead to the formation of fractal aggregates. This agrees with the molecular-dynamics simulations by Dominik & Tielens (1997) and Wada et al. (2007, 2008, 2009). The various experimental approaches of Exp 1 – 3 used all known sources of relative grain velocities in PPDs, i.e. Brownian motion (Exp 3), relative sedimentation (Exp 1), and gas turbulence (Exp 2). In these papers it was also shown that the hit-and-stick growth regime leads to a quasi-monodisperse evolution of the mean aggregate masses, depleting small grains efficiently and rapidly. For collisions between these fractal aggregates and a solid or dusty target, Blum & Wurm (2000, Exp 4) found growth at even higher velocities, in which the aggregates were restructured. This also agrees with molecular-dynamics simulations (Dominik & Tielens, 1997), and so this first stage of protoplanetary dust growth has so far been the only one that could be fully modeled.

*Exp 5:* Blum & Münch (1993) performed collision experiments between free falling ZrSiO₄ aggregates of intermediate porosity (the volume filling factor φ denoting the volume fraction of the solid material) at velocities in the range of 15 – 390 cm s⁻¹. They found no sticking: depending on the collision velocity, the aggregates either bounced (at the lower velocities) or fragmented into a power-law size distribution (at the higher velocities). The aggregate masses were varied over a wide range, and the mass ratio of the two collision partners also ranged from 1:1 to 1:66. The major difference to Exps 1 – 4, which presumably inhibited sticking in these collisions, lay in the aggregate masses and in their non-fractal but still very porous nature.

*Exp 6 – 8:* A new way of producing highly porous, macroscopic dust aggregates (volume filling factor φ = 0.15 for 1.5 μm diameter SiO₂ monospheres), described by Blum & Schräpler (2004), allowed new experiments using the 2.5 cm diameter aggregates as targets and fragments of these as projectiles (Langkowski et al., 2008, Exp 6). In their collision experiments in the Bremen drop tower, Langkowski et al. (2008) found that the projectile may either bounce off the target at intermediate velocities (50 – 250 cm s⁻¹) and aggregate sizes (0.5 – 2 mm), or stick to the target for higher or lower velocities and bigger or smaller sizes, respectively. This bouncing was preceded by a slight intrusion and accompanied by a mass transfer from the target to the projectile. Small and slow projectiles stuck to the target, while large and fast projectiles penetrated into the target and were geometrically embedded. They also found that the surface roughness plays an important role for the sticking efficiency: if a projectile hits a surface depression, it sticks, while it bounces off when hitting a hill with a small radius of curvature comparable to that of the projectile. A similar behavior, sticking by deep penetration, was also found by Blum & Wurm (2008, Exp 7) when the projectile aggregate is solid – a mm-sized glass bead in their case. Follow-up experiments on the penetration of a solid projectile (1 to 3 mm diameter) into the highly porous target (φ = 0.15, Blum & Schräpler, 2004) were performed by Güttler et al. (2009, Exp 8), who studied this setup for the calibration of a smoothed particle hydrodynamics (SPH) collision model. We will use their measurement of the penetration depth of the projectile.

*Exp 9 – 10:* As a follow-up to the study of Blum & Münch (1993), D. Heißelmann, H.J. Fraser and J. Blum (in prep., Exp 9) took 5 mm cubes of these highly porous (φ = 0.15) dust aggregates and collided them with each other or with a compact dust target (see Table 1 for the velocities). In both cases they too found bouncing of the aggregates and were able to confirm the low coefficient of restitution (the ratio of rebound to impact velocity) for central collisions. In their experiments they could not detect any deformation of the aggregates, due to the limited resolution of their camera, which could have explained the dissipation of energy. This line of experiments was taken up again by Weidling et al. (2009, Exp 10), who studied the compaction of the same kind of aggregates in repeated collisions with a solid target. They found that the aggregates decreased in size (without losing significant amounts of mass), which is a direct measurement of their porosity. After repeated collisions the aggregates were compacted by a factor of two in volume filling factor, and a maximum filling factor was reached for the velocities used in their experiments (1 – 30 cm s⁻¹). In four out of 18 experiments the aggregate broke into several pieces, from which they derived a fragmentation probability for the aggregate to break in a collision.

*Exp 11:* Fragments of the same high-porosity (φ = 0.15) dust aggregates of Blum & Schräpler (2004), as well as aggregates of intermediate porosity, were used by Lammel (2008, Exp 11), who continued the fragmentation experiments of Blum & Münch (1993). For velocities from 320 to 570 cm s⁻¹ he found fragmentation and measured the size of the largest fragment as a measure of the fragmentation strength.

*Exp 12 – 14:* Exposing the same highly porous (φ = 0.15) dust aggregates to a stream of single monomers at high velocities, R. Schräpler and J. Blum (in prep., Exp 12) found a significant erosion of the aggregate: at the higher velocities examined, one monomer impact can easily kick out tens of monomers. They estimated the minimum velocity for this process in an analytical model to be approx. 350 cm s⁻¹. On a larger scale, Wurm et al. (2005a, Exp 13) and Paraskov et al. (2007, Exp 14) impacted dust projectiles with masses of 0.2 to 0.3 g, as well as solid spheres, into loosely packed dust targets. Paraskov et al. (2007) were able to measure the mass loss of the target in drop-tower experiments, which – depending on velocity – was up to 35 projectile masses. The lowest velocity in these experiments was 350 cm s⁻¹.

*Exp 15 – 16:* In collisions between a projectile of intermediate porosity and a compressed dust target at velocities above 600 cm s⁻¹, Wurm et al. (2005b, Exp 15) found fragmentation of the projectile but also accretion of mass onto the target. This accretion was up to 0.6 projectile masses in a single collision, depending on the collision velocity. Teiser & Wurm (2009a, Exp 16) studied this partial sticking in many collisions, in which solid targets of variable sizes were exposed to 100 to 500 μm diameter dust aggregates with a mean velocity of 770 cm s⁻¹. Although they cannot give an accretion efficiency for a single collision, they found a large amount of mass accretion onto the targets, which is a combination of pure partial sticking and the effects of the Earth’s gravity. Teiser & Wurm (2009a) argue that this gravitational acceleration is equivalent to the acceleration that micron-sized particles would experience as a result of their erosion from a much bigger body that had (partially) decoupled from the gas motion in the solar nebula.

#### h.2 New experiments

In this section, we present new experiments which we performed to explore parameter regions for which no published data existed so far. All experiments cover collisions of porous aggregates with a solid target and were performed with the same experimental setup, consisting of a vacuum chamber (less than 0.1 mbar pressure) with a dust accelerator for the porous projectiles and an exchangeable target. The accelerator comprises a 50 cm long plastic rod with a diameter of 3 cm in a vacuum feedthrough. The pressure difference between the ambient air and the vacuum chamber drives a constant acceleration, leading to a projectile velocity of up to 900 cm s⁻¹, at which point the accelerator is abruptly stopped. The porous projectile flies on and collides either with a solid glass plate (Sects. H.2.1 and H.2.2) or with a free falling glass bead, which is dropped as the projectile is accelerated (Sect. H.2.3). The collision is observed with a high-speed camera to determine aggregate and fragment sizes and to distinguish between the collisional outcomes (i.e. sticking, bouncing, and fragmentation). The experiments in this section are also listed in Table 1 as Exp 17 to 19.

##### Fragmentation with mass transfer (Exp 17)

In this experiment, mm-sized aggregates of two different volume filling factors collided with a flat and solid glass target and fragmented, as the collision velocity was above the fragmentation threshold of approx. 100 cm s⁻¹. The projected projectile size and its velocity were measured with a high-speed camera (see Fig. 12). In a few experiments, the sizes of those produced fragments that were sharply resolved were measured, which yielded a size distribution for a representative number of fragments (the number of resolved fragments varied from 100 to 400). Assuming a spherical shape of the fragments and a porosity unchanged from that of the original projectile, we calculated a cumulative mass distribution as shown in Fig. 13, where the cumulative mass fraction $f(m_n) = \sum_{i=1}^{n} m_i / \sum_{i=1}^{N} m_i$ is plotted over the normalized fragment mass. Here, $m_i$ is the mass of the $i$-th smallest fragment, $\sum_{i=1}^{N} m_i$ is the total mass of all visible fragments, and $N$ is the total number of fragments. We found that the cumulative distribution can be well described by a power law

$$f(m) = \left(\frac{m}{\mu\, m_\mathrm{p}}\right)^{\kappa} \qquad (25)$$

where $m$ and $m_\mathrm{p}$ are the fragment mass and the projectile mass, and $\mu$ is a parameter measuring the strength of fragmentation, defined as the mass of the largest fragment divided by the mass of the original projectile. The deviation between the data and the power law at low masses (see Fig. 13) is due to the finite resolution of the camera, which could not detect the smallest fragments. In the ten experiments for which the mass distribution was determined, the power-law index $\kappa$ was nearly constant, ranging from 0.64 to 0.93 and showing no dependence on the velocity, which varied from 120 to 840 cm s⁻¹. However, a clear velocity dependence was found for the parameter $\mu$, which decreased with increasing velocity, as shown in the inset of Fig. 13. This increasing strength of fragmentation can be described as

$$\mu(v) = \left(\frac{v}{100\ \mathrm{cm\,s^{-1}}}\right)^{-\gamma} \qquad (26)$$

where the exponent $\gamma$ follows from the fit to the data. The curve was normalized so as to agree with the observed fragmentation threshold of 100 cm s⁻¹.

It is important to note that the number density of fragments of a given mass follows from Eq. 25 as

$$n(m)\,\mathrm{d}m \propto m^{\kappa - 2}\,\mathrm{d}m \qquad (27)$$

and that the power law for this mass distribution can be translated into a power-law size distribution $n(a) \propto a^{3\kappa - 4}$ (using $m \propto a^3$). This yields values from approximately $-2.1$ to $-1.2$, which is much flatter than the power-law index of $-3.5$ of the MRN distribution (Mathis et al., 1977), widely used for the description of high-speed fragmentation of solid materials. Moreover, this power-law index is consistent with the measurements of Blum & Münch (1993), who studied aggregate-aggregate collisions between millimeter-sized ZrSiO₄ aggregates (see Sect. H). For different velocities they also found a constant power-law index and a velocity-dependent pre-factor (their Fig. 8a).
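The translation from the cumulative power law to the differential mass and size distributions can be checked numerically; below is a minimal sketch (not the authors’ analysis code), assuming the cumulative mass fraction scales as $m^{\kappa}$ above a small lower cutoff:

```python
import numpy as np

kappa = 0.8  # within the measured range 0.64 - 0.93

# Differential fragment mass distribution n(m) ~ m**(kappa - 2)
m = np.logspace(-4, 0, 2000)   # fragment mass, normalized to the largest
n = m ** (kappa - 2.0)

# Cumulative mass fraction: integral of m' n(m') dm' up to m
cum = np.cumsum(m * n * np.gradient(m))
cum /= cum[-1]

# Its logarithmic slope (away from the lower cutoff) recovers kappa ...
slope = np.polyfit(np.log(m[1000:]), np.log(cum[1000:]), 1)[0]

# ... and with m ~ a**3 the size-distribution exponent is 3*kappa - 4
alpha = 3.0 * kappa - 4.0
```

For $\kappa$ between 0.64 and 0.93 this gives size-distribution exponents between about $-2.1$ and $-1.2$.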

While most of the projectile mass fragmented into a power-law distribution, some mass fraction stuck to the target (see bottom frame in Fig. 12). Therefore, the target was weighed before the collision and again after 19 shots on the same spot. Each projectile was also weighed beforehand to determine the mean projectile mass. The increasing mass of the target, in units of the projectile mass, is plotted in Fig. 14. After 57 collisions, dust chipped off the target, which can clearly be attributed to the influence of gravity. For the following measurements we therefore added one projectile mass to the target mass, because we found good agreement with the previous values for this offset. The measurements were fitted linearly, and the slope, which determines the mass gain in a single collision, was 2.3 % (S. Kothe, C. Güttler & J. Blum, unpublished data).

##### Impacts of small aggregates (Exp 18)

Using exactly the same setup as in the previous section, we performed collision experiments with very small (20 μm to 1.4 mm diameter) but non-fractal projectiles. These aggregates were fragments of larger dust samples as described by Blum & Schräpler (2004) and had a volume filling factor of φ = 0.15. In this experiment we observed not only fragmentation but also bouncing and sticking of the projectiles to the solid glass target. Thus, the analysis with the high-speed camera involved the measurement of projectile size, collision velocity, and collisional outcome, where we distinguished between (1) perfect sticking, (2) perfect bouncing without mass transfer, (3) fragmentation with partial sticking, and (4) bouncing with partial sticking. The difference between cases (3) and (4) is that in a fragmentation event at least two rebounding aggregates are produced, whereas in a bouncing collision only one aggregate bounces off.

For the broad parameter range in diameter (20 to 1400 μm) and velocity (10 to 900 cm s⁻¹), we performed 403 individual collisions in which we were able to measure size, velocity, and collisional outcome. Examples of sticking, bouncing, and fragmentation are shown in Fig. 15. The full set of data is plotted in Fig. 16, where different symbols denote the different collisional outcomes. Clearly, collisions of large aggregates at high velocities lead to fragmentation, while small aggregates tend to bounce off the target. For intermediate aggregate masses, all kinds of collisions can occur. The background color shows the sticking probability, which was calculated as a boxcar average (with a logarithmic box) at every node where an experiment was performed. Blue color denotes a poor sticking probability, while a green to yellow color shows a sticking probability of approx. 50 %. We draw the solid lines in Fig. 16 as a polygon marking the border between sticking and non-sticking, as we will use it in Sect. J. For the higher masses, this accounts for a bouncing-fragmentation threshold of 100 cm s⁻¹ (Exp 18), and for the lower masses we assume a constant fragmentation threshold of 200 cm s⁻¹, which roughly agrees with the restructuring-fragmentation threshold of Blum & Wurm (2000, Exp 4). For lower velocities outside the solid-line polygon, bouncing collisions are expected, whereas for higher velocities outside the polygon we expect fragmentation. The island of enhanced sticking probability for intermediate-mass aggregates over a broad velocity range from 30 to 500 cm s⁻¹ was rather unexpected. The dotted box is a rough borderline showing the parameter range covered by the experiments, as it will also be used in Sect. J.
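The boxcar-averaged sticking probability described above can be sketched as follows; this is a minimal illustration (not the original analysis code), and the box half-width of 0.5 decades is an assumed value:

```python
import numpy as np

def sticking_probability_map(mass, velocity, stuck, box_dex=0.5):
    """Boxcar-averaged sticking probability at each experiment node:
    the fraction of sticking outcomes among all collisions whose mass
    and velocity lie within +-box_dex decades of the node (logarithmic
    box).  All arguments are equally long 1-D arrays; 'stuck' is boolean."""
    logm, logv = np.log10(mass), np.log10(velocity)
    prob = np.empty(len(mass))
    for i in range(len(mass)):
        box = (np.abs(logm - logm[i]) <= box_dex) & \
              (np.abs(logv - logv[i]) <= box_dex)
        prob[i] = stuck[box].mean()
    return prob
```

On the real data set, the masses and velocities of all 403 collisions and a boolean sticking flag would be passed in; the box width is a tunable smoothing parameter.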

##### Collisions between similar sized solid and porous aggregates (Exp 19)

In collisions between a free falling glass bead of 1 mm diameter and a porous dust aggregate of 1.5 to 8.5 mg mass, we observed fragmentation of the porous aggregate, while some mass grew onto the solid and indestructible glass bead (S. Olliges & J. Blum, unpublished data). In this case, the high-speed camera was used with a 3D optics that allowed imaging of the collision from two angles, separated by 90°. On the one hand, this made it possible to measure the impact parameter exactly, even if the offset of the two collision partners lies along the line of sight of one viewing angle. On the other hand, the mass growth of the solid projectile is then not only a projection in one direction but can be reconstructed to yield a 3D measurement. The relative velocity and aggregate size were accordingly measured from the images before the collision, while the mass gain of the solid glass bead was measured after the collision. Figure 17 shows the volume gain, in units of projectile volume (projectile: porous aggregate), over the collision velocity. The three data points are averaged over a number of experiments at the same velocity; the error bars denote the standard deviation of collision velocities and projectile volumes, respectively. A clear trend shows that the volume gain of the solid particle decreases with velocity, and we fitted the data points with

$$\Delta V(v) = c\, v^{-\beta}\, V_\mathrm{b} \qquad (28)$$

where $V_\mathrm{b}$ is the volume of the glass bead and $c$ and $\beta$ are positive fit constants. In this experiment we were not able to measure the size distribution of the fragments, because the absolute velocity is set by the projectile velocity (up to 600 cm s⁻¹) and the faster fragments left the frame before they were clearly separated from each other.

## Appendix I Classification of the laboratory experiments

In this section, the experiments outlined above will be categorized according to their physical outcomes in the respective collisions. In Sect. H, we saw that various kinds of sticking, bouncing, and fragmentation can occur. Here, we will keep all these experiments in mind and classify them according to nine kinds of possible collisional outcomes that were observed in laboratory experiments. These collisional outcomes are displayed in Fig. 11. The denomination of the classification follows S for sticking, B for bouncing, and F for fragmentation. S and F are meant with respect to the target, i.e. the more massive of the two collision partners. We will discuss each of the pictograms in Fig. 11, describe the motivation for the respective collisional outcomes and physically quantify the outcome of these collisions.

(1) Sticking collisions: A well-known growth mechanism is due to hit-and-stick (S1) collisions. Hit-and-stick growth was observed in the laboratory (Blum & Wurm, 2000; Blum et al., 2000) and numerically described (Dominik & Tielens, 1997). Experiments show that the mass distribution during the initial growth phase is always quasi-monodisperse. The evolution of the mean mass within an ensemble of dust aggregates due to hit-and-stick (S1) collisions was calculated to follow a power law in time, in good agreement with the experiments (Wurm & Blum, 1998; Krause & Blum, 2004). Dominik & Tielens (1997) showed theoretically, and Blum & Wurm (2000) confirmed experimentally, that small fractal aggregates stick at first contact if their collision energy is smaller than a threshold energy. For higher energies, experiments showed that an aggregate is elastically and plastically deformed at the contact zone (Blum & Münch, 1993; Weidling et al., 2009). This increases the number of contacts, which can then lead to sticking at higher velocities, an effect we call sticking through surface effects (S2). Langkowski et al. (2008) found that sticking can occur at even larger velocities if the target aggregate is porous and significantly larger than the projectile. In this case, the projectile sticks by deep penetration (S3) into the target and cannot rebound, simply for geometrical reasons. This effect also holds if the projectile aggregate is compact, which has been shown by Blum & Wurm (2008) and further studied by Güttler et al. (2009). In Sect. H.2.1, we saw that growth of a solid target can occur if a porous projectile fragments and partially sticks to the target surface (S4). This growth mechanism was already described by Wurm et al. (2005b), and Teiser & Wurm (2009a) found it to be an efficient growth mechanism in multiple collisions.

(2) Bouncing collisions: If the collision velocity of two dust aggregates is too low for fragmentation and too high for sticking, the dust aggregates will bounce (B1). D. Heißelmann et al. (in prep.) found highly inelastic bouncing between similar-sized porous dust aggregates and between a dust aggregate and a dusty but rather compact target, where 95 % of the kinetic energy was dissipated. Weidling et al. (2009) showed that the energy can effectively be dissipated by a significant (and, for a single collision, undetectable) compaction of the porous aggregates over multiple collisions (collisional outcome bouncing with compaction (B1)). Another kind of bouncing occurred in the experiments of Langkowski et al. (2008), in which a porous projectile collided with a significantly bigger and also highly porous target aggregate. If the penetration of the aggregate was too shallow for S3 sticking to occur, the projectile bounced off and took away mass from the target aggregate. This bouncing with mass transfer (B2) was also observed in the case of compact projectiles (Blum & Wurm, 2008).
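The quoted energy dissipation translates directly into a coefficient of restitution $\epsilon$ (the ratio of rebound to impact speed), a one-line consistency check with the central-collision bouncing experiments above:

$$\epsilon = \frac{v'}{v} = \sqrt{\frac{E_\mathrm{kin}'}{E_\mathrm{kin}}} = \sqrt{1 - 0.95} \approx 0.22$$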

(3) Fragmenting collisions: Fragmentation (F1), i.e. the breakup of dust aggregates, occurs in collisions between similar-sized dust aggregates at velocities above the fragmentation threshold. Blum & Münch (1993) showed that both aggregates are then disrupted into a power-law size distribution. If a target aggregate is exposed to impacts of single monomer grains or very small dust aggregates, R. Schräpler & J. Blum (in prep.) found that the target aggregate is efficiently eroded (F2) if the impact velocities exceed approx. 350 cm s⁻¹. This mass loss of the target was also observed for impacts of larger projectiles into porous targets (Wurm et al., 2005a; Paraskov et al., 2007). Similar to F1 fragmentation, it may occur that one aggregate is porous while the other one is compact. In that case, the porous aggregate fragments but cannot destroy the compact aggregate; instead, the compact aggregate accretes mass from the porous aggregate (Sect. H.2.3). We call this fragmentation with mass transfer (F3).

These nine fundamental kinds of collisions are all based on firm laboratory results. Future experiments will almost certainly modify this picture and may add so far unknown collisional outcomes to this list, but at the present time it represents the complete picture of possible collisional outcomes. Below, we quantify the thresholds and boundaries between the different collision regimes and physically characterize the collisional outcomes therein.
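For bookkeeping in a coagulation code, the outcomes named in this section can be encoded in a small lookup structure. The following sketch (Python; the two threshold parameters and the velocity-only dispatch are our simplification, not the model's actual decision tree, which also branches on mass ratio and porosity):

```python
from enum import Enum

class Outcome(Enum):
    """Collision outcomes discussed in this section (labels follow the text)."""
    S1 = "hit-and-stick growth"
    S2 = "sticking by surface effects"
    S3 = "sticking by deep penetration"
    B1 = "bouncing with compaction"
    B2 = "bouncing with mass transfer"
    F1 = "fragmentation"
    F2 = "erosion"
    F3 = "fragmentation with mass transfer"

def classify(v, v_stick, v_frag):
    """Coarse velocity-only dispatch for similar-sized porous aggregates:
    sticking below v_stick, fragmentation above v_frag, bouncing between."""
    if v < v_stick:
        return Outcome.S1
    if v > v_frag:
        return Outcome.F1
    return Outcome.B1
```

A full implementation would replace `classify` with a lookup over projectile/target porosity and mass ratio, returning the appropriate member of `Outcome` for each regime.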

#### S1: Hit-and-stick growth

Hit-and-stick growth occurs when the collision energy involved is less than $5E_\mathrm{roll}$ (Dominik & Tielens, 1997; Blum & Wurm, 2000), where $E_\mathrm{roll}$ is the energy which is dissipated when one dust grain rolls over another by an angle of $\pi/2$. We can calculate the upper threshold velocity for the hit-and-stick mechanism of two dust grains by using the relation between rolling energy and rolling force, i.e.

$$E_\mathrm{roll} = \frac{\pi}{2}\,a_0 F_\mathrm{roll}. \tag{29}$$

Here, $a_0$ is the radius of a dust grain and $F_\mathrm{roll}$ is the rolling force. Thus, we are inside the hit-and-stick regime if

$$\frac{1}{2}\mu v^2 \leq 5 E_\mathrm{roll}, \tag{30}$$

where $\mu = m_1 m_2/(m_1 + m_2)$ is the reduced mass of the aggregates. The hit-and-stick velocity range is then given by

$$v \leq \sqrt{\frac{5\pi a_0 F_\mathrm{roll}}{\mu}}. \tag{31}$$
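As a sanity check, Eqs. 29–31 can be evaluated numerically. The sketch below (Python, cgs units) uses illustrative monomer parameters — a 0.75 μm grain radius, a rolling force of order 10⁻⁵ dyn, and a material density of 2 g cm⁻³, all assumptions on our part rather than values quoted in this section:

```python
import math

A0 = 0.75e-4      # assumed monomer radius a0 [cm]
F_ROLL = 8.5e-5   # assumed rolling force F_roll [dyn]
RHO = 2.0         # assumed monomer material density [g cm^-3]

def e_roll():
    """Rolling energy, Eq. (29): E_roll = (pi/2) a0 F_roll [erg]."""
    return 0.5 * math.pi * A0 * F_ROLL

def v_hit_and_stick(mu):
    """Hit-and-stick threshold of Eq. (31) for reduced mass mu [g]:
    v = sqrt(5 pi a0 F_roll / mu) [cm s^-1]."""
    return math.sqrt(5.0 * math.pi * A0 * F_ROLL / mu)

# Reduced mass of two identical monomer grains:
m0 = 4.0 / 3.0 * math.pi * A0**3 * RHO
mu0 = m0 / 2.0
```

For these assumed numbers, the monomer–monomer threshold comes out in the m s⁻¹ range; since the threshold scales as $\mu^{-1/2}$, larger aggregates enter the hit-and-stick regime only at ever lower velocities.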

#### S2: Sticking by surface effects

For velocities exceeding the hit-and-stick threshold velocity (Eq. 31), we assume sticking because of an increased contact area due to surface flattening and, therefore, an increased number of sticking grain–grain contacts. For the calculation of the contact area, we assume an elastic deformation of the aggregate (Hertz, 1881) and obtain a radius of the contact area of

$$r_\mathrm{c} = \left(\frac{15}{16}\,\frac{\mu v^2 {R^*}^2}{G}\right)^{1/5}. \tag{32}$$

Here, $v$ is the collision velocity, $G$ is the shear modulus, and $R^* = R_1 R_2/(R_1 + R_2)$ is the reduced radius. We estimate the shear modulus with the shear strength, which follows after Sirono (2004) as the geometric mean of the compressive strength $\Sigma$ and the tensile strength $T$. Both strengths were measured (in units of dyn cm⁻²) by Blum & Schräpler (2004), so we take $G = \sqrt{\Sigma T}$ for the shear modulus, which is consistent with estimates of Weidling et al. (2009).

The energy of a pair of bouncing aggregates after the collision is

$$E = \frac{1}{2}\mu\,(\varepsilon v)^2, \tag{33}$$

with the coefficient of restitution $\varepsilon$. The contact energy of the flattened surface in contact is

$$E_\mathrm{c} = \left(\frac{r_\mathrm{c}}{a_0}\right)^2 E_\mathrm{stick}, \tag{34}$$

where $E_\mathrm{stick}$ is the sticking energy of a monomer grain with the radius $a_0$. We expect sticking for $E < E_\mathrm{c}$; thus,

$$\frac{1}{2}\mu\varepsilon^2 v^2 < \left(\frac{r_\mathrm{c}}{a_0}\right)^2 E_\mathrm{stick} \tag{35}$$

$$v < \left(\frac{2 E_\mathrm{stick}}{\mu \varepsilon^2 a_0^2}\right)^{5/6} \left(\frac{15}{16}\,\frac{\mu {R^*}^2}{G}\right)^{1/3}. \tag{36}$$

This is the threshold velocity for sticking through surface effects (S2). It is based on the Hertzian deformation, which is of course a simplified model, but one that has proven to be a good concept in many attempts to describe the slight deformation of porous dust aggregates (Langkowski et al., 2008; Weidling et al., 2009).
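The chain from Eq. 32 to Eq. 36 can be verified numerically: at the threshold velocity, the residual kinetic energy after bouncing (Eq. 33) must exactly equal the contact energy (Eq. 34). A sketch in Python (cgs units; all parameter values in the usage below are illustrative assumptions, not measured values from this section):

```python
def contact_radius(mu, v, r_red, g_mod):
    """Hertzian contact radius, Eq. (32) [cm], for reduced mass mu [g],
    collision velocity v [cm/s], reduced radius r_red [cm], and shear
    modulus g_mod [dyn cm^-2]."""
    return (15.0 * mu * v**2 * r_red**2 / (16.0 * g_mod)) ** 0.2

def v_stick_surface(mu, r_red, g_mod, eps, e_stick, a0):
    """S2 sticking threshold, Eq. (36): equate the residual kinetic energy
    (1/2) mu (eps v)^2 of Eq. (33) with the contact energy
    (r_c/a0)^2 E_stick of Eq. (34) and solve for v [cm/s]."""
    return ((2.0 * e_stick / (mu * eps**2 * a0**2)) ** (5.0 / 6.0)
            * (15.0 * mu * r_red**2 / (16.0 * g_mod)) ** (1.0 / 3.0))
```

Because $r_\mathrm{c}$ itself grows with $v^{2/5}$, the energy balance is an implicit condition in $v$; the closed form above results from collecting the powers of $v$ on one side.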

We have to ensure that the centrifugal force of two rotating aggregates, sticking as described above, does not tear them apart, which is the case if

$$F_\mathrm{c} < \pi r_\mathrm{c}^2\, T, \tag{37}$$

where $T$ is the tensile strength of the aggregate material. The centrifugal force in the worst case of a perfectly grazing collision is

$$F_\mathrm{c} = \frac{\mu v_\mathrm{t}^2}{R_1 + R_2}, \tag{38}$$

where $R_1 + R_2$ is a conservative estimate for the radial distance of the masses with the tangential velocity $v_\mathrm{t}$. Thus, only collisions with velocities

$$v_\mathrm{t} < \sqrt{\frac{\pi r_\mathrm{c}^2\, T\,(R_1 + R_2)}{\mu}} \tag{39}$$

can lead to sticking. For the relevant parameter range (see Table 2 below), the threshold velocity in Eq. 39 is always significantly greater than the sticking velocity in Eq. 36; thus, we can take Eq. 36 as the relevant velocity for the process S2.
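This ordering of the two thresholds can be probed numerically. In the sketch below (Python, cgs units, with an illustrative parameter set of our own choosing), the centrifugal limit of Eq. 39 is evaluated with the contact radius that a collision at velocity v would produce; for these numbers it stays well above v itself throughout the sticking regime:

```python
import math

# Illustrative (assumed) parameters in cgs units:
MU = 1.0e-6       # reduced mass [g]
R_RED = 5.0e-3    # reduced radius R* [cm]
R_SUM = 2.0e-2    # R1 + R2 [cm]
G_MOD = 3.0e4     # shear modulus G [dyn cm^-2]
TENSILE = 1.0e3   # tensile strength T [dyn cm^-2]

def contact_radius(v):
    """Hertzian contact radius of Eq. (32) at collision velocity v [cm/s]."""
    return (15.0 * MU * v**2 * R_RED**2 / (16.0 * G_MOD)) ** 0.2

def v_centrifugal(v):
    """Eq. (39) threshold [cm/s], using the contact radius that a
    collision at velocity v would form."""
    rc = contact_radius(v)
    return math.sqrt(math.pi * rc**2 * TENSILE * R_SUM / MU)
```

Since the centrifugal limit scales only as $v^{2/5}$ through $r_\mathrm{c}$, it eventually crosses the line $v_\mathrm{t} = v$, but for plausible aggregate parameters that crossing lies above the S2 sticking velocity, which is the statement in the text.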

We will use this kind of sticking not only within the mass and velocity thresholds defined by Eq. 36, but also for collisions in which we observe sticking that cannot so far be explained by any model, as in experiments 6 or 18. For all these cases, we assume the porosities of target and projectile to be unchanged, disregarding the slight compaction needed for the deformation. One exception is the sticking of small, fractal aggregates, which clearly goes together with a compaction of the projectile (Dominik & Tielens, 1997; Blum & Wurm, 2000). In these cases we assume a compaction of the projectile by a factor of 1.5 in volume filling factor, as there is no precise measurement of this compaction.

#### S3: Sticking by deep penetration

If the target aggregate is much larger than the projectile, porous, and flat, an impact of a (porous or compact) projectile results in its penetration into the target. Sticking is inevitable if the penetration of the projectile is deep enough, i.e. deeper than one projectile radius; in that case, the projectile cannot bounce off the target for geometric reasons. This was found in the experiments of Langkowski et al. (2008) for porous projectiles and by Blum & Wurm (2008) for solid projectiles. For such deep penetrations, the mass of the target is augmented by the mass of the projectile, and the mass and volume of the new aggregate read

$$m = m_\mathrm{t} + m_\mathrm{p} \tag{40}$$

$$V = V_\mathrm{t} + V_\mathrm{p}, \tag{41}$$

with $V_\mathrm{p}$ and $V_\mathrm{t}$ being the volumes of the projectile and the target, respectively. We distinguish between compact and porous projectiles and use the experiments of Güttler et al. (2009) and Langkowski et al. (2008) on impacts into dust aggregates to calculate the sticking threshold velocities.
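Under this simple mass and volume bookkeeping, the S3 outcome reduces to a one-line merge; the volume filling factor of the merged aggregate then follows from the added mass and volume, given a monomer material density. A hypothetical sketch (the density value is an assumption of ours):

```python
RHO_MONOMER = 2.0  # assumed monomer material density [g cm^-3]

def s3_merge(m_t, vol_t, m_p, vol_p):
    """S3 deep-penetration outcome, Eqs. (40)-(41): the projectile is
    swallowed whole, so mass and volume simply add. Returns the new
    aggregate's (mass [g], volume [cm^3], volume filling factor), where
    the filling factor is m / (rho * V) for the assumed monomer density."""
    m = m_t + m_p
    vol = vol_t + vol_p
    return m, vol, m / (RHO_MONOMER * vol)
```

Note that this prescription leaves the target porosity unchanged in the limit of a small projectile, consistent with the projectile being buried without compacting the bulk of the target.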

For compact projectiles, we use the linear relation for the penetration depth found by Güttler et al. (2009),

$$D = b\,v, \tag{42}$$

where