Université Paris 7 Diderot - Sorbonne Paris Cité
Ecole Doctorale 560 - STEP’UP
“Sciences de la Terre et de l’Environnement
et Physique de l’Univers de Paris”
Doctoral Thesis
Wind accretion onto compact objects
Ileyk El Mellah
for the title of
Docteur de l’Université Paris 7 Diderot - Sorbonne Paris Cité
Thesis supervised by Andrea Goldwurm & Fabien Casse
at the laboratoire AstroParticule et Cosmologie
presented and publicly defended on 7 September 2016 before a jury composed of:
President: Pr Stéphane Corbel - AIM CEA Saclay
Director: Dr Andrea Goldwurm - APC Paris 7
Co-director: Dr Fabien Casse - APC Paris 7
Rapporteurs: Pr Maximilian Ruffert - University of Edinburgh; Dr Guillaume Dubus - IPAG
Examiners: Dr Thierry Foglizzo - AIM CEA Saclay; Pr Rony Keppens - KU Leuven
I wish to express my warmest thanks to Fabien Casse and Andrea Goldwurm, who were attentive and available thesis advisors despite their many responsibilities. Their competence, scientific rigor and clear-sightedness taught me a great deal. They have been, and will remain, driving forces of my work as a researcher.
I thank Guillaume Dubus and Maximilian Ruffert for agreeing to act as rapporteurs for this work and to review my manuscript. Their insightful comments played a major role in the improvements brought to this document. I also thank the examiners for taking part in my thesis jury: Rony Keppens for the enriching stays in Leuven spent exploring the inner workings of the code, Stéphane Corbel for his mentorship throughout my thesis, and Thierry Foglizzo for rekindling my curiosity in physics through invaluable brainstorming sessions.
I would like to express my deepest gratitude to Saul Rappaport for having granted me a decisive first experience in research. This year at MIT was the founding step of my career, thanks to the incredibly stimulating environment I found there. Thank you for having been such an inspiring figure, Saul; the intelligence, culture and imagination which fuel your work are inexhaustible sources of motivation.
My thanks also go to Peggy Varnière and Fabrice Dodu for their daily support, both professional and friendly. Numerical computing is a treacherous art on whose shores I would surely have run aground without their welcome help.
It is with deep gratitude that I now turn to those who offered me their advice and assistance during my thesis: Jérôme Rodriguez, Zakaria Meliani, Héloïse Méheut, Allard Jan Van Marle, Rémi Kazeroni, Frédéric Vincent, Régis Terrier, Alexis Coleiro, Sasha Tchekhovskoy, Geoffroy Lesur, Benoît Commerçon and many others whom I apologize for not naming here, but to whom I wish to say how aware I am of all that I owe them. Thanks also to the attentive proofreaders of this manuscript for the precious time they saved me by tracking down typos and other slips.
Thanks to the technical, administrative and security staff of the Condorcet building for their presence, responsiveness and kindness; to Martine Piochaud for her good humor; to Brahim Mazzar for the friendly exchanges over late-night coffees; to Nabouana Cissé for her morning refrains; and to François Carré, Ribeiro Lima Reinaldo, Joëlle Taieb, Ludovic Davila and Sabine Tesson for extricating me from inextricable administrative pitfalls.
Many thanks to all the wonderful PhD students and postdocs, past and present, for making social life at APC an ever-renewed pleasure. The moments of complicity we shared counted for much in my ability to carry this thesis work through.
For their unwavering support and boundless understanding of my absences and lapses, I especially thank my friends and loved ones, without whom none of this would have been possible. From kindly looks to words of encouragement, I do not know where to begin; I forget nothing of the tenderness and depth of our relationships, without which my life would not have the same flavor.
And to my lucky star, for showing me the way, thank you.
I Physical objects & numerical tools
1 Accreting compact objects
2 X-ray sources
3 Numerical tools
- 3.1 Conservation laws
- 3.2 Numerical scheme
- 3.3 Splendors & miseries of HPC
II Planar BHL accretion onto a compact object
4 Accretion of zero-angular momentum flows onto an isolated point-mass
- 4.1 Ballistic trajectories
- 4.2 Hydrodynamical intermezzo : the Bondi spherical model
5 Numerical simulations of planar accretion onto a compact body
- 5.1 Hydrodynamical approaches of BHL flows
- 5.2 Numerical implementation
- 5.3 The BHL shock
- 5.4 Accretion properties of the BHL flow
- 5.5 Discussion
III Wind accretion in persistent Supergiant X-ray Binaries
6 Winds of isolated hot massive stars
- 6.1 Continuous radiative pressure
- 6.2 Global properties of a line-driven wind
- 6.3 Dynamics of the wind
- 6.4 Conclusion
7 Binary systems
- 7.1 The Roche model
- 7.2 Mass transfer
- 7.3 Tidal interactions
8 Persistent Supergiant X-ray Binaries
- 8.1 Modelling wind accretion
- 8.2 Numerical launching of the stellar wind
- 8.3 Wind motion in a Roche potential : setting the stage
- 8.4 Structure and properties of the flow at the orbital scale
- 8.5 Wind accretion on stage
- 8.6 Discussion
- Conclusion & perspectives
- A.1 Adiabatic shock jump conditions
- A.2 Absorption in a nutshell
- A.3 Surface on the sphere
- A.4 The WASO interface
- A.5 MPI-AMRVAC
Let the last glimmers of daylight fade below the horizon, and myriads of sparkles emerge from the darkness, casting a new light upon the world. While a few stray outsiders wander off the beaten tracks, the vast majority of these flickering lanterns delights in embarking, every evening, on a well-oiled celestial merry-go-round. The apparent solid-body rotation of the supra-planetary vault long misled thinkers, who imagined the sky pierced with holes opening onto an immensity bathed in light. It took the audacity of a few to give depth to this background dome. Parallax measurements suddenly took on their full meaning and suspended each star in a space whose dimensions verged on the infinite. The great passive canvas gave way to a void which offended finalism as much as anthropocentrism: if the stars were endowed with an existence of their own, perhaps they were not so different from the Sun, which had already robbed the Earth of its centrality. The idea of a plurality of worlds made headway, but the distances at stake mocked our difficulties in picturing the extent of the solar microcosm. In the 1830s, Auguste Comte called the stars to witness in order to organize positivist epistemology into a universal and fundamental physics on one side and an embodied physics on the other:
"Astronomical phenomena being the most general, the most simple, the most abstract of all, it is evidently with their study that natural philosophy must begin, since the laws to which they are subject influence those of all other phenomena, of which they are themselves, on the contrary, essentially independent. […] when one analyzes the simplest terrestrial phenomenon, not only taking a chemical phenomenon, but even choosing a purely mechanical one, one constantly finds it more composite than the most complicated celestial phenomenon. […] Such a consideration clearly shows how indispensable it is to sharply separate celestial physics from terrestrial physics […]."
One may know the hearts of men [1: The social physics Comte developed lays the foundations of a mechanistic sociology.] but not the intimate nature of the stars. A few decades later, the rise of spectroscopy would refute this unfortunate prediction. Stars found themselves adorned with features of their own; to each its spectrum, a signature concealing the torments of a singular history. For indeed, the century which saw the birth of History [2: One thinks here both of the birth of historicism and of the craze for the relics displayed in museums, which stole the limelight from the cabinets of curiosities of the 18th-century encyclopedists.] questioned the permanence of these fallen representatives of a sacred realm once sheltered from corruption. For the first time, people seriously wondered what wood fed these celestial pyres. While the second Industrial Revolution underlined the strategic importance of fossil fuels, the opulence of the heavenly bodies aroused envy. How could such prodigious luminosities be sustained over timescales whose measure escapes our calendars? Even before nuclear Physics offered an answer to this question, a sublime idea germinated to sidestep the chronological difficulty: if the ergodic hypothesis had proved so fruitful in reducing time averages to ensemble averages, why not see, behind the diversity of the stars, the manifestation of a temporality? The observed differences would then be ascribed to different ages and, laid end to end, the snapshots would trace the history of a star, which would thereby recover its universality. If the principle flourished, it also revealed a series of systematic deviations, so many trapdoors through which the star slipped away again, giving way to families of stars [3: Via the Hertzsprung-Russell diagram and the classification into spectral types.].
Once coupled with the discoveries about particles and their interactions with radiation, the first narrative thread of the history of the stars as a function of their initial mass was woven. This scientific feat marked the birth of Astrophysics.
The reconciliation of celestial and terrestrial physics inspired a profusion of works on the multitude of astral bodies, all the more so as Quantum Mechanics and General Relativity opened doors onto unsuspected landscapes. While Cosmology took flight, astrophysicists wondered about the remnant left by a burnt-out star. One cruise and one stroke of genius later, the white dwarf entered the pantheon of compact bodies. It inaugurated the family of stellar remnants, soon to be joined by neutron stars and black holes, two other traveling companions of unrivaled density. While white dwarfs quickly showed up in observations, neutron stars and black holes long left the community perplexed. How could objects of such reduced, or even vanished, spatial extension ever be observed? That was without counting on the impact of mass transfer in binary systems and the prodigious efficiency of the accretion process onto a compact object at converting the gravitational potential energy of the flow into X-rays [4: The concomitant discovery of pulsars and quasars also brought grist to the compact mill.]. The progress of high-energy astronomy revealed the first extra-solar sources in the 1960s, and a decade later the theoretical foundations of accretion onto compact objects were laid: the existence of neutron stars was ratified and that of black holes seriously considered.
The astral saga does not stop there, however. Although stellar evolution follows the trends sketched by the models, contemporary observations highlight systematic anomalies which put the theory to the test. In particular, the role of binarity in the journey of a star, from the ignition of the thermonuclear reactions in its core to its compact existence, remains a subject of study. X-ray binaries offer a precious snapshot of this paired evolution. The melancholy waltz danced there between a star and its late companion, reduced to the state of a neutron star or black hole, is the privileged stage of a gargantuan feast. In place of the classical mass transfers between stars, one finds a flow which sinks deep into the gravitational field of the compact accretor before meeting there either a magnetosphere or an event horizon. The potential energy thus lost is converted first into kinetic energy, then possibly into the radiation responsible for the observed spectra.
Yet, here again, not all mass transfers are equal. The source of matter was long incriminated to explain the diversity of accretion profiles, leading to a classification into low-mass and high-mass X-ray binaries, in reference to the bimodality of the stellar measurements observed in these systems: on one side, stars of spectral type F or later, and on the other, stars of several solar masses of type O or B, the first population being associated with orbital periods markedly shorter than the second. One can show that a star is all the more likely to deposit its matter directly into the potential well of its compact companion as the orbital period is short. Moreover, should this situation arise in a configuration where the stellar donor is more massive than the compact accretor (generally of mass below 2 M⊙), the transfer would generally be unstable. These two remarks tend to immunize massive stars against the voracious appetite of the compact remnant, whereas in low-mass X-ray binaries the stellar matter readily flows along a narrow channel and forms a thin disc around the accretor. There, the gas slowly spirals in, heating up under the action of a turbulent viscosity and reaching Dantean temperatures associated with X-ray emission.
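The dependence of direct (Roche lobe) mass transfer on the mass ratio can be made concrete with the standard Eggleton (1983) fitting formula for the Roche lobe radius. This is an illustrative aside, not a computation taken from the manuscript:

```python
import math

# Eggleton (1983) approximation to the Roche lobe radius R_L of the donor,
# in units of the orbital separation a, with q = M_donor / M_accretor.
# Illustrative sketch only: the fit is standard but not derived in this text.
def roche_lobe_radius(q):
    q23 = q ** (2.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + math.log(1.0 + q ** (1.0 / 3.0)))

# Equal masses: the donor's Roche lobe spans ~38% of the separation.
print(roche_lobe_radius(1.0))
# A 20 M_sun donor with a 1.5 M_sun accretor (q ~ 13) fills ~60% of the
# separation: overflowing donors much heavier than the accretor tend to
# transfer mass unstably.
print(roche_lobe_radius(20 / 1.5))
```

The formula is accurate to about 1% over all mass ratios, which is why it is the usual shortcut when a full Roche potential computation is not needed.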
However, the very observation of X-ray binaries where the companion is a high-mass star too deeply buried in its own gravitational field for the compact object to sip matter straight from the photosphere shows that other modes of mass transfer exist. The favored scenario in this case draws on works from the 1970s which highlighted the capacity of massive stars to sustain a transonic outflow far stronger than the breeze emitted by less massive stars like the Sun. Mass accretion by the compact object can then proceed opportunistically, given the large kinetic energy the wind acquires during its launch: only a fraction of the wind will see its course seriously deflected by the fortuitous intervention of the gravitational field of the compact object. It then becomes conceivable for this fraction to form a shock around the accretor, dissipating enough energy for some fluid particles not to escape. The mass loss rates of OB stars are estimated at levels so high that, despite the tiny fraction of the wind finally captured, the absolute amount of mass accreted per unit time can reach levels almost comparable to those of low-mass X-ray binaries. But what of the geometry of the accreted flow, the feature which conditions the emitted spectrum? Is the X-ray emission associated with shocks in the stellar wind, around the compact object, or at the magnetic poles of the neutron star? Does it take place in an optically thick or thin environment? Is it reprocessed by an ambient medium, a corona?
With the question of the emission source comes that of our capacity to resolve the various structures which cannot fail to manifest themselves through radiative emission in X-ray binaries. Any attempt at spatial resolution is ruled out, so small is the angular size of these systems compared with the limited angular resolutions our instruments can reach at high energies. There remain spectral [5: Here, a spectrum refers to the energy distribution of photons, not to the Fourier transform of a time signal.] and temporal resolutions.
The former is justified as soon as each structure is at a different temperature and one is able to distinguish thermal from non-thermal radiation [6: Thermal radiation has a black-body spectrum, centered on one wavelength, whereas a non-thermal spectrum is generally fitted by a power law.]. While the star and the inner regions of the disc do emit in quite distinct wavelength ranges in low-mass X-ray binaries, it is harder to ascertain this separation in high-mass X-ray binaries, where the star is hotter and the geometry of the accreted flow more uncertain. Moreover, other structures such as a jet can also contribute to the spectrum in varying intervals and proportions. Because the observed spectrum is integrated over a region of space where a multitude of structures with sometimes similar emission properties coexist, it is not always possible to identify each structure present. A widespread empirical approach consists in juxtaposing pre-defined emission zones (corona, disc, hot spot, etc.) whose parameters are adjusted to match the observed spectrum. The large number of variables at stake and the absence of physical justification for the choice of these structures make the exercise perilous; it is difficult to make sure that the identified combination is unique, or that the full range of possibilities relevant to the case at hand has been thoroughly explored, let alone circumscribed. The explanation is then valid but not necessarily satisfactory if an upstream physical model does not motivate the presence of the emitters needed to reproduce the spectrum. It happens that the robustness of the fits vouches not so much for validity as for degeneracy: by over-elaborating the procedure, one disengages the cause.
We shall see in a moment the privileged role numerical simulations can play in resolving this epistemological tension.
The very fact that the question of temporal resolution can be raised in X-ray binaries reflects the fascinating richness of the phenomena at work in these systems. The immutable Cosmos of the Ancients is advantageously replaced by objects whose diversity is matched only by their variability. X-ray binaries display a whole spectrum of photometric and spectroscopic variations, from the secular scales associated with stellar evolution (e.g. expansion of the envelope or change of the effective temperature of the photosphere) and orbital evolution (e.g. circularization or change of the orbital separation), down to scales short enough that they most likely correspond to events of spatial extension comparable to the dimensions of the compact accretor [7: Insofar as the speed of light sets an upper limit on the propagation speed of any information liable to produce a global variation of the emission properties, one can estimate the maximum size R of a zone displaying a temporal variation of characteristic time Δt via R ≲ cΔt.], possibly in its vicinity. Characterizing the temporal variability accessible to observing missions is a challenge which may allow us to distinguish between structures of large size and inertia, whose properties are regulated by the general parameters of the system, and more modest structures, perhaps multiple and due to the triggering of local instabilities of reasonable amplitude. The non-linear physics underlying the evolution of instabilities distinguishes itself by its capacity to couple various scales of space and time to feed the emergence, or even the sustenance [8: By instabilities we also mean limit cycles such as the Van der Pol oscillator, the prey-predator cycle of Lotka & Volterra or, possibly, the Q cycle in the hardness-intensity diagrams of some X-ray binaries.], of a phenomenon capable of modifying the modes of expression of the system. The multiplicity of couplings at stake and the difficulty, or even impossibility, of writing out analytical solutions also threaten our ability to produce satisfactory explanations, i.e. intelligible, causal and reproducible ones [9: These three conditions in principle guarantee the falsifiability of the argument: if the causal chain is deterministic (possibly in a statistical sense) and detailed enough to be followed and criticized step by step (intelligibility), then it is possible to identify secondary consequences of the main reasoning. Invalidating the latter would, by contraposition, invalidate the premises.].
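The light-travel argument of footnote 7 can be sketched numerically; this is a back-of-the-envelope illustration, not a computation from the manuscript:

```python
# Upper bound on the size of an emitting region from its variability
# timescale: any zone varying coherently on a time dt must satisfy
# R <~ c * dt, since no causal signal propagates faster than light.
C_LIGHT = 2.998e8  # speed of light [m/s]

def max_region_size_km(dt_seconds):
    """Largest region (in km) whose emission can vary globally on dt."""
    return C_LIGHT * dt_seconds / 1e3  # metres -> kilometres

# A millisecond flare constrains the emitter to ~300 km, comparable to
# the immediate vicinity of a neutron star or stellar-mass black hole.
print(max_region_size_km(1e-3))
```

This is why fast X-ray variability is such a powerful probe: it singles out the smallest, innermost structures of the accretion flow without any need for angular resolution.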
This thesis work is devoted to the study of the properties of the accretion process of a high-speed flow [10: Referred to as "wind accretion" in the remainder of this manuscript.] onto a compact object, as well as to the coupling of this phenomenon with the stellar and orbital properties in an X-ray binary where the wind comes from an OB supergiant. The convenient overlap between classification by mass transfer mode and by spectral type of the companion has shown its limits: some OB supergiants display accretion flows focused, or even amplified, toward the accretor, and some main sequence stars feed a compact companion via their winds. In this manuscript, we endeavor to build a homogeneous description of accretion flows, independent of the distinction between wind accretion and Roche lobe overflow. The question of the formation and characteristics of an accretion disc can then be raised without presuming the mass transfer mode. Our approach is semi-analytical, with models whose design obeys both physical principles and numerical constraints. Given the high Reynolds numbers we face, experimentation in the usual sense is not readily available. We prefer numerical experimentation (or simulation), which relies on the technological and algorithmic capacities of high performance computing to produce faithful pictures of physical solutions inaccessible to direct experimental manipulation. If the recourse to the symbolic universe of numbers frees us from the material constraints of the bench, it binds us to follow a code, a program whose language permeates not only the final statement but also the upstream thinking; one does not build a model in the same way depending on whether it is meant to be tested and explored on a processor or an accelerator.
To let oneself be intoxicated by the demiurgic potential of numerical simulations is to run the risk of losing the explanatory link with the physical subject, producing results which have nothing but themselves for object.
We first present a numerical representation of the Bondi-Hoyle-Lyttleton flow over a wide range of spatial scales, from the shocked wake to the vicinity of the compact object (Part II). The physical consistency of the results then leads us to introduce orbital effects found in X-ray binaries and the launching of the stellar wind by an OB supergiant (Part III). The elaboration of an elementary model allows us to explore the possible configurations and to bring out the essential physical parameters of such a system. Trends in the geometry and magnitude of the accretion flow then emerge.
Part I Physical objects & numerical tools
In this preliminary part, we introduce basic notions about the definition of a compact object and what makes its gravitational influence so special compared to other bodies of similar mass. Ultimate stages of stellar evolution, compact objects harbor physical conditions on the fringe of the fundamental laws currently at our disposal. Those cheeky collapsed remnants in an expanding Universe stand among the privileged stages on which to enlarge the realm of our imagination. They bridge the gap between the so-called "two infinities", with macroscopic objects displaying quantum behaviors such as neutron stars. For lack of an adapted theoretical framework and of accessible analogous objects to rely on, the way matter is organized in those objects is still subject to speculation. Even more challenging is the physical censorship which sets the close vicinity of a black hole out of straightforward observational reach: even a major technological leap forward would not be enough to make the contents of an event horizon directly observable.
This pessimistic statement can however be bypassed if one agrees to appeal to an intellectual work-around, similar to the one which formerly unveiled the physical origin of Thermodynamics. Unsatisfied with the answers of 19th-century Physics, the atomists, with Boltzmann at their head, resorted to a discrete representation of fluids, in spite of the impossibility for direct observations to back up the atomic hypothesis. The reluctance of the community toward corpuscular [11: Sealed by the accomplishments of Wave Optics and, later on, Electromagnetism.] approaches was reinforced by the apparent renouncements it seemed to involve: the numbers of particles at stake in available systems were so large that kinetic theories would bar Newtonian Mechanics from being of any concrete help. Statistical Physics could offer distributions but no singular prediction. The indeterministic cat [12: In this case, it is an indeterminism introduced by the modeling: the dynamics of a given particle is fully determined by the equations and the initial conditions, but our effective incapacity to follow it makes any singular prediction about this particle meaningless. A few decades later, another cat would engrave indeterminism in the physical laws themselves.] was among the positivist pigeons… The epistemological profitability of the atomic approach forgave this apostasy afterwards.
The observational inaccessibility of compact objects has fueled the inventiveness of modelers, theoreticians and observers for decades in an attempt to grasp their surroundings. Numerical physicists can now join the struggle by making a previously chimeric dream come true : designing essential ersatzes which encapsulate all the potentialities set by the physical laws encoded, and delegating to the computing facilities the responsibility to deploy a contingent seed into an effective system. Modern supercomputers enable us to push computing limits above levels unthinkable a decade ago. The scope of application of numerical simulations expands day after day and brings up fundamental questions about the very object of Science. The post-Newtonian theories of Physics plunged the observer into the formalism : by making the measurement a physical problem, Quantum Mechanics seized a notoriously informal domain, while General Relativity was shaking the convenient absoluteness of static space and time. Numerical simulations involve the physicist a step farther by making her summon virtual avatars of the world she intends to understand. The technical core of this PhD thesis thus deserved at least an overview to familiarize the reader with the everyday concerns of Numerical Astrophysics.
In the first chapter, we describe the intrinsic properties of accreting compact objects and illustrate their capacity to convert gravitational potential energy into radiation. It will be the occasion to shed some light on the privileged status of high energy astronomy to observe those systems. We then proceed with a description of several environments in which those objects are found, with an emphasis on X-ray binaries, the astrophysical counterpart of the model developed in the last part of this manuscript. The evolutionary questions pertaining to compact objects are postponed to Chapter 7 where we address the formation of X-ray binary systems. For the moment, we embrace a static approach to describe compact bodies, with an emphasis on their generic properties in nominal regimes whose characteristic drift time is large compared to the observational times of X-ray binaries. The numerical tools used to investigate those systems are briefly discussed in the last Chapter of this first part.
Chapter 1 Accreting compact objects
1.1 Compact objects
The theory of Relativity, glimpsed by Lorentz, Minkowski & Poincaré at the turn of the century, sketched by Einstein in 1905 and firmly established in 1915 by Einstein & Grossmann [1: Who brought a decisive mathematical support to extend Special Relativity (SR) into General Relativity (GR) in the early 1910s.], obeys a correspondence principle: it retrieves the results derived, and not experimentally invalidated [2: Given the levels of precision which can be reached in a given context, and in the sense of Popper's falsifiability theory.], in the Newtonian framework, which justifies the use of the latter in physical conditions qualified as "Newtonian" [3: The term "classical" is usually reserved for an opposition with a "quantum" framework. In this sense, GR is a classical theory of Physics.]. However, because it takes the transformations of the electric and magnetic fields under a change of frame at their word [4: See Semay and Silvestre-Brac (2010) for a wonderfully elegant derivation of the Lorentz transformations from symmetry principles and the need for causality alone.] (SR) and promotes space and time to the status of physical, dynamical objects [5: To quote Jean-Marc Lévy-Leblond, GR would be better called "Chronogeometry" (Lévy-Leblond, 2001).] (GR), Relativity enlarges the scope of our understanding of phenomena occurring in "relativistic" physical conditions, namely:
as the relative velocities at stake approach the speed of light [6: As explained in Semay and Silvestre-Brac (2010), this maximum velocity is actually that of a massless particle. Given the very constraining limits on the mass of the photon (Wu et al., 2016), consistent with zero mass, we can afford to call it the speed of light.] (see e.g. the jets of GRS 1915+105, Mirabel and Rodríguez, 1994).
as the interaction potential energy between components approaches the rest mass energy of those components (nuclear Physics and neutron stars, for example).
as the thermal energy density approaches the rest mass energy density (e.g. the Laser Mégajoule, with plasmas at temperatures of a few 100 MK).
as the gravitational potential differences (within the system considered, or between an observer and the subject) approach c² (e.g. the precession of Mercury over long periods, and the vicinity of compact objects).
All those situations can be understood as energetic comparisons with respect to the rest mass energy of the physical system considered.
In the physical topics covered in this PhD, gravity will play the leading role which makes the latter interpretation the most insightful. If we set the potential at infinity to zero, the difference between the Newtonian gravitational potential777Whose expression is obtained by solving the differential form of Gauss’ law for gravity outside a spherical distribution of mass density , of total mass and of spatial extension : (1.1) at the surface of a spherical astrophysical body and of an observer at infinity is given by, when compared to :
where and are the mass and the radius of the body respectively, the gravitational constant and is a dimensionless number called the compactness parameter : in agreement with the aforementioned exclusive regimes of application of Relativity, the closer gets to 1, the more pressing is the need for a relativistic framework. Note that relies on a non relativistic expression of the gravitational potential but it can be shown that below moderately high values of the compactness parameter (10%), its value is essentially unaltered by the use or not of Relativity888Beware, this comment must be considered with precautions when one gets interested in a secular effect of gr like the precession of Mercury : the instantaneous influence of gr over a Newtonian approach is vanishingly small but once integrated over a large number of dynamical periods, it plays a decisive role.. We can also interpret this parameter in terms of velocities by considering the Newtonian escape velocity at the surface of the body with respect to the speed of light :
which motivates the introduction of a characteristic radius, the Schwarzschild radius R_S (it is also the radius of the event horizon of a black hole, bh; the radius of the bh itself is set, by convention, to R_S, such that Ξ = 1 for a bh), such that for R < R_S, no particle, massive (by "massive" we mean "which has a measured mass strictly positive") or not, can escape the body since:

\[ R_S = \frac{2GM}{c^2} \qquad (1.4) \]
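These scalings are easy to check numerically. Below is a minimal sketch in Python (the constants are indicative CODATA-like values and the helper names are ours, not from the manuscript):

```python
# Order-of-magnitude check of the compactness parameter Xi = 2GM/(R c^2)
# for two of the objects of Table 1.1 (indicative values only).
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m s^-1
M_sun = 1.989e30     # kg

def schwarzschild_radius(M):
    """Radius R_S = 2GM/c^2 below which nothing escapes (eq. 1.4)."""
    return 2 * G * M / c**2

def compactness(M, R):
    """Dimensionless compactness Xi = R_S / R = (v_esc / c)^2."""
    return schwarzschild_radius(M) / R

Xi_sun = compactness(M_sun, 6.957e8)      # the Sun: ~4e-6, safely Newtonian
Xi_bh  = compactness(17 * M_sun, 50e3)    # 17 M_sun bh of radius 50 km: ~1
```

Note that R_S(M⊙) ≈ 3 km, so a 17 M⊙ black hole indeed has its horizon at ~50 km, consistent with Table 1.1.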
In 1.2.1, on the occasion of a discussion of the accretion phenomenon, we provide an energetic interpretation of the compactness parameter.
|Object|Radius (km)|Mass (M⊙)|Compactness Ξ|
|Stellar mass bh|50|17|1|
|Electron (classical radius, obtained by balancing the rest mass energy with the electrostatic binding energy)|2.8×10⁻¹⁸|4.6×10⁻⁶¹|∼10⁻⁴²|
Concerning the typical numerical values of Ξ, we provide a set of evaluations in Table 1.1 for different physical objects. From now on, any system with Ξ of the order of 10% or more will qualify as "compact". This restrictive definition (usually, one starts to speak about compact bodies as soon as the degeneracy pressure starts to play a role, which makes the white dwarfs, in cataclysmic variables for instance, belong to this category) confines the compact objects to neutron stars and black holes (more exotic theoretical objects such as boson stars (Grandclément et al., 2014) can also be considered as compact objects), the two main kinds of accretors studied in this PhD thesis. If those objects themselves must be studied through the prism of gr, their surroundings do not always require a relativistic treatment per se. Sometimes, a pseudo-Newtonian approach can be sufficient to grasp preliminary features. A handy way to implement this intermediate framework (initially introduced to model a Schwarzschild black hole, but extensions to a rotating black hole exist; see references in the review by Abramowicz and Fragile, 2013) is to write the gravitational potential produced by a point mass as the Paczyński-Wiita potential (Paczyńsky and Wiita, 1980):

\[ \Phi_{PW}(r) = -\frac{GM}{r - r_g} \]
where r_g is a degree of freedom which stands for the extension of the relativistic region, of the order of the Schwarzschild radius. The associated force is still central and conservative.
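The behaviour of this pseudo-Newtonian prescription can be illustrated in a few lines; the sketch below (our own illustration, with r_g set to the Schwarzschild radius as in the standard prescription) shows that the Paczyński-Wiita potential converges to the Newtonian one far from the mass and departs strongly from it close to r_g:

```python
# Paczynski-Wiita potential Phi_PW = -GM/(r - r_g) vs Newtonian -GM/r
# for a 10 M_sun point mass, with r_g taken equal to the Schwarzschild radius.
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30
M = 10 * M_sun
r_g = 2 * G * M / c**2   # ~30 km

def phi_newton(r):
    return -G * M / r

def phi_pw(r):
    return -G * M / (r - r_g)

# Relative deviation: vanishing far away, of order unity near r_g.
dev_far  = abs(phi_pw(1e4 * r_g) / phi_newton(1e4 * r_g) - 1)  # ~1e-4
dev_near = abs(phi_pw(3 * r_g)   / phi_newton(3 * r_g)   - 1)  # ~0.5
```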
All other forces neglected, any massive system is doomed to collapse over a dynamical timescale under the action of gravity. The shrinking of the radius at constant mass makes the compactness parameter rise, and it soon reaches relativistic values (Ξ → 1). The characteristic duration of this collapse for a system of mass M, initial radius R and density ρ can be derived using Kepler's third law applied to a test-mass on a degenerate orbit of eccentricity almost 1 and of semi-major axis R/2:

\[ t_{ff} = \sqrt{\frac{3\pi}{32\,G\rho}} \]
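This free-fall time depends only on the density, which the following sketch evaluates for two illustrative media (the molecular-cloud composition and density below are assumptions for illustration; the exact number depends on both):

```python
# Free-fall time t_ff = sqrt(3*pi / (32*G*rho)) for the mean solar density
# and for a dense molecular cloud core (indicative numbers).
from math import pi, sqrt

G   = 6.674e-11        # m^3 kg^-1 s^-2
m_H = 1.673e-27        # kg, hydrogen mass

def t_ff(rho):
    """Dynamical (free-fall) timescale of a uniform sphere of density rho."""
    return sqrt(3 * pi / (32 * G * rho))

t_sun   = t_ff(1.41e3)            # mean solar density 1.41 g/cm^3 -> ~30 min
rho_mc  = 1e6 * 1e6 * 2 * m_H     # ~1e6 H2 molecules per cm^3, in kg/m^3
t_cloud = t_ff(rho_mc) / 3.156e7  # in years: a few 1e4 yr for this density
```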
As visible in Figure 1.1, this dynamical time is always much smaller than the actual life expectancy of the system considered (apart maybe for collapsing molecular clouds with densities of a few million atoms per cubic centimetre, where the free-fall time is of the order of a few hundred thousand years). For the sake of the Newtonian world, the intercession of (effectively) repulsive forces guarantees the existence of a broad range of intermediate equilibria not doomed to enter the relativistic realm within a dynamical timescale. For instance, stars can hold as long as the thermonuclear fusion in their core sustains thermal (or, for the most massive ones, radiative) pressure to counterbalance gravity. Each time it runs out of nuclear fuel, after a lapse given by the nuclear timescale (which can be computed by estimating the efficiency of, e.g., the proton fusion and the rest mass of the combustible, typically 10% of the total mass, with respect to the stellar luminosity; the former requires following Gamow's elegant computation of the tunnelling effect associated to the fusion between protons; see, for German-speaking readers, Gamow (1928) and, for French-speaking readers, Daigne (2015)), the core contracts (while the outer layers expand) and the higher temperature reached triggers the ignition of an element of larger atomic number (see Aston's packing fraction curve to quantify the energy released, or needed beyond Iron, by the fusion of different nuclear elements), buying some additional time for the star (stellar evolution, albeit a fundamental motivation of this work and a permanent interpretation background of our results, will not be addressed in detail in this manuscript; the interested reader can get more details in the comprehensive manual by Prialnik, 2009).
At some point though, the core reaches the electronic (for wd) or neutron (for ns) degeneracy limit, when the distance between particles becomes of the order of the thermal de Broglie wavelength. If the electronic degeneracy pressure can take over and stabilize the wd, for a ns it takes the strong interaction between baryons to compensate for the gravitational field. As the outer layers bounce back on the surface of the newly formed ns, they are ejected at high speeds: a core-collapse supernova occurs. As specified in Table 1.1, the radii which result from these combined actions (other forces can also come into play, like the centrifugal force for rapidly rotating stars; Maeder, 2009) are well above the Schwarzschild radii of the objects (i.e. Ξ < 1); the stellar structure, for instance, can be appreciated up to a sophisticated level of understanding without a relativistic description. Nevertheless, if the mass of the collapsing core is too large, the strong interaction between baryons is not enough to stop gravity from bringing the compactness parameter to fully relativistic values: a black hole is formed, surrounded by an event horizon (the event horizon is an exclusive property of black holes; see the cosmic censorship hypothesis, which bans the possibility of a naked singularity, i.e. a singularity without event horizon. Visualizing it to confirm the bh nature of the bh candidates, by opposition to boson stars e.g., is one of the main goals of the contemporary Event Horizon Telescope (Ricarte and Dexter, 2014) and of the gravity mission (Eisenhauer et al., 2011)).
To summarize, in a final stand (whose modalities strongly depend on the mass of the star but also on its metallicity, its angular momentum…), the exhausted star collapses into either :
a white dwarf for initial stellar masses below ∼10 M⊙
a neutron star for initial stellar masses between ∼10 and ∼15-40 M⊙
a black hole for initial stellar masses above ∼15-40 M⊙
1.1.3 Population synthesis
Computing the expected populations of ns and bh in an environment (globular clusters, galactic center, etc) given the prior of its statistical properties (e.g. mean particle density, temperature, etc) requires preliminary insights on the following astrophysical questions.
As described in the previous section, we have at our disposal theoretical and numerical representations of the ultimate stages of stellar evolutionary tracks. However, the level of accuracy we can currently reach leaves room for large (with respect to the observational capacities) uncertainties on the properties of the progenitor. The question "which star yields which compact object?" must be answered with more robustness and precision to extrapolate the fate of a given set of stars.
If the initial stellar mass is the key parameter to appreciate stellar evolution to order-of-magnitude precision, observations have highlighted systematic biases from one population to another (for instance between stars of the Milky Way and of the peripheral galaxies smc (Small Magellanic Cloud) and lmc (Large Magellanic Cloud), the latter two displaying lower metallicities than our Galaxy; Wood et al., 1998) (see e.g. Demory et al., 2009, for low mass stars). It made the case for additional not-so-hidden parameters in the stellar models, such as the chemical composition, the angular momentum, the magnetic field, the binarity, etc. In particular, for high mass stars (i.e. above a few solar masses), the mass is believed to significantly decrease from its initial value over the stellar lifetime (in spite of the dramatically shorter life expectancy of massive stars), which itself must be narrowly determined to know when the compact object forms. Indeed, while the Sun emits a thermally driven wind (see Parker's model described in Lamers and Cassinelli, 1999) at a rate of ∼10⁻¹⁴ M⊙ yr⁻¹, massive stars undergo radiatively driven winds with much larger mass loss rates (see Chapter 6). This very non-conservation of the mass of the stellar bodies most likely to give compact objects emphasizes the need to surpass the classical models; describing the collapse of a star requires reliable progenitor models.
Initial Mass Function
The mechanism which leads from a cold and dense molecular cloud to a proto-star conditions the Initial Mass Function (imf), the initial mass distribution of a stellar population just formed; see the recent review by Kroupa et al. (2011) and the widely used imf given by Chabrier (2003). The underlying fragmentation of the turbulent interstellar medium is an ongoing topic of research which makes the most of multi-scale and multi-physics simulations (Abel et al., 2000). It is believed that this imf, which nowadays strongly favours low-mass stars in the Milky Way (∼97% of presently formed stars are below the ∼10 M⊙ threshold to ultimately form a ns and, a fortiori, a bh), was different in the past, with a bias towards much larger initial masses (with gravitational collapses of ∼100 M⊙ stars being possible physical sources of some of the observed hypernovae) (Schneider et al., 2002).
Star Formation Rate
The Star Formation Rate (sfr) indicates how fast free gas in the interstellar medium is converted into stars. Similarly to the imf, it can be constrained through the combined investigation of large scale numerical simulations (Vogelsberger et al., 2014) and galactic observations. Starburst galaxies suggest that this rate can episodically skyrocket, for instance due to galactic mergers (Genzel et al., 2010a).
Neutron stars are compact remnants of the iron core of massive stars whose collapse was stopped by the disruption of the nuclei into nucleons, the subsequent neutronization of the protons by electronic capture and the resulting repulsive force between baryons (related to the Pauli exclusion principle, since baryons are fermions; Martin, 2007). The average mass density of those objects exceeds the nuclear mass density (a few 10¹⁴ g cm⁻³) and the compactness parameter reaches a couple of 10%. Given the extreme conditions associated to this macroscopic object, whose structure must be studied using Quantum Chromodynamics in a relativistic framework, the equation of state of the ultra-dense matter it contains (well approached by a barotropic one) is still an active field of research. A strong observational constraint on the equation of state is the maximum mass of the neutron star (Chamel et al., 2013). Indeed, like the Chandrasekhar limit for wd, there is a mass above which stable ns cannot exist. The first computation of this upper limit was made by Tolman, Oppenheimer & Volkoff in 1939 (the tov limit) and although it gave a value too low to validate their model, it paved the way to more elaborate models of the interactions at stake in this exotic state of matter. Each of them gave birth to a mass-radius relationship (some of them being represented in Figure 1.3) and, more importantly, to a maximum mass which can be confronted with observations. The maximum masses derived all lie approximately between 1.6 and 3 M⊙; any body heavier than the latter is thus considered to be a black hole candidate, not a neutron star. (By the denomination "candidate", we want to emphasize the fact that the exclusive property of bh, the existence of an event horizon, has not been confirmed yet for any of the available candidates. However, the recent detection of two gravitational wave signals in spectacular agreement with the signals expected for a binary bh coalescence provides strong if not definitive support in favor of the existence of black holes; Abbott et al., 2016a, b.)
If ns have an initial surface temperature much higher than that of stellar photospheres, they quickly cool down (Figure 1.2; see also Zavlin, 2007, for a review on thermal emission from isolated neutron stars). Due to their small radii (∼10 km), their glare does not carry very far: the ns which are detected only from their surface thermal emission all lie within a few 100 pc. Indeed, let us take, as an indicative guideline, the sensitivity of a state-of-the-art satellite such as Kepler, although its nominal waveband is not the adequate one for the observation of a hot young neutron star. The Kepler satellite observes up to an apparent magnitude of 15 (with the flux of Vega, of the order of 2×10⁻⁵ erg s⁻¹ cm⁻², to scale it); as a reference for the bolometric limit, it corresponds to a conservative (in practice, for low flux objects, the issue is not the detectability itself but the physical back- and foreground luminosity levels, the "blends": the lower the luminosity of the source, the larger the relative contribution of the neighbourhood along the line-of-sight) flux detectability limit of approximately 10⁻¹¹ erg s⁻¹ cm⁻². Given Figure 1.2, it means that we can only detect very young (< 10,000 years) neutron stars at a kiloparsec, but we can go up to one million years at 100 pc. We count approximately 10 objects detected only through their surface thermal emission (including the so-called "Magnificent Seven"). Most isolated ns are actually observed thanks to their large magnetic field and rotation frequency, and are called radio pulsars (PSR B1919+21, a.k.a. LGM-1, for Little Green Man 1, being the first pulsar recognized as such; the pulsar lying at the core of the Crab Nebula is probably one of the most studied of them). If the magnetic and rotation axes are misaligned, the light emitted by the rotating dipole, beamed along the magnetic axis, sweeps a specific circular region of the sky, analogous to the way a lighthouse sweeps its beam of light around in a circle.
It taps the rotational kinetic energy of the solid body and induces a magnetic braking which is measured and consistent with this scenario (see Figure 1.4). Thus, isolated neutron stars (see Mereghetti, 2010, for a review on isolated neutron stars) do not have a rich enough environment to interact with to allow a robust measure of decisive parameters such as their mass.
Only a fraction of ns are expected to be found orbiting a stellar companion. Indeed, when the neutron star forms, the departure from isotropy of the explosion (see Scheck et al. (2004) and Scheck et al. (2006) for two-dimensional numerical simulations of neutrino-driven core-collapse supernovae and Wongwathanarat et al. (2010) and Wongwathanarat et al. (2013) for an extension to three-dimensional simulations) provides a non-zero kick velocity to the compact object with respect to its companion, possibly large enough to disrupt the binary system (Tauris and Takens, 1998, Janka, 2013, Grefenstette et al., 2014, Hirai et al., 2014). The spatial distribution of isolated pulsars in the Milky Way, with a larger dispersion around the Galactic plane than the gas and population I stars, supports this scenario. The couples which survive the supernova explosion are goldmines, since the mass transfer and the orbital modulation of the emission are additional phenomena where the neutron star properties play a major if not leading role: it becomes possible, provided the influence of the plentiful parameters can be disentangled, to trace back, for instance, the mass of the neutron star.
Eventually, as mentioned earlier, neutron stars present huge magnetic fields and short spin periods. Indeed, conservation of angular momentum and the flux-freezing theorem (Shu, 1992) guarantee, provided the fraction of the total mass ejected during the collapse remains modest:

\[ P_{co} = P_\star \left(\frac{R_{co}}{R_\star}\right)^2 \qquad B_{co} = B_\star \left(\frac{R_\star}{R_{co}}\right)^2 \]
where P, B and R refer to the spin period (associated to a spin angular velocity Ω = 2π/P), the magnetic field and the radius respectively of either the star (subscript ⋆) or the compact remnant (subscript co). If we take the solar orders-of-magnitude as guidelines, with a spin period of 25 days and a polar magnetic field of 1 G, applied to a 10 M⊙ progenitor, we get for the newly formed neutron star values of the order of P_co ∼ 10⁻¹ ms and B_co ∼ 10¹⁰ G.
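These scalings can be evaluated directly; the sketch below assumes a progenitor radius of 3 R⊙ for the illustration (this radius is our assumption, not a value from the manuscript):

```python
# Flux-freezing and angular-momentum scalings:
# P_co = P_star (R_co/R_star)^2  and  B_co = B_star (R_star/R_co)^2,
# applied with solar-like surface values to a 10 M_sun progenitor.
R_sun  = 6.957e8          # m
P_star = 25 * 86400.0     # s, solar-like spin period (25 days)
B_star = 1.0              # G, solar-like polar field
R_star = 3 * R_sun        # m, assumed progenitor radius (illustrative)
R_co   = 1e4              # m, neutron star radius

ratio = (R_star / R_co) ** 2   # ~4e10
P_co  = P_star / ratio         # ~5e-5 s: well below observed birth periods
B_co  = B_star * ratio         # ~4e10 G: near young pulsar field strengths
```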
If the magnetic field is consistent with what is observed for young X-ray binary pulsars (slightly below, according to Figure 1.4, which would suggest, for example, larger magnetic fields for more massive stars), the spin period is well below the actual initial spin periods at birth of those objects (of the order of a few 10 ms). Actually, with this spin period, the centrifugal force would disrupt the body. Indeed, if we compare the gravitational binding energy of an object of mass M and radius R to its rotational energy (obtained using the moment of inertia of a uniform sphere, I = (2/5)MR²), we get the following necessary condition to guarantee the integrity of the body at a fixed angular momentum J:

\[ \frac{J^2}{2I} \lesssim \frac{3GM^2}{5R} \quad\Longrightarrow\quad R \gtrsim \frac{25\,J^2}{12\,GM^3} \]
For the aforementioned numerical values applied to a 10 M⊙ star, the minimum radius is about one thousand times larger than the size of the neutron star. A more realistic approach would have been to consider only the core, or the central sphere containing the future mass of the ns. If we refer to Table 1 of Heger et al. (2005), who estimate the pre-supernova angular momentum of this sphere to ∼10⁴⁷ erg s, we have, if we assume a differential rotation for the core:

\[ P_{co} = \frac{2\pi I_{co}}{J} \sim \text{a few } 10\,\mathrm{ms} \]
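Both estimates can be reproduced numerically; in the sketch below, the progenitor radius (3 R⊙) and the core angular momentum (10⁴⁷ erg s, after the order of magnitude of Heger et al. 2005) are illustrative assumptions:

```python
# Centrifugal-integrity condition R >= 25 J^2 / (12 G M^3) for a uniform
# sphere (I = 2/5 M R^2, |E_g| = 3 G M^2 / 5R), plus the spin period of a
# ns inheriting a pre-supernova core angular momentum J ~ 1e47 erg s.
from math import pi

G, M_sun, R_sun = 6.674e-11, 1.989e30, 6.957e8
M_ns, R_ns = 1.4 * M_sun, 1e4

# (1) J of the whole 10 M_sun progenitor with solar spin (25 d, R = 3 R_sun):
I_star = 0.4 * 10 * M_sun * (3 * R_sun) ** 2
J_star = I_star * 2 * pi / (25 * 86400)
R_min  = 25 * J_star**2 / (12 * G * M_ns**3)   # ~1e3 ns radii: disruption

# (2) the more realistic pre-supernova core J ~ 1e47 erg s:
J_core = 1e47 * 1e-7                            # erg s -> J s (SI)
P_ns   = 2 * pi * (0.4 * M_ns * R_ns**2) / J_core   # ~ tens of ms
```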
We retrieve a more realistic value for the initial spin period of a ns. A first comment to be made, though, is that the iron core is believed to follow a rotation closer to solid-body than differential; more importantly, Blondin and Mezzacappa (2007) have shown that angular momentum is redistributed during the collapse because of the Standing Accretion Shock Instability (sasi, Blondin et al., 2003; see Foglizzo et al., 2011, for a spectacular analogue of this instability in the shallow water context and Kazeroni et al., 2015, for new insights on the spin-up of a ns during core-collapse).
Concerning the structure of the magnetic field around the neutron star, different models exist (Daigne, 2015). We expect the flow structure to be shaped by the magnetic field once it reaches the magnetosphere of the body, whose radius can be evaluated in the following way (see Frank et al., 2002, section 6.3). Balancing the ram pressure of the highly supersonic inflow with the magnetic pressure of a dipole of moment μ = BR³ localises the magnetosphere at a radius (a.k.a. the Alfvén radius) such that, for a magnetic field of magnitude B (in mksa units):

\[ r_M = \left( \frac{2\pi^2 \mu^4\, GM}{\mu_0^2\, L_{acc}^2\, R^2} \right)^{1/7} \]
where the accretion luminosity L_acc = GMṀ/R, representative of the X-ray luminosity of the system, is introduced in section 1.2.2. Within this region, the dynamics of the flow is dominated by the coupling between the ionized gas and the magnetic field. The plasma is channelled along the field lines, which breaks up any geometrically thin and optically thick structure of the flow (see section 1.2.2). The impact of the inflow on the polar caps of the neutron star obeys column accretion models (for more details, see the seminal paper by Davidson and Ostriker, 1973, along with references in R.-S. Bamberg's course).
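An order-of-magnitude evaluation for a young accreting pulsar can be sketched as follows (the field strength and accretion rate below are illustrative; the formula is the standard ram-pressure vs dipole-pressure balance, written with the mass accretion rate Ṁ rather than L_acc):

```python
# Magnetospheric (Alfven) radius from ram pressure = dipole magnetic pressure:
# r_M = [2 pi mu^2 / (mu_0 Mdot sqrt(2 G M))]^(2/7), SI units.
from math import pi

G, M_sun, mu_0 = 6.674e-11, 1.989e30, 1.2566e-6

M    = 1.4 * M_sun              # kg
R    = 1e4                      # m, ns radius
B    = 1e8                      # T (= 1e12 G), assumed surface field
mu   = B * R**3                 # dipole moment, T m^3
Mdot = 1e-9 * M_sun / 3.156e7   # kg/s, assumed accretion rate

r_M = (2 * pi * mu**2 / (mu_0 * Mdot * (2 * G * M) ** 0.5)) ** (2.0 / 7)
# r_M ~ 3e6 m, i.e. a few hundred ns radii: the inner flow is magnetically shaped
```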
This solution of gr has long been considered as merely theoretical until accretion (by stellar mass bh such as Cyg X-1, but also by super-massive bh in active galactic nuclei; Beckmann and Shrader, 2012) and, later on, the stellar orbits in the vicinity of the Galactic center Sgr A* (Genzel et al., 2010b), brought it back into a more meaningful spotlight. We now believe that bh are the most likely option when a compact object above a few solar masses (above the maximum mass of a ns; see previous section) is identified. Stellar mass bh (below a few 10 M⊙) are the remnants of collapsed cores of massive stars, while Super-Massive Black Holes (smbh, with masses above ∼10⁵ M⊙) are believed to lie preferentially at the center of galaxies. The formation of smbh, by direct collapse of gas or by agglomeration of stellar mass bh and runaway accretion, is still an active matter of debate, and so is the puzzling scarcity of intermediate-mass bh (imbh; van der Marel, 2003): although none has been definitely confirmed, super-Eddington (see section 6.1.2 in Chapter 6 for the Eddington luminosity) accreting systems such as Ultra and Hyper Luminous X-ray sources might host the long awaited imbh (Webb et al., 2014). Their unity-value compactness makes bh unique laboratories not only to contemplate gr at work (the deformation of the Iron 6.4 keV emission line for instance (Fabian et al., 2000) encapsulates several effects such as the relativistic Doppler boosting and the gravitational reddening, but also the spectacular rotational frame dragging by the spinning bh (Ingram and Done, 2012), a.k.a. the Lense-Thirring effect; Gourgoulhon, 2014) but also to study strong-field gravity and lay the empirical foundations of the next physical theories. The no-hair theorem guarantees that a bh is entirely defined by only three parameters:
its mass. It sets the Schwarzschild radius given by (1.4), which corresponds to the radius of the event horizon for a non-rotating bh (Schwarzschild metric).
its spin. When it is not zero (i.e. when we deal with a Kerr bh), an intermediate region outside of the event horizon exists called the ergosphere. This domain is believed to play a major role in the launching of the jets observed in quasars and microquasars (see e.g. the mechanism introduced by Blandford and Znajek, 1977).
its electric charge, a parameter often considered as marginal due to the absence of observed processes able to pile up electric charge onto a bh.
Spin measurements, albeit still discussed within the community, have started to blossom (Reynolds, 2014) to unveil the second key parameter of bh (gravitational waves now offer a new way to measure the bh spin; Abbott et al., 2016a, b). They invite us to think that rapidly rotating bh (i.e. those for which the angular momentum is close to a theoretical limit) are not rare. An important feature of rapidly rotating bh which impacts the accretion phenomenon, and in particular wind accretion, is the existence of an innermost stable circular orbit (isco) whose position differs for a co-rotating and a counter-rotating disk (which is equivalent, from the point of view of the accreted flow, to switching the sign of the bh spin) around a Kerr bh (see Figure 1.5 and the applet designed by Leo C. Stein). Below this orbit, the disk structure can no longer subsist and the flow quickly spirals in. Since most of the light emitted by an accretion disk comes from its inner parts (see section 1.2.2), and if the inner edge of the disk matches the isco (it is also possible, as suggested by Yamada et al. (2013) to account for the low/hard states of Cygnus X-1, that the inner disk is truncated well beyond the isco, which would jeopardize the analysis sketched here), switches between the two regimes of rotation imply observationally distinguishable spectra and luminosities. The main trigger of this swing could be a large scale hydrodynamical instability called the "flip-flop" instability (although it is sometimes compared to the oscillations running along a flag or to the von Kármán vortex street (Livio et al., 1991), one must keep in mind that in the present case (i) the geometry is spherical and three-dimensional and (ii) the conditions at the surface of the accretor are not trivially reflecting ones; see section 5.2 in Chapter 5), illustrated in Figure 1.6 (Foglizzo et al., 2005).
1.2.1 The Kelvin-Helmholtz conversion process
The ubiquity of gravitation in Astrophysics promotes accretion to the rank of unavoidable mechanism. In this manuscript, we use the word "accretion" to refer to the fall of a continuous medium (gas or plasma) in a gravitational potential produced by an external (self-gravity of the accreted flow can be neglected in X-ray binaries; the skeptical reader can verify it by using equation (5.49) of Frank et al. (2002) to evaluate the mass stored in a disk, for instance, and comparing it to the mass of the accretor) massive body. Before considering a proper fluid, we compute the change in the specific (applied to an energy, this adjective means "per unit mass" in this manuscript) gravitational potential energy of a test-mass which goes from infinity to the surface of the compact object of radius R:

\[ \Delta e_{pot} = -\frac{GM}{R} \]
Since the mechanical specific energy of the test-mass is conserved, this decrease in gravitational potential energy results in an increase in the specific kinetic energy of the particle of the order of, for a particle with a velocity at infinity negligible compared to the one at the surface of the object:

\[ \Delta e_{kin} = \frac{GM}{R} = \frac{\Xi}{2}\,c^2 \]
This rise in specific kinetic energy is of the order of the specific rest mass energy of the test-mass (c²) for a compact object (once again, this Newtonian argument is qualitatively unchanged in the relativistic framework): a few 10% of the rest mass energy of the particle has been converted into kinetic energy, which makes accretion onto a compact object the most efficient energy conversion process (in comparison, nuclear fusion of Hydrogen into Helium peaks at 0.7%; see the aforementioned Gamow derivation). In a fluid, this sudden surge in velocity goes with a heating of the flow and thus with a significant emission of light (see section 1.2.2). We then conveniently define the accretion luminosity as the maximum luminosity associated to an object of compactness Ξ accreting at a mass accretion rate Ṁ:

\[ L_{acc} = \frac{GM\dot{M}}{R} = \frac{\Xi}{2}\,\dot{M}c^2 \qquad (1.15) \]
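For a typical accreting neutron star, this efficiency argument can be made concrete (the accretion rate below is an illustrative value):

```python
# Accretion luminosity L_acc = G M Mdot / R = (Xi/2) Mdot c^2 for a ns
# fed at 1e-9 M_sun per year (illustrative numbers).
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

M, R = 1.4 * M_sun, 1e4               # kg, m
Mdot = 1e-9 * M_sun / 3.156e7         # kg/s

L_acc = G * M * Mdot / R              # W
L_cgs = L_acc * 1e7                   # erg/s, ~1e37
eff   = L_acc / (Mdot * c**2)         # = Xi/2 ~ 20%, vs 0.7% for H fusion
```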
Part of the gravitational potential energy can also be reinjected into the jets (kinetic energy) or "swallowed" by the bh, since the energy carried by the flow, as it crosses the event horizon, takes an infinite amount of time to be released for a distant observer (energy is effectively trapped, including under its radiative form, because of the relativistic gravitational reddening); see the Advection Dominated Accretion Flow model for instance (Narayan et al., 1998).
This energy conversion process was first designed well before the relativistic notion of compactness bore any physical relevance. In the XIXth century, the rising interest in Geology brought up the question of the age of the Earth and, by extension, of the Sun, whose formation is believed to be concomitant (see the Laplace nebular theory of planetary formation; Woolfson, 1993). Kelvin & Helmholtz suggested that the luminosity of the latter was due to the gravitational contraction of the whole body, which provides the following age estimate, for a constant luminosity through time:

\[ t_{KH} \sim \frac{|E_g|}{L_\odot} \]
where E_g is the gravitational binding energy of the Sun today. For a uniform sphere of gas of mass M and radius R, the gravitational binding energy can be derived by considering the successive addition of layers of infinitesimal mass dm:

\[ E_g = -\int_0^M \frac{G\,m}{r(m)}\,dm = -\frac{3}{5}\frac{GM^2}{R} \]
Using the virial theorem (Daigne, 2015), it can be shown that up to half of this energy can be radiated as the body contracts (the other half is added to the internal energy of the body), which gives an estimate of the age of the Earth:

\[ t_{KH} \approx \frac{3\,GM_\odot^2}{10\,R_\odot L_\odot} \approx 10^7\,\mathrm{yr} \]
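The numerical estimate is straightforward to reproduce:

```python
# Kelvin-Helmholtz age estimate: half of |E_g| = 3 G M^2 / (5 R) radiated
# at the present solar luminosity (indicative constants).
G     = 6.674e-11      # m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg
R_sun = 6.957e8        # m
L_sun = 3.828e26       # W

E_g   = 3 * G * M_sun**2 / (5 * R_sun)      # ~2e41 J
t_KH  = 0.5 * E_g / L_sun                   # s
t_Myr = t_KH / (3.156e7 * 1e6)              # ~10 Myr: far below geological ages
```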
This estimate was already incompatible with the geological and Darwinian timescales required to explain the observations available at that time, which advocated in favour of larger (and, eventually, more realistic) ages. Nuclear power turned out, a few decades later, to be a more realistic process to account for stellar luminosities, at least on the main sequence. Nevertheless, the Kelvin-Helmholtz mechanism is believed to be the main culprit to explain the luminosity of proto-stars (before the core reaches temperatures high enough to switch on thermonuclear fusion, ∼10⁷ K for Hydrogen, the cloud keeps collapsing under the action of gravity).
1.2.2 Light emission
We have seen that the luminosity of accreting compact objects can reach ∼10³⁸ erg s⁻¹ (a few 10⁴ L⊙), provided they are fed significant amounts of gas (∼10⁻⁸ M⊙ yr⁻¹). We leave the question of the source of matter for Chapters 2 and 7 and portray the case of the steady α-disk model introduced by Shakura and Sunyaev (1973) (α refers to a degree of freedom of the problem which stands for an effective viscosity due to the turbulence triggered, for instance, by the magneto-rotational instability (Balbus and Hawley, 1991); it is a necessary feature to guarantee the transport of angular momentum outwards and, by then, the possibility for matter to spiral in; indeed, molecular viscosity can be shown to be far too small to account for this outward transport; see section 4.7 of Frank et al., 2002): the angular momentum of the accreted flow is high enough to make the inflow form a geometrically thin and optically thick disk around the accretor. This landscape, which serves as a reference for accretion theory, will not be described in detail in this manuscript but is precisely documented in Frank et al. (2002). The computation we carry out in this section also owes much to J. N. Winn's course 8.901 given at mit during the second term of year 2011-2012.
Luckily enough, we will see in a moment that most of the light emitted by this structure turns out to be, when the accretor is a compact object, in the X-ray domain, well above the energy range of the stellar spectrum of the companion in an X-ray binary for instance. The disk itself is not the only structure expected around a compact accretor (the system can display, among others, jets (Markoff et al., 2003), internal shocks (Cohen et al., 2014), a stellar and a disk wind, a corona, or a hot spot in stream-dominated mass transfers), in particular for a low angular momentum flow, nor the only one to radiate high energy photons. However, the α-disk model is a convenient framework which has proved useful to explain a substantial number of observational configurations and, more specifically, the multi-color black body observed in many cases (see next section for synthetic spectra). In Chapter 5 of Frank et al. (2002), it is shown that the effective black body temperature of a disk at a distance r from an accretor of mass M accreting at a rate Ṁ is given by:

\[ T(r) = \left\{ \frac{3GM\dot{M}}{8\pi\sigma r^3}\left[1-\left(\frac{r_{in}}{r}\right)^{1/2}\right] \right\}^{1/4} \qquad (1.19) \]
where σ is the Stefan-Boltzmann constant and r_in is the position of the inner edge of the disk. That being said, we can start to quantify more rigorously (in the previous section 1.2.1, we used a test-mass but did not address the flow as a continuous medium) the accretion luminosity associated to this configuration. If the outer edge of the disk is much farther away from the accretor than the inner edge, we have:

\[ L_{disk} = \int_{r_{in}}^{r_{out}} \sigma T^4(r)\; 2\times 2\pi r\, dr \simeq \frac{GM\dot{M}}{2\,r_{in}} \qquad (1.20) \]
where we summed over all the black body rings of infinitesimal surface 2 × 2πr dr (the factor 2 because the disk has an upper and a lower face) and where r_out refers to the outer border of the disk. Without surprise, if the disk extends down to the surface of the accretor (i.e. r_in = R), we retrieve the standard expression of the accretion luminosity given by equation (1.15), the factor 1/2 being due to the fact that we now work on a system which has an internal structure (and energy), i.e. where thermodynamics and the virial theorem preclude that more than half of the variation in gravitational potential energy goes into the bulk motion of the flow. A decisive feature of those disks, at least from an observational point of view, is that most of the luminosity comes from the inner parts of the disk. Indeed, if we compute the ratio of the luminosity emitted by the disk below a certain radius r, i.e. the cumulative luminosity, to the total luminosity, we get:

\[ \frac{L(<r)}{L_{disk}} = 1 - 3\,\frac{r_{in}}{r} + 2\left(\frac{r_{in}}{r}\right)^{3/2} \]
In Figure 1.7, we notice that wherever the inner edge of the disk is, a fifth of the total light is emitted within 2 r_in, half within 4 r_in and 90% within 30 r_in.
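This concentration of the emission can be checked directly from the cumulative fraction obtained by integrating σT⁴ over both disk faces with the temperature profile (1.19); a minimal sketch:

```python
# Cumulative luminosity fraction of a thin alpha-disk:
# L(<r)/L_disk = 1 - 3 (r_in/r) + 2 (r_in/r)^(3/2),
# the closed form of int 4 pi r sigma T^4 dr with the zero-torque profile.
def lum_fraction(x):
    """Fraction of the total disk luminosity emitted below r = x * r_in."""
    return 1 - 3.0 / x + 2.0 * x ** -1.5

f2, f4, f30 = lum_fraction(2), lum_fraction(4), lum_fraction(30)
# ~0.21 within 2 r_in, 0.5 within 4 r_in, ~0.91 within 30 r_in
```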
A classical preliminary remark about the spectral range of the light emitted by the accretion process is to distinguish the optically thick case, treated in more detail below, from the optically thin case. In the latter configuration, where the disk is secondary if not absent, the emission is simply associated to the thermal energy of all the particles (of mean weight per particle of the order of the mass of a proton m_p), related to the change in gravitational potential energy via the virial theorem:

\[ k_B T \sim \frac{GM m_p}{3R} \]
where k_B is Boltzmann's constant. As mentioned in section 1.1.4, the diversion of the rotating flow within the magnetosphere of a neutron star can lead to the accretion of an optically thin flow capable of accounting for very high energy photons. This gamma-ray emission is indeed observed in several systems but is not always believed to be the main source of light, which often finds its origin in the process we now describe.
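As a hedged numerical illustration of the optically thin case (taking 3 k_B T ∼ G M m_p / R from the virial theorem, with illustrative values for a neutron star):

```python
# Virial temperature of an optically thin flow near a neutron star surface,
# with a mean weight per particle ~ m_p; cgs units, illustrative values.
G, k_B, m_p = 6.674e-8, 1.381e-16, 1.673e-24
M = 1.5 * 1.989e33      # neutron star mass [g]
R = 1e6                 # 10 km [cm]

T_vir = G * M * m_p / (3 * k_B * R)
kT_MeV = k_B * T_vir / 1.602e-6     # 1 MeV = 1.602e-6 erg
print(f"T ~ {T_vir:.1e} K, i.e. k_B T ~ {kT_MeV:.0f} MeV")
```

With these numbers k_B T reaches tens of MeV, which is why such a flow can account for the very high energy photons mentioned above.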
Let us compute the spectrum itself of the source (an advanced way to do what we just sketched here is provided by the code xspec - Arnaud, 1996), assuming that the emitted spectrum at the surface of the disk is not altered by any other structure (in practice, the emitted photons are reprocessed in the hot corona surrounding the disk ; once the reflection on the disk is studied, this can be used to estimate the inner radius of the disk and, by then, the spin of the bh - Reynolds, 2014). The radiative theory notions used here are introduced and developed in great detail in Rybicki and Lightman (1981). The specific intensity B_ν (in erg s⁻¹ cm⁻² Hz⁻¹ sr⁻¹) of the disk surface follows Planck's black body photon frequency distribution :
B_ν(T) = (2hν³/c²) / (e^(hν/k_B T) − 1)
where T = T(R) is given by equation (1.19).
The associated specific luminosity (i.e. luminosity per frequency unit, in erg s⁻¹ Hz⁻¹) of an annulus at radius R (for both sides) is :
dL_ν = 2 × 2πR dR × π B_ν(T(R)) = 4π² R B_ν(T(R)) dR
where hν is the photon energy, T(R) is still the temperature profile of equation (1.19) and :
L_ν = 4π² ∫ R B_ν(T(R)) dR (from R_in to R_out)
Spectra corresponding to the typical numerical values for a disk around a ns (of mass 1.5 M_⊙) orbiting a stellar companion (and accreting at a given rate in M_⊙ yr⁻¹) have been sampled and represented in Figure 1.8. The left column corresponds to the case of a disk of smaller radial extension (i.e. lower angular momentum) than the right column, where the outer edge is of the order of the radius of the Roche lobe (see Chapter 7). Three distinct regimes can be seen : a first fast rise (corresponding to the low-energy Rayleigh-Jeans tail of the outermost black body emission at a temperature T_out), a flat multi-color black body component (made of the piling up of black bodies with temperatures between T_out and T_in) and an exponential cut-off beyond the energy corresponding to the innermost temperature T_in. Notice that given its low luminosity, the position of the outer edge has little influence on the spectrum (since it does not modify the high energy part) : most of the time, it is not observationally constrained directly with the spectrum (but rather with a hot spot or with the theoretical upper limit of 70% of the Roche lobe radius, a.k.a. the disk tidal truncation radius given by Paczynski, 1977). Indeed, even if the spectrum is at the same level of specific intensity, the actual luminosity (i.e. after having integrated over frequencies ; the frequency range is much narrower for the radiation emitted by the outer edge, which is a reason why observers sometimes prefer to plot the product of the frequency by the specific flux) is several orders of magnitude smaller than the portion associated with the high energy part of the spectrum (hence a total luminosity in equation (1.20) which essentially depends on the position of the inner edge of the disk, not on the outer edge).
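The three regimes can be reproduced with a small numerical sketch (illustrative values only; the simplified temperature profile T(R) = T_in (R/R_in)^(-3/4) assumed here neglects the no-torque correction at the inner edge):

```python
import numpy as np

# Multicolor black-body spectrum of a thin disk, cgs units.
h, k_B, c = 6.626e-27, 1.381e-16, 2.998e10
T_in, R_in, R_out = 1e7, 1e6, 1e9    # [K], [cm], [cm] (illustrative)

def B_nu(nu, T):
    """Planck specific intensity [erg/s/cm^2/Hz/sr]."""
    x = np.minimum(h * nu / (k_B * T), 700.0)   # avoid overflow in exp
    return 2 * h * nu**3 / c**2 / np.expm1(x)

def L_nu(nu):
    """Specific luminosity of both disk faces: integral of 2*2piR*piB_nu dR."""
    R = np.geomspace(R_in, R_out, 4000)
    f = 4 * np.pi**2 * R * B_nu(nu, T_in * (R / R_in) ** -0.75)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(R))   # trapezoidal rule

for nu in (1e13, 1e16, 3e16, 1e19):
    print(f"nu = {nu:.0e} Hz : L_nu ~ {L_nu(nu):.2e} erg/s/Hz")
```

The printed values rise at low frequency, follow the quasi-flat (∝ ν^(1/3)) multicolor segment in between, and collapse beyond k_B T_in / h ≈ 2×10¹⁷ Hz.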
A more visible feature is the cutoff at the inner edge of the disk, which moves from 10 km (≈ the surface of a ns) on the upper row to 1,000 km (≈ the surface of a white dwarf or the ns magnetosphere - see section 1.1.4) on the lower row, where the emission in X-rays is vanishingly small ; since the stellar companion in an X-ray binary emits partly in the ultraviolet (especially for a hot companion in a High-Mass X-ray Binary, see section 2.1.2 in the next chapter), it challenges our very capacity to distinguish the contribution of the disk, in addition to it being on the whole less luminous according to equation (1.20) for the total luminosity.
A final comment before presenting accreting compact objects in their environment : as suggested by Figure 1.5, if the position of the inner edge of the disk follows the one of the isco, a sudden swing in the angular momentum of the flow which feeds the disk would have dramatic effects around a maximally rotating bh. Indeed, the inner border would then be almost ten times farther away from the accretor when the disk is counter-rotating (with respect to the bh spin) than when it is co-rotating. Photometrically, it would correspond to a luminosity almost ten times smaller (like for an eclipsing system though, dilution effects play a role if there are other X-ray sources in the system, such as a corona around the disk, usually fitted by a power-law ; as usual, the integrated luminosity is merely a hint of what the full light distribution, i.e. the spectrum, looks like) while spectroscopically, the cutoff drops by a factor of a few for an inner edge rising from 10 to 100 km.
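The factor of almost ten quoted above can be checked against the standard isco radius formula of Bardeen, Press and Teukolsky (1972) (a sketch; a negative spin parameter encodes a counter-rotating disk):

```python
import math

# ISCO radius in units of G M / c^2 from Bardeen, Press & Teukolsky (1972);
# a > 0 : disk co-rotating with the bh spin, a < 0 : counter-rotating.
def r_isco(a):
    z1 = 1 + (1 - a * a) ** (1 / 3) * ((1 + a) ** (1 / 3) + (1 - a) ** (1 / 3))
    z2 = math.sqrt(3 * a * a + z1 * z1)
    sign = -1.0 if a >= 0 else 1.0
    return 3 + z2 + sign * math.sqrt((3 - z1) * (3 + z1 + 2 * z2))

print(r_isco(0.0))     # 6.0 : Schwarzschild
print(r_isco(1.0))     # 1.0 : maximally rotating, co-rotating disk
print(r_isco(-1.0))    # 9.0 : maximally rotating, counter-rotating disk
```

The counter-rotating inner edge hence sits 9 times farther out than the co-rotating one for a maximally rotating bh.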
Chapter 2 X-ray sources
Historically, the first extrasolar X-ray source discovered by Giacconi et al. (1962), Sco X-1, turned out to be an accreting compact object in a binary system. Isolated compact objects generally display signals too quiet to be detected with the current technology but, in interaction with their environment, a rich phenomenology blossoms. In a first section, we introduce the physical stage which is the central one in this manuscript, the X-ray binaries. The vast family of the X-ray binaries has long ramified into observational and physical branches that we will briefly summarize. The accent will be put on successive dichotomies which single out the archetype of systems where wind accretion occurs, the Supergiant X-ray Binaries. Concerning the X-ray sources not related to binary systems, the second half of this chapter is devoted to a few astronomical objects likely to be well represented by some of the numerical setups we designed and for which the present numerical work might be relevant.
2.1 X-ray binaries
2.1.1 The accretor
As explained in section 1.2.2, the accretor has to be a compact object to account for such an emission of high and very high energy photons (i.e. X-rays and beyond). The accreting body plays a role through its gravitational influence and is hardly ever directly responsible for the radiation we observe : what we witness is the emission of the flow being accreted. This is why it has been suggested that the environment of the compact object could serve as a probe to investigate its Dantean neighborhood and maybe unveil unexpected behaviour in the strong gravity regime. However, such an aim can only be reached if we manage to delimit the variability inherent to the inflow and dig for systematics behind the scenes.
The vast majority of compact accretors are neutron stars (the first two suspected candidates were the accretors in the Roche lobe overflow high and intermediate mass X-ray binaries, respectively Cen X-3 and Her X-1 ; the few accreting black hole candidates will be mentioned in 2.1.5), a diagnosis which can rely on several arguments. First, many of those neutron stars manifest themselves as accreting pulsars (in particular in high mass X-ray binaries - see section below), not to be confused with radio pulsars which tap their emission energy from the ns rotation. The magnetic field is large enough (typically 10¹² G) to funnel the accreted gas onto the magnetic poles where it produces tiny regions of bright and high energy emission (see the accretion columns in Pringle and Rees, 1972, Davidson and Ostriker, 1973). As they spin with the neutron star, since the magnetic field is not aligned with the spin axis, the amplitude of the emission is modulated and pulses at the neutron star spin period appear in the light signal. The magnetic field can even be evaluated, validating the above scenario, by using the cyclotron resonant scattering absorption lines observed in those systems (they are caused by the scattering of hard X-ray photons on electrons whose energy is quantized by the magnetic field according to the Landau levels - see e.g. Wasserman and Shapiro (1983) for the theoretical framework and Walter et al. (2015) for an overview of observations of this feature in Galactic hmxb). Second, the dynamically deduced masses are consistent with the conclusions drawn by core-collapse models and the maximum masses set by the available equations-of-state for the ultra-condensed matter of ns. Finally, spectroscopic arguments can be used to differentiate an accreting ns from an accreting bh. The former is characterized by a power-law of photon index 0.3 to 2 and a high energy exponential cutoff, with cyclotron resonance scattering features at higher energies (Walter et al., 2015).
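The field evaluation mentioned above rests on the fundamental cyclotron energy E = ħ e B / (m_e c); a quick numerical check (a sketch with cgs constants):

```python
# Fundamental cyclotron line energy E = hbar e B / (m_e c) in cgs units.
hbar, e_ch, m_e, c = 1.055e-27, 4.803e-10, 9.109e-28, 2.998e10
keV = 1.602e-9   # erg

def E_cyc_keV(B_gauss):
    return hbar * e_ch * B_gauss / (m_e * c) / keV

print(f"B = 1e12 G -> E_cyc ~ {E_cyc_keV(1e12):.1f} keV")   # ~11.6 keV
```

A line observed at a few tens of keV thus points to a surface field of a few 10¹² G.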
2.1.2 hmxb & lmxb
The two main families of X-ray binaries depend on the mass of the companion star :
Low mass X-ray binaries (lmxb) : the companion star is of low mass (at most a couple of solar masses but more generally below one solar mass). The corresponding spectral type is A or later.
High mass X-ray binaries (hmxb) : the companion star is of high mass (above several solar masses, usually above 10 M_⊙). The corresponding spectral type is B or earlier.
lmxb thus host older stars than hmxb, due to the slower evolution of lower mass stars. While the age of the star is typically less than 10 million years in hmxb, lmxb host stars older than a few billion years (see section 7.2 for a more detailed discussion on the secular evolution of a stellar binary system into an X-ray binary ; in particular, the remarkable bimodality of the stellar mass distribution of those systems, with few intermediate mass X-ray binaries, is discussed there). Since the hmxb are younger, the magnetic field of the ns they host (if the accretor is not a bh) is still large. The accretion columns are brighter and more extended and a pulsar is more likely to be detected than in lmxb, where the magnetic field is lower.
Concerning their spatial distribution in the Milky Way, hmxb appear very close to the Galactic plane while lmxb have a broader Galactic latitude distribution. If the more venerable age of lmxb partly explains this discrepancy, it is not a sufficient reason, as emphasized by Podsiadlowski (2014). Indeed, the main element is the kick provided to the system by the non spherical collapse of the naked Helium core into a neutron star. For systems which remain bound after the associated Supernova explosion, it results in larger kick velocities for lower mass systems, i.e. the lmxb. In addition, because of the longer lifetime of low mass stars, a lmxb will be able to travel farther away than a hmxb. Brandt and Podsiadlowski (1995) showed that X-ray binaries born in the Galactic plane with realistic mass distributions reproduce the observations well when the same ns kick velocity distribution is adopted for all binaries. For the spatial distribution within the Galactic plane, Coleiro and Chaty (2013) used a spectral energy distribution fitting procedure in optical and near infrared to derive the distance and absorption of a statistical sample of hmxb. As visible in Figure 2.1, there is a correlation between the position of the hmxb and the stellar formation zones (e.g. the spiral arms).
Another feature concerns the mass transfer from the star to the compact companion, responsible for the bright X-ray emission. Most of the time, the spectrum is softer in lmxb (with photon energies ≲ 10 keV) than in hmxb (≳ 15 keV), which invites us, according to the arguments developed in section 1.2.2, to support the existence of a robust and large optically thick, geometrically thin disk-like structure around the compact object in lmxb, though not necessarily in hmxb. The presence of this disk in lmxb is supported by the mechanism of Roche lobe overflow (rlof) described in Chapter 7, more likely to be stable in lmxb than in hmxb. In the latter, wind accretion (see Chapter 8) is believed to be the main culprit for mass transfer, with a much lower subsequent angular momentum for the inflow.
Finally, jets have been observed, mostly in lmxb but also in hmxb (Cygnus X-1, Cygnus X-3 and SS433). The compact accretor can be either a bh (e.g. GRS 1915+105) or a ns (e.g. Sco X-1). Different mechanisms exist to explain those ejections of matter, highly beamed and at velocities sometimes of the order of the speed of light : an electromagnetic equivalent of the Penrose extraction process to tap the spin energy of Kerr bh in the ergosphere (Blandford and Znajek, 1977) or a wind from a magnetized accretion-ejection structure (Blandford and Payne, 1982, Casse and Keppens, 2002). Those systems are called microquasars, a name which refers to a subfamily of active galactic nuclei, the quasars.
2.1.3 Sgxb & Bexb
Among hmxb, an additional distinction is made between two main categories depending on the nature of the high mass star (see also the review by Chaty, 2011). Indeed, until the integral satellite (INTErnational Gamma-Ray Astrophysics Laboratory, launched in 2002), most observed hmxb hosted a specific kind of B star (more precisely, the star is generally between O9 and B3) characterized by high spin rates, many strong Balmer lines in emission and an infrared excess (Coe, 1999, Porter and Rivinius, 2003, Reig, 2011). They are called Be stars (where the e stands for emission lines) and the hmxb where they are found, Bexb. The star spins fast enough to get close to its break-up limit and a decretion disk, responsible for the emission lines and the infrared excess, forms in the equatorial plane (see the sketch in Figure 2.3). All Bexb host a ns as an accretor, on an eccentric orbit (e ≳ 0.3), usually inclined with respect to the stellar equatorial plane. Bexb are generally transient sources (with a quiescent level at ∼10³⁴ erg s⁻¹). They display periodic X-ray type I bursts (up to ∼10³⁷ erg s⁻¹) at the orbital period of the orbiting neutron star (bottom panel in Figure 2.3), which can be explained by the passage of the ns through the decretion disk where the mass density is higher. The X-ray luminosity levels reached are consistent with a wind accretion process (throughout this manuscript, by wind accretion we mean not only the accretion of a stellar wind but also any mode of accretion where the kinetic energy of the flow with respect to the accretor, at large distance from it, is important compared to the magnitude of the gravitational potential energy of the compact object at the Bondi radius - i.e. the flow is supersonic at infinity - and compared to the angular rotation speed of the system if it is in a binary ; it covers the framework established by Bondi, Hoyle and Lyttleton for instance - see Chapter 4).
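The wind accretion regime defined in the note above can be quantified with the accretion radius R_acc = 2 G M / v_rel², inside which the gravitational potential energy of the compact object dominates the kinetic energy of the supersonic flow (a sketch with illustrative values):

```python
# Accretion (Hoyle-Lyttleton) radius R_acc = 2 G M / v_rel^2, cgs units.
G, M_sun = 6.674e-8, 1.989e33

def accretion_radius(M_msun, v_rel_kms):
    v = v_rel_kms * 1e5          # km/s -> cm/s
    return 2 * G * M_msun * M_sun / v**2

# a 1.5 Msun neutron star in a 1000 km/s supergiant wind (illustrative)
print(f"R_acc ~ {accretion_radius(1.5, 1000):.1e} cm")
```

This gives a few 10¹⁰ cm: small compared to the orbital separation of such a system, but enormous compared to the ns itself.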
However, some of those sources also display non periodic larger bursts (type II), up to X-ray luminosities of ∼10³⁸ erg s⁻¹, which are believed to feature transient massive accretion of the extended stellar atmosphere in a way reminiscent of the Roche lobe overflow in lmxb (indeed, with an eccentric orbit, a star can fill its Roche lobe or not depending on the phase of the orbit). These events are typically accompanied by large spin-up rates which suggest the formation of a transient accretion disk around the ns (see the correlation in Figure 2.4).
The second family, generally more persistent, contains the archetypes of the wind accreting X-ray sources. The companion star is an evolved OB Supergiant (hence the name Sgxb) and the accretor, on a low eccentricity orbit (e ≲ 0.2), is either a ns (most of the time) or a bh. The orbital periods are shorter than in Bexb (≲ 10 days). In Sgxb hosting a ns, a handy tool to differentiate Sgxb and Bexb is the Corbet diagram (Corbet, 1984), which represents the spin period of the neutron star as a function of the orbital period for observed values. Figure 2.4 is an updated version of this diagram for Galactic hmxb, mostly based on the measurements gathered in Walter et al. (2015). The main results highlighted by this Figure are the following :
Bexb present a correlation between the orbital period and the spin of the ns, which suggests that the angular momentum transfer is efficient and the mass transfer, conservative. It was suggested by Waters and van Kerkwijk (1989) that it is a manifestation of the propeller effect introduced by Illarionov and Sunyaev (1975) : a negative feedback locks the ns spin period such that the corotation velocity at the magnetospheric radius matches the Keplerian velocity. The orbital periods span from a couple of tens of days up to a few hundred days.
the few hmxb undergoing Roche lobe overflow (only one in the Milky Way, Cen X-3) with short orbital periods (see section 2.1.5) host fast rotating pulsars with very short spin periods. An accretion disk is believed to be a favoured transitional structure to guarantee an optimal transfer of orbital angular momentum to the spin of the ns, significantly accelerating it.
the Sgxb systems show no trace of correlation between orbital and ns spin periods, suggesting an inefficient mass transfer mechanism, consistent with wind accretion (highly non conservative). They also have shorter orbital periods, consistent with the secular evolutionary tracks. The ns spin periods are rather long (above 100 s).
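The spin-locking argument of Illarionov and Sunyaev mentioned above can be sketched numerically: the equilibrium spin period follows from equating the magnetospheric radius r_m = (μ⁴ / (2 G M Ṁ²))^(1/7) with the corotation radius (all values below are illustrative and order-unity prefactors are ignored):

```python
import math

# Spin equilibrium: magnetospheric radius r_m equals the corotation radius
# r_co = (G M P^2 / 4 pi^2)^(1/3); cgs units, illustrative parameters.
G, M_sun = 6.674e-8, 1.989e33
M = 1.4 * M_sun
B, R_ns = 1e12, 1e6              # surface field [G] and ns radius [cm]
Mdot = 1e16                      # accretion rate [g/s]

mu = B * R_ns ** 3               # magnetic dipole moment [G cm^3]
r_m = (mu ** 4 / (2 * G * M * Mdot ** 2)) ** (1 / 7)
P_eq = 2 * math.pi * math.sqrt(r_m ** 3 / (G * M))   # Kepler at r_m
print(f"r_m ~ {r_m:.1e} cm, P_eq ~ {P_eq:.1f} s")
```

Equilibrium periods of a few seconds come out for these parameters; lowering Ṁ (wider orbits, weaker winds) lengthens P_eq, qualitatively in the sense of the Bexb correlation.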
The position in this diagram is, by itself, an argument to classify a source as a Sgxb or a Bexb. When differentiating between the two based on the eccentricity of the ns orbit, one has to keep in mind that the circularization of the orbit is an on/off mechanism with a rather sharp threshold (see section 7.3 and Figure 7.15 in particular). Indeed, the capacity of an orbit to circularize within the stellar lifetime (see equation (7.22)) is very dependent on the ratio of the stellar radius to the orbital separation, and the stars in Bexb are both smaller and associated with longer periods, which makes circularization over the nuclear timescale essentially ineffective, contrary to Sgxb, which turn out to be below the threshold for circularization. However, notice that non-eccentric Bexb would lead to truncated decretion disks with lower outer radii and thus much lower peak X-ray luminosities, as the ns intersects a lower density environment (Negueruela and Okazaki, 2000) ; even if they existed, their detection would be observationally challenging.
2.1.4 Classical Sgxb, obscured Sgxb & sfxt
Among the Sgxb are found several classes depending on the time variability of the X-ray emission and the absorption properties. Given the small number of Sgxb, this is the finest classification available up to now, with 3 main families : in Figure 2.2, among the 36 confirmed Sgxb in 2015, 11 to 18 are persistent Sgxb (the classical ones), 6 to 9 are highly obscured Sgxb and the rest are mostly SuperFast X-ray Transients (sfxt, Walter et al., 2015).
The latter family presents fast outbursts with rise times of the order of a few tens of minutes and with typical durations of a few hours (Negueruela et al., 2006). Their quiescent level is much lower than for the other two families, at an X-ray luminosity of 10³² to 10³³ erg s⁻¹ (Sidoli et al., 2008), of the order of the observational limit of sensitivity. However, the dynamic range of the flares is also much higher (between 10² and 10⁵), which makes them transiently detectable at X-ray luminosity levels up to ∼10³⁷ erg s⁻¹ : since they are in a non detectable quiescent state most of the time, it took a field of view as large as the one provided by the integral satellite to identify those low duty cycle systems, the flares being separated by weeks. The outburst spectra are consistent with accreting ns, even if the possibility that some systems host a bh can not be ruled out when no pulsation has been detected. Their positions in the Corbet diagram (see Figure 2.4) follow neither the classical Sgxb nor the Bexb ones, which brings up the question of their evolutionary links with those two families. It has been proposed that those flares could be caused by serendipitous accretion of clumps in the wind of the Supergiant stellar companion (Walter and Zurita-Heras, 2007, Ducci et al., 2009), although the overdensities alone are not expected to match the high dynamic range factors.
A physical distinction between sfxt and classical Sgxb has been proposed by Negueruela et al. (2008), relying on the size of the accretion radius of the compact object (see Chapter 4) relative to the characteristic size of the clumps in the wind of the Supergiant star. Because the latter varies with the distance to the star, the orbital separation would make the ns lie either in a homogeneous or in a heterogeneous environment (as far as its propensity to accrete matter is concerned).
The obscured Sgxb are sources with Hydrogen column densities tens of times larger than the values associated with other Sgxb, up to several 10²³ cm⁻² (Chaty, 2011). Filliatre and Chaty (2004) showed that the absorption of X-rays was due to material in the vicinity of the compact object, while the absorption in infrared and optical was linked to a cocoon of dust enshrouding the whole system (Chaty and Rahoui, 2012).
2.1.5 Outliers and bh in Sgxb
Given the large number of parameters able to influence the behaviour of a wind accreting binary, exceptions are the rule. Four main kinds of exceptions deserve to be mentioned (Walter et al., 2015) :
The systems where the stellar parameters do not fit the frame previously sketched for Sgxb. GX301-2 is one of them, hosting a very evolved hypergiant star, possibly on its way to a Wolf-Rayet stage. Due to the enormous mass loss rate of the star, the system can afford a much larger orbital separation than the other Sgxb (see Figure 8.1) and still be detectable. At the other end of the range are found more modest main sequence and giant donor stars.
The systems where the stellar wind presents atypical properties. For instance, OAO 1657-415 hosts a star with an anomalously slow wind at 250 km s⁻¹ (van Loon et al., 2001).
The systems where the accretor is not a ns but a bh candidate, which is the likely scenario for a few Sgxb : Cyg X-1 (Orosz et al., 2011), Cyg X-3 and SS433 (in the latter two systems, the nature of the stellar companion is not yet clearly identified ; those systems might be intermediate mass X-ray binaries). While the gravitational influences of a ns and of a bh are extremely similar beyond a few Schwarzschild radii of the object (due to their similar compactness parameters - see section 1.1.1), the objects themselves present structural differences (especially in terms of self-emission and magnetic properties) which manifest themselves in the accretion (or ejection) mechanism.
The systems where the mass transfer is stream dominated (i.e. proceeds through a Roche lobe overflow like configuration) rather than wind dominated as in most Sgxb : Cen X-3, SMC X-1 and LMC X-4 are the main representatives of this category (Boroson et al., 2000, van der Meer et al., 2007). Those systems often display spectra indicative of the presence of a disk. The main reason for the scarcity of those systems is probably the expected instability of this mass transfer when a ns accretes matter from a Supergiant companion (see section 7.2.2).
2.2 Other X-ray sources
Other kinds of extrasolar X-ray sources exist which are not found in X-ray binaries but are believed to be linked to accretion onto a compact object. The brief overview which follows intends to remind the reader of the ubiquity of this wider astrophysical situation.
2.2.1 Runaway stellar-mass compact objects
Increasing numbers of hypervelocity stars have been observed, i.e. stars with a velocity in the Galactic rest frame larger than 400 km s⁻¹, doomed to escape the Galactic gravitational potential (Tauris, 2015). Those objects support the existence of runaway compact objects whose formation would have disrupted the binary system and provided the hypervelocity star with a large amount of linear momentum (Tauris and Takens, 1998) ; note that hypervelocity stars could also gain their large velocities from the tidal disruption of tight binary stars by the Galactic central supermassive compact source, SgrA* : as one of the two stars is captured, the other is ejected at high speed via the gravitational slingshot mechanism (Hills, 1988). Bexb with ns orbiting their high mass stellar companion on a modest eccentricity orbit have been observed (Pfahl et al., 2002), pointing towards a specific core collapse process hardly providing any kick to the ns. Nevertheless, we expect most ns to receive a kick velocity at birth of the order of a few hundred kilometers per second (Lyne et al., 1994). Monitoring of the proper motion of statistical numbers of pulsars confirmed those figures and a few of them can be linked to a nearby supernova remnant, such as IGR J11014-6103 (Figure 2.5). The pulsar then rushes through the interstellar medium at supersonic speeds. In its rest frame, it undergoes a wind accretion mechanism (see Chapter 8) similar to the one of a ns travelling in the high speed wind of its stellar companion, though in a more planar fashion. In the second part of this manuscript, we introduce a planar numerical setup of wind accretion which matches the structure of the incoming flow at large distances from a runaway compact object. However, in the case of a young accreting ns, the pulsar wind, not taken into account in our setup, plays an important role in the shaping of the inflow.
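A hedged order-of-magnitude estimate explains why such isolated runaway accretors are so quiet: the Bondi-Hoyle-Lyttleton rate Ṁ = π R_acc² ρ v = 4π (G M)² ρ / v³ in the interstellar medium is tiny (the values below are illustrative, not from the manuscript):

```python
import math

# Bondi-Hoyle-Lyttleton rate for a neutron star crossing the interstellar
# medium, cgs units; n_ism and v are illustrative.
G, M_sun, m_p = 6.674e-8, 1.989e33, 1.673e-24
M = 1.4 * M_sun
n_ism = 1.0            # ISM number density [cm^-3]
v = 300e5              # space velocity: 300 km/s [cm/s]

Mdot = 4 * math.pi * (G * M) ** 2 * n_ism * m_p / v ** 3
L_acc = G * M * Mdot / (2 * 1e6)   # if all of it reached the 10 km surface
print(f"Mdot ~ {Mdot:.1e} g/s, L_acc ~ {L_acc:.1e} erg/s")
```

Even in the optimistic case where all the captured matter reaches the surface, the luminosity stays many orders of magnitude below that of an X-ray binary.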
To our knowledge, no runaway bh candidate has been detected up to now, possibly due to the lower formation rate of bh compared to ns and to their larger inertia which makes them more difficult to accelerate.
The planarity of this accretion process makes the situation essentially axisymmetric and brings up the question of the longitudinal stability of the accretion wake. The runaway pulsar PSR 2224+65 (Cordes et al., 1993) is associated with a prominent nebula (nicknamed the Guitar Nebula) whose evolution over the years has been monitored. The shock in the wake of the accretor has a cone shape with a pinch or neck which moves along the wake. Measuring its dimensions and its time evolution is a precious asset to understand the instability at stake in this accreting system : are the variations in the wake mere tracers of inhomogeneities in the interstellar medium or are they related to properties of the accretion mechanism?
2.2.2 Tidal disruption events
Tidal disruption events (or tde) correspond to the disruption of a star passing close enough to a supermassive bh to experience dramatic and irreversible tidal effects (Hills, 1975, Frank, 1978). Because the majority of supermassive bh are believed to lie dormant and starved of fuel, those transient events could shed some light on those quiet objects (Rees, 1988), in particular in galactic centers where the feeding mechanism remains unclear (see section 2.2.3). Massive stars, which develop a bifurcated structure between a dense core and a tenuous envelope, are more vulnerable to tde ; they preferentially provide fuel for the supermassive bh to accrete and could significantly contribute to the flaring activity of the latter. Depending on the amount of angular momentum of the inflow, the flare will decay over years (Bonnerot et al., 2015).
2.2.3 Sagittarius A*
SgrA*, the compact object lying at the center of the Milky Way (Melia and Falcke, 2001, Genzel et al., 2010b), is known to be, nowadays (Clavel et al., 2013), much less luminous than its Eddington limit, possibly pointing to a spherical accretion regime such as the one portrayed by Bondi (section 4.2). A hydrodynamical model of accretion from the wind emitted by point sources distributed around SgrA* has been developed by Coker and Melia (1997).
2.2.4 Active galactic nuclei
Because Fluid Mechanics is notoriously non trivially scale invariant (see the non-trivial similitude conversion which leads from an actual ship to its scale model, for example), we focused the discussion on stellar-mass accretors. Nevertheless, in terms of spectral range of emission, we saw that the determining parameter is the compactness of the body, and it turns out that most if not all galactic centers are believed to host smbh. If some of them wallow in a (well-deserved?) deep rest, like SgrA*, others, called Active Galactic Nuclei (agn), are far more luminous and display features characteristic of ongoing accretion. The present work can in no case pretend to any physical relevance in the case of agn, but links exist between the two families which might prove profitable (as testified by the jets observed in quasars, a specific kind of accreting smbh, but also in microquasars such as GRS1915+105 - Mirabel and Rodríguez, 1994). See the book by Beckmann and Shrader (2012) for more details on agn.
Chapter 3 Numerical tools
Solving Fluid Mechanics problems requires high resolutions to grasp all the relevant spatial and temporal scales (the coupling between scales is an intrinsic flaw - and strength! - of Continuum Mechanics) and smart numerical recipes to solve approximately but accurately the partial differential equations which arise. The intervention of a mathematically challenging framework within the complexity of a hydrodynamics environment makes numerical experiments game changers in contemporary Physics. The advent of High Performance Computing (a.k.a. hpc) architectures has enabled us to envision a yet-to-be-defined breakthrough, numerical epistemology, which could soon recast the very definition of a model at the basis of the scientific method (Ruphy, 2013, Varenne and Silberstein, 2013, Varenne et al., 2014, Trescases and Tournus, 2016). The numerical modelling step, far from being an additional layer on top of the physical framework, offers an occasion to put in perspective the very notion of a model as a reduced representation of a set of empirically measurable data. Furthermore, it has proven to be a wonderful tool to illustrate, emphasize and reinforce a physical argument.
After a short reminder on the physical laws underlying our work, we introduce the code we used, mpi-amrvac, the numerical scheme we followed and a few notions of High Performance Computing. For a reference describing the mathematical tools used in this chapter (tensor calculus, linear algebra, etc.), see Appel (2007).
3.1 Conservation laws
Fluid Mechanics is a non formal mesoscopic average of the kinetic theory and its inherited Boltzmann equation (Diu et al., 1997). It introduces a set of variables which draw their epistemological legitimacy from their physical profitability in interpreting a wide range of phenomena. With some features inspired by Classical Mechanics, we use the mass density ρ, the velocity v and the total pressure p of a particle of fluid, a subvolume of the system considered whose characteristic size ℓ verifies :
ℓ is large enough to guarantee that any particle of fluid contains a large enough number of actual particles (i.e. in the meaning of the entities covered by kinetic theory) to wash out the microscopic statistical noise.
ℓ is small enough to resolve the macroscopic objects of interest, typically the waves of wavelength λ which form and develop in the fluid.
To summarize, Fluid Mechanics can prove to be a handy frame of thought provided we can define particles of fluid which verify :
n^(−1/3) ≪ ℓ ≪ λ
with n the density of actual particles.
We can consider the fluid variables ρ, v and p (called the primitive variables) either as :
a collection of values associated with all particles of fluid at a given moment t. For example, ρ(t) would be the mass density at t of the particle of fluid located by the position vector r(t), function of t. It is the Lagrangian point of view, where we "follow" the particles of fluid.
scalar and vector fields defined on an implicitly discrete mesh (either due to the physical concept of fluid particle or to the numerical one of cell - see section 3.3.1). For instance, ρ(r, t) would be the mass density at t of the particle of fluid which happens to find itself within an infinitely small volume around the point located by r at t. It is the Eulerian point of view.
Both approaches are physically equivalent but as far as the numerical implementation is concerned, we will rely on an Eulerian approach (see section 3.3).
So as to facilitate compact and coordinate free formulations, we introduce the 5-component vector U of conservative variables defined by :
U = (ρ, ρv, e)^T (3.2)
where e is the total energy per unit volume given by :
with the first term on the r.h.s. being the kinetic contribution and ε the internal energy per unit volume. Finally, we will refer to the "specific" counterparts of extensive variables as the intensive quantities obtained by dividing them by the mass of the system.
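In the standard notation (a reconstruction consistent with the definitions above), the conservative variables and the total energy per unit volume read :

```latex
\mathbf{U} =
\begin{pmatrix}
\rho \\ \rho \mathbf{v} \\ e
\end{pmatrix},
\qquad
e = \frac{1}{2}\,\rho v^{2} + \epsilon ,
```

with ρ the mass density, ρv the momentum density, e the total energy per unit volume and ε the internal energy per unit volume.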
3.1.2 The Euler equations
Fluid Mechanics is based on sets of conservation laws supported by thermodynamical considerations. The simplest form of conservation laws, the Euler equations, is obtained by neglecting external forces, viscosity (see Landau and Lifshitz (1987) for an elegant description of the physical requirements which naturally lead to the mathematical expression of the viscosity term) and heating/cooling :
with F the vector of fluxes :
where the divergence operator applied to a tensor does not relate to the one applied to a vector in a trivial way, except in Cartesian coordinates (see the formularies). Beware, in (3.5), contrary to the expression of (3.2), the three-dimensional velocity vector unfolds along its own dimension so as to form a tensor rather than a vector. The dyadic product is defined by :
with v_i and v_j the coordinates of the vectors, in the Cartesian case only. The expression (3.4) is a differential formulation of the conservation laws, legitimate only in smooth configurations ; this assumption can be relaxed for the integral formulations which serve to derive the jump conditions in appendix A.1.
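For reference, the Euler system (3.4)-(3.5) takes the following standard form in the notation introduced above (a hedged reconstruction) :

```latex
\partial_t \mathbf{U} + \nabla \cdot \mathbf{F}(\mathbf{U}) = 0,
\qquad
\mathbf{F}(\mathbf{U}) =
\begin{pmatrix}
\rho \mathbf{v} \\
\rho\, \mathbf{v} \otimes \mathbf{v} + p\, \mathbb{I} \\
(e + p)\, \mathbf{v}
\end{pmatrix},
\qquad
(\mathbf{v} \otimes \mathbf{v})_{ij} = v_i v_j ,
```

where 𝕀 is the identity tensor and the middle row of F is the rank-2 tensor mentioned in the text.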
3.1.3 Closure conditions : the equation-of-state
In (3.5), F can not properly be considered as a function of U (the system is said to be autonomous if F is a function of U only ; F can depend on time and space, but only through U) unless we are given a relation between the pressure and the other variables. This additional, general requirement needed to solve the equations of Hydrodynamics is known as the closure condition. Considerations of Statistical Physics which we will not recall here guarantee the existence of a canonical equation-of-state, or e.o.s., which contains all the thermodynamical information (Diu et al., 2007). It can be shown that this e.o.s. can be split into a caloric one and an entropic one, relating respectively the pressure and the entropy to the mass density and to the internal energy per unit mass.
All along this manuscript, we will focus on the case of ideal gases. This thermodynamical model corresponds to a gas diluted enough to neglect the covolume associated to each particle (see the Van der Waals e.o.s. for a quantification and a physical interpretation of this notion) and the energy of interaction between particles with respect to their kinetic energy. However, the particles are considered thermalized with each other thanks to collisions between infinitely small particles, such that a temperature can always be defined. In this case, the caloric e.o.s. is given by (the entropic one comes into play in the conservation laws only if the temperature is explicitly needed, e.g. in a cooling function ; for an ideal gas, it involves the ideal gas constant and the mean molar mass of the gas) :
with γ the adiabatic index (see Diu et al., 1997, 2007, for a definition), a constant determined by the number of degrees of freedom per particle and whose value lies between 1 (infinite number of degrees of freedom) and 5/3 (only 3 translational degrees of freedom) for an ideal gas. The speed of sound in the flow is then given by :
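The caloric e.o.s. and the sound speed above can be sketched numerically as follows (a minimal illustration ; the function names are ours) :

```python
import math

def pressure_ideal(gamma, eps):
    """Caloric e.o.s. of an ideal gas: p = (gamma - 1) * eps,
    with eps the internal energy per unit volume."""
    return (gamma - 1.0) * eps

def sound_speed(gamma, p, rho):
    """Adiabatic speed of sound c_s = sqrt(gamma * p / rho)."""
    return math.sqrt(gamma * p / rho)

# A monatomic ideal gas (3 translational degrees of freedom): gamma = 5/3.
gamma = 5.0 / 3.0
p = pressure_ideal(gamma, eps=1.5)   # eps = p / (gamma - 1), so p = 1.0 here
cs = sound_speed(gamma, p, rho=1.0)  # sqrt(5/3), about 1.29
```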
A plethora of fruitful physical submodels can be derived from the Euler equations, which have nourished insightful studies in Geophysics, Climatology (Pedlosky, 1992), city planning, etc.
3.1.4 Hyperbolic sets of partial differential equations
We now provide some elements of solution for the one-dimensional Euler equations so as to highlight the fundamental mathematical properties of this system, at the basis of its computational resolution. Since the conservation laws are quasi-linear, they can always be rewritten, after using the chain rule, as :
where the dot symbol is to be understood as a matrix product and A is the Jacobian matrix :
In the case of an ideal gas, we have, after having cautiously expressed the coefficients of A exclusively as combinations of full coefficients of U (no primitive variable left alone) :
which can be diagonalized to yield the following eigenvalues and eigenvectors :
Since the eigenvalues are all real and distinct, the Euler equations form a set of strictly hyperbolic partial differential equations. This property remains true in three dimensions and for more general e.o.s. If we write P the matrix formed with the eigenvectors as columns and Λ the diagonalized form of A, we can rewrite the Euler equations as :
If P is smooth enough with respect to its evolution with space and time, we can write :
and we are left (within a local region of space and time) with a set of decoupled one-dimensional linear advection equations for the characteristic variables. The use of the so-called characteristics then provides an extraordinary way to transform those partial differential equations into ordinary differential equations (if the solving of partial differential equations in general remains a tremendous challenge of contemporary Mathematics, the realm of ordinary differential equations has been extensively explored over the past couple of centuries) (Shu, 1992). The eigenvectors (associated to the contact discontinuity, rarefaction and shock waves) form a set of linearly independent vectors from which all solutions, albeit local ones if A depends on U, can be derived (integral curves of the characteristic family can be used as non-linear extensions of the straight characteristic lines associated to linear hyperbolic systems, i.e. systems where A does not depend on U ; this approach brings up the notion of Riemann invariants as quantities conserved along the waves). The selection of a specific solution requires the prior knowledge of the initial condition and will be investigated in section 3.2.2, where the Riemann problem is addressed.
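The eigenstructure quoted above can be checked numerically. Writing the quasi-linear form in primitive variables (ρ, u, p) for simplicity, the matrix below has the eigenvalues u - c, u and u + c (a sketch ; the thesis works with the conservative-variable Jacobian, which shares the same eigenvalues) :

```python
import numpy as np

# Primitive-variable matrix of the 1D Euler equations for an ideal gas:
# d/dt (rho, u, p) + A . d/dx (rho, u, p) = 0
rho, u, p, gamma = 1.0, 0.5, 1.0, 5.0 / 3.0
A = np.array([
    [u,   rho,       0.0      ],
    [0.0, u,         1.0 / rho],
    [0.0, gamma * p, u        ],
])

c = np.sqrt(gamma * p / rho)                     # speed of sound
eigenvalues = np.sort(np.linalg.eigvals(A).real)
# Three real, distinct eigenvalues (u - c, u, u + c):
# the system is strictly hyperbolic.
```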
3.2 Numerical scheme
The code I have used, MPI-AMRVAC (for Message Passing Interface - Adaptive Mesh Refinement Versatile Advection Code), sometimes referred to simply as VAC in this manuscript (not to be confused with the initial version named alike ; complementary details about the structure of the code can be found in Appendix A.5), is the latest version of a code whose origins trace back to the mid-90s, when Gábor Tóth and Rony Keppens first tackled the question (Tóth and Odstrcil, 1996, Tóth et al., 1998). It is an explicitly flux-conserving finite volume transport code which can now address hydrodynamical or magneto-hydrodynamical problems, in a classical, a special or a fully relativistic framework, with or without polytropic prescriptions, source terms, etc. (Porth et al., 2014).
The recourse to a coordinate- and dimensionality-independent syntax (thanks to the Loop Annotation SYntax, a.k.a. LASY (Tóth, 1997) ; a Perl preprocessor, VACPPL, takes care of converting the .t files from LASY to proper Fortran .f files, using the user-defined settings on dimensionality and coordinates and, more generally, on the mesh structure) enables users and developers to freely design algorithms and configurations which apply to a broad range of situations. It is an Eulerian-based code where physical quantities are defined at the center of each cell as averaged values over the whole cell (see section 3.2.2). The source and user files are designed so as to make a change of coordinates straightforward. Natively, it covers the usual orthogonal curvilinear coordinates (Mignone, 2014) : Cartesian, cylindrical and spherical. They are uniform in the sense that cell centers are equally spaced along each dimension. Thanks to the implementation of Adaptive Mesh Refinement (amr) made by van der Holst and Keppens (2007), different levels of spatial resolution can be explored and treated with different numerical schemes (Adaptive Time Stepping, ats, has not yet been implemented in VAC but can be found, for instance, in the RAMSES code ; see Commerçon et al. (2014) for a case where ats proves very profitable once coupled to amr).
Customized data analysis can be carried out internally. Alternatively, many different output formats are available beyond the basic binary data files used by the code : vtk for visualisation with VisIt or Paraview, native Tecplot, dx, idl. An xml-based module naturally handles the mesh-related information (including the information relative to amr) to produce versatile and storage-saving output files.
3.2.2 Algorithmic recipes
The following sections are intended to give the reader a feeling for the main algorithmic issues one can encounter in numerical simulations of fluid dynamics on a grid. More details can be found in the related references (LeVeque, 1992, 2002, Toro, 2009, Dullemond, 2009).
Starting from conservation laws such as (3.4), the spatial and temporal derivatives must be evaluated to advance the computation of an approximate solution at any time from an initial condition. Several approaches can be undertaken to handle the discretisation inherent to the computational tool. Among others can be found :
the finite difference method : the most straightforward approach is to associate to a collection of nodes a collection of values. Each variable is a set of point values attached to a set of unstructured positions (i.e. the mesh is not necessarily associated to a regular grid). In this way, the spatial derivative of a quantity in one point is simply given by :
where we used the super and subscripts according to Figure 3.1. There is neither cell nor interface in this approach, so this equality suffers no ambiguity : it is the only way to have a fair accounting of the contribution of both sides. The system can then be solved by evaluating the fluxes above at the current or at the next time step, which gives respectively the explicit or the implicit method. The latter requires more computation (matrix inversion, Newton-Raphson root finding, etc.) and couples all the points together, which makes little sense for a hyperbolic system where the information propagates at finite speeds. However, it sometimes has precious assets like stability and convergence, though it is not necessarily accurate. The finite difference approach is easier to implement and fast to compute, but it does not extend to complex geometries nor to amr.
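The two-sided point-value derivative of the finite difference approach can be sketched as follows (the nodes need not be equally spaced ; the names are illustrative) :

```python
def derivative_at_node(x, f, i):
    """Two-sided finite difference at node i of a possibly
    non-uniform set of nodes x with point values f:
    df/dx ~ (f[i+1] - f[i-1]) / (x[i+1] - x[i-1])."""
    return (f[i + 1] - f[i - 1]) / (x[i + 1] - x[i - 1])

# On a non-uniform mesh, the formula remains exact for linear functions:
x = [0.0, 0.3, 1.0]
f = [2.0 * xi + 1.0 for xi in x]     # f(x) = 2x + 1
slope = derivative_at_node(x, f, 1)  # -> 2.0
```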
the finite volume method : in this approach, we solve the integrated form of the conservation laws separately in each cell of the grid. For instance, for the conservation of mass :
where we used the Green-Ostrogradsky theorem. Note that the volume integral refers to all the points within the cell, not to a hypothetical cell center, and that the surface term is the total flux of mass through the cell edges. The discretization can then be made, on the one hand, by using an average value associated to the cell :
and, on the other hand, with a set of approximated fluxes at the left and right interfaces of the cell :
Notice that this method is conservative by construction (at least up to the round-off error level of double precision arithmetic), since :
where the only remaining fluxes are those at the edges of the grid considered in this implicitly mesh-based approach. All the secret of the numerical recipe is contained in the way the fluxes at the interfaces are determined, a method referred to as reconstruction.
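The conservative character of the finite volume update (interface fluxes cancelling pairwise) can be illustrated with a one-dimensional advection sketch on a periodic grid (an assumed setup, not VAC's actual implementation) :

```python
import numpy as np

def fv_update(q, a, dx, dt):
    """One finite volume step for dq/dt + d(a q)/dx = 0 with a > 0,
    first-order upwind interface fluxes and periodic boundaries.
    q[i] is the cell average; the flux at the left interface of
    cell i is a * q[i-1] (the upwind side)."""
    flux_left = a * np.roll(q, 1)   # F at interface i-1/2
    flux_right = a * q              # F at interface i+1/2
    return q - dt / dx * (flux_right - flux_left)

q = np.exp(-((np.linspace(0.0, 1.0, 64) - 0.5) ** 2) / 0.01)
mass_before = q.sum()
for _ in range(50):
    q = fv_update(q, a=1.0, dx=1.0 / 64, dt=0.5 / 64)
mass_after = q.sum()
# The interface fluxes telescope, so the total mass is conserved
# to machine precision.
```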
The Riemann problem
The simplest non-trivial initial value problem we can make up for uni-dimensional conservation laws is :
with the two initial states, on each side of the origin, constant and different. It is called the Riemann problem, a mathematical generalization of the physical Sod shock tube problem. Given the finite volume point of view we adopt, we can already understand the paramount importance of this reduced problem, which mimics the situation at each interface : on each side of the interface, the function is constant. Indeed, the solution of the general initial value problem on a grid may be seen as resulting from the non-linear superposition of solutions of local Riemann problems. Godunov's methods are based on this principle. At each interface, a Riemann problem is solved with a timestep small enough that the propagating characteristic waves do not overlap.
Following the finite difference approach, one could want to use a central differencing scheme to solve, say, for the sake of clarity, the one-dimensional linear advection equation of a step function with a constant advection velocity a > 0 :
Its differentiation in a naive cell-centered approach suggests replacing the fluxes with :
In an explicit approach, it means that the updated cell value is given by, for an equally-spaced grid :
The numerical result of such a gross approximation is illustrated in the upper panel of Figure 3.2, where the wriggles have little to do with the mathematical solution (the dotted line) and much to do with the numerical scheme : this scheme yields clearly unstable solutions for this problem. The main issue lies in the fact that, to update the value of a cell, we make use of information from upstream (i.e. the region the flow comes from), but also from downstream, which does not make any physical sense : the information can not go back up the flow (note that in this simple situation, there is no wave speed other than the advection velocity, not even the sound speed). Anything which happens downstream does not belong to the domain of determinacy of the cell and can not influence its value at this instant. In hyperbolic systems, information propagates at finite speeds along characteristics. We can use this causality property to design one-sided derivatives in the direction from which information should be coming (with possibly different directions for different pieces of information if there are several characteristic velocities, typically for a system of several equations, i.e. not for the canonical one-dimensional linear advection equation) : it is the bottom line of the first order upwind scheme we present here.
Since a > 0, we use the value on the left to define the fluxes at the interfaces in (3.19) :
which gives the updated value as a convex combination of the cell value and of its upwind neighbour :
where the weights can also be obtained by following the characteristics and interpolating the value between the grid values (for instance, graphically, on the middle panel of Figure 3.3, where we see that after one timestep the cell has been overrun up to 67% of its own extent by the flow which was in the neighbouring cell). This simple workaround leads to the results displayed in the bottom panel of Figure 3.2. The scheme proves way more stable and is monotonicity preserving (see the comprehensive course by Dinshaw Balsara on techniques for the numerical solving of partial differential equations), but the numerical diffusion of this first order scheme has smeared out the step function.
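The contrast between the naive centered scheme and the upwind one can be reproduced in a few lines (a sketch of the experiment behind Figure 3.2, with grid parameters of our own choosing) :

```python
import numpy as np

def advect(q0, courant, nsteps, scheme):
    """Advect q0 with a > 0 on a periodic grid; courant = a*dt/dx."""
    q = q0.copy()
    for _ in range(nsteps):
        if scheme == "centered":
            # naive central differencing: uses downstream information
            q = q - 0.5 * courant * (np.roll(q, -1) - np.roll(q, 1))
        else:
            # first-order upwind: one-sided, causality-respecting
            q = q - courant * (q - np.roll(q, 1))
    return q

x = np.linspace(0.0, 1.0, 100, endpoint=False)
step = np.where(x < 0.5, 1.0, 0.0)

centered = advect(step, courant=0.5, nsteps=100, scheme="centered")
upwind = advect(step, courant=0.5, nsteps=100, scheme="upwind")
# centered: growing wriggles (unstable); upwind: bounded in [0, 1],
# monotonicity preserving but diffusive (the step is smeared out).
```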
In (3.26) appears a necessary condition which turns out to apply to any explicit differencing method : the Courant-Friedrichs-Lewy condition (a.k.a. cfl condition). Since the value of a cell at the next timestep depends only on the values in the neighbouring cells at the current one, we must make sure that the characteristics do not travel beyond one cell within one timestep. More generally, if we write the maximum absolute propagation speed of information, we need (implicit methods can, to appearances, beat this limit but, from experience, they start to lack accuracy when the timestep becomes much larger than this threshold ; besides, their strong stability makes it difficult to point out suspicious behaviour by eye) :
Empirically, a Courant number smaller than 0.5 is usually enough to guarantee the stability and the accuracy of the numerical scheme.
where the cell size and the timestep appear at the first power, hence the first order. This expression highlights the unavoidable artificial diffusivity introduced by the spatial discretization in the second term of the r.h.s. It can be lowered by raising the resolution or by using higher-order schemes, for instance with slope limiters or larger stencils (a mini-grid of points which contribute to the quantities we wish to evaluate) to compute the derivatives. For time, it means using a predictor to compute the fluxes at mid-timestep so as to reach a second order precision in time. A final remark : higher-order schemes are more accurate only if there are no strong discontinuities in the solution, according to the two last terms on the r.h.s. of (3.28). When shocks take place, specific shock-capturing schemes must be used.
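For reference, the modified equation of the first-order upwind scheme applied to the linear advection equation takes the standard textbook form (a reconstruction, with ν the Courant number) :

```latex
\partial_t q + a\,\partial_x q
= \frac{a\,\Delta x}{2}\,\left(1 - \nu\right)\,\partial_x^2 q
+ \mathcal{O}\!\left(\Delta x^2, \Delta t^2\right),
\qquad
\nu = \frac{a\,\Delta t}{\Delta x},
```

where the first term on the r.h.s. is the artificial diffusivity discussed above ; it vanishes for ν = 1, when the scheme follows the characteristics exactly.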
Our algorithmic setup
In most of the simulations presented in this manuscript, we used a second order Lax-Friedrichs method with a Hancock predictor. Thanks to the use of slope limiters (mostly the shock-capturing Koren slope limiter, less diffusive than minmod), the scheme is made Total Variation Diminishing, which means that no new local extrema are created (it is monotonicity preserving) and that the values of local minima (respectively maxima) increase (respectively decrease). The Courant number is typically set to 50%. Although quite diffusive, the tvd-lf method is robust and not prone to give birth to spurious oscillations.
3.3 Splendors & miseries of hpc
3.3.1 Architecture of the grid and boundary cells
To make the most of the octree at the basis of the amr structure, the simulation space (a.k.a. the grid) is initially subdivided into blocks of equal sizes. We write the number of cells of the grid in a given direction (in the code, the dimensionality of the grid is given by the variable ndim) and the number of cells of a block in the same direction ; a first obvious constraint is that the latter must divide the former. One such two-dimensional Cartesian grid has been represented on Figure 3.4. The cells on the border of each block (the border being defined with any "thickness", e.g. 1, 2 or 4 cells, provided it remains small compared to the block size ; see section 3.3.4) must be communicated to the cpu (Central Processing Unit) responsible for the computation in the neighbouring blocks (possibly itself) to compute spatial derivatives. Once the run starts, the code can refine some areas thanks to amr (or coarsen previously refined ones), based on user-defined criteria or standard error estimates, and divide up the existing blocks, down to a specified maximum level of refinement. Without loss of generality, we will now assume that the cells are "squared", which means that the cell sizes are the same whatever the direction. The two-dimensionality of the meshes we consider from now on can also easily be relaxed and does not alter qualitatively the following remarks.
On the right side of Figure 3.4 are displayed the boundary cells, those whose values must be communicated to neighbouring blocks to compute spatial derivatives. The user can specify the "thickness" of this layer with the parameter dixB in VAC. One can convince oneself that the square configuration is ideal to minimize the fraction of cells at the interface with other blocks, at a fixed total number of cells (see the classical optimization problems where one is asked to show that the sphere, without any privileged direction unlike this Cartesian mesh, is the minimal surface enclosing a given volume, or that the height of a can must be equal to its diameter to minimize the surface of the associated cylinder, at a fixed volume).
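The advantage of square blocks can be quantified with a small count of boundary cells (an illustrative helper, not code from VAC) :

```python
def boundary_fraction(nx, ny, ng):
    """Fraction of the cells of an nx-by-ny block lying in the
    boundary layer of thickness ng along each side."""
    interior = max(nx - 2 * ng, 0) * max(ny - 2 * ng, 0)
    return 1.0 - interior / float(nx * ny)

# Same number of cells (256), same layer thickness (ng = 2):
square = boundary_fraction(16, 16, 2)    # 1 - 144/256, about 0.44
elongated = boundary_fraction(64, 4, 2)  # every cell is a boundary cell
# The square block minimizes the fraction of cells to communicate.
```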
In this section 3.3, we consider static grids which are not meant to be refined. The number of blocks is set initially by the user-defined parameters and does not change during the simulation. The use of amr obviously changes those conclusions, but the monitoring of the communication we describe below is then not as critical, since the number of blocks per cpu and the total number of cpus are large enough to allow a slightly unbalanced working load without dramatic consequences.
The philosophy of the code is to strongly compartmentalize the computation on each block so that they can be autonomously monitored by different cpus, possibly on different nodes, without excessive communication time. Furthermore, one must keep in mind that the number of cpus selected can not be larger than the initial number of blocks (in the case of amr, where the user usually starts with a limited number of blocks, the simulation can first be performed over a short number of iterations, also called "time steps" henceforth, enough to get a large number of blocks, and then rerun using the output data files as initial state), given in two-dimensional meshes by the product of the numbers of blocks along each direction ; we will see that it is the main limitation for the strong scalability of the code in the debugging stage of development. We write the number of blocks per cpu ; to avoid underloading cpus, which seems to slow them down, we set it to 4 to guarantee that each cpu has a sufficient number of cells to work on. To ensure a good load balancing, we also want each cpu to work on the same number of blocks, so the number of cpus must divide the total number of blocks. Besides, to guarantee that the "squared cells" assumption remains true at the level of the group of blocks handled by each cpu (see the grey shaded group of blocks on Figure 3.8 for example), we require the number of blocks per cpu to be a squared quantity (1, 4, 9, 16…). To maximize the possibilities and make the discussion easier to follow, we work with powers of 2 from now on ; the following results can easily be extended to any sizes. The communication between cpus is monitored within VAC thanks to the subroutines of the OpenMPI library.
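The constraints listed above (the number of cpus divides the total number of blocks, and each cpu holds a squared number of blocks of at least 4) can be collected in a small validity check (a hypothetical helper mirroring the discussion, not VAC internals) :

```python
def valid_partition(n_blocks, n_cpu, min_blocks_per_cpu=4):
    """Return True if n_cpu satisfies the load-balancing constraints:
    n_cpu <= n_blocks, n_cpu divides n_blocks, each cpu gets at least
    min_blocks_per_cpu blocks, and that number is a perfect square
    (so the group of blocks per cpu can stay 'squared')."""
    if n_cpu > n_blocks or n_blocks % n_cpu != 0:
        return False
    per_cpu = n_blocks // n_cpu
    if per_cpu < min_blocks_per_cpu:
        return False
    root = int(round(per_cpu ** 0.5))
    return root * root == per_cpu
```

For instance, 64 blocks can be spread over 16 cpus (4 blocks each, a square), but not over 32 cpus (only 2 blocks each) nor over 12 (not a divisor of 64).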
3.3.3 Computation speed and profiling
We define the computation speed per cpu as :
where the numerator involves the number of operations realized per iteration (performed at the scale of the Arithmetic Logic Units, a subscale within the cpu) and the total number of time iterations required to complete the simulation (difficult to evaluate a priori if a lapse of physical time, rather than a number of steps, has been specified, since the time step can change during the simulation), and the denominator is the total elapsed real time in the user frame. VAC offers the possibility to evaluate this speed at each time step. From our experience, it can vary a lot for a given numerical setup (and a given number of blocks and cells per cpu), between local university clusters and national ones. On top of this, it can also vary by a factor of up to 20 from one timestep to another during a given simulation. Memory leaks, compilation options and static/dynamic library linking can, among others, be responsible for it. It can also come from the code itself, which must then be monitored by using a profiler such as Vampir in association with VampirTrace, its runtime library (in parallel computing, caveats in the algorithms can not be traced by simple print statements due to the intrinsically non-linear work flow ; the very step of writing requires additional precautions in a parallel context). The use of a profiler is mandatory, once an anomalously slow computing speed has been spotted, to look for the culprit within the code : an error estimate used to refine and coarsen the grid where needed, the correction of excessively low densities or pressures (first, it is a physical fallacy to use the hydrodynamical equations, based on the continuous medium assumption, in an environment where the mean distance between particles overcomes the characteristic size or wavelength of the phenomena we are interested in ; second, large Mach numbers, above a few 10, enlarge the relative numerical error made when computing the pressure, since we get it from the difference between the total and the kinetic energies, which become very similar ; large density contrasts within the simulation space can also lead to artificially high velocities, because of errors at the level of the solver which works with conservative variables, and thus to excessively small timesteps due to the cfl condition ; those points are the main reasons to set a floor density and pressure), a data file being written, a data file being converted into a visualization file, etc.
We give the reader a few elements of scalability theory on the basis of (3.29). Scalability is an essential feature for a high performance computing code, which certifies its capacity to run efficiently on a large number of cpus. Two kinds of scalability can be defined :
Strong scalability : a code is said to be strongly scalable if the computation speed does not decrease excessively as the number of cpus rises, all other things being kept equal (in particular the number of grid cells, which plays a role in the work load per iteration, must remain the same). It is absolutely necessary for a code to be scalable up to an intermediate number of cpus (e.g. 16 to 32) to be able to debug the code upstream and work out numerical caveats before running it on major facilities and on many cpus. Indeed, the debugging step, always performed on a small number of cpus, requires in priority short elapsed times to appreciate the reactions of the code to modifications, which drives the user into using a low resolution. To further decrease the elapsed time while debugging, the user then wants to rise up to 16 or 32 cpus for instance.
Weak scalability : in practice, the simulation space can not be subdivided into an infinitely large number of blocks (for codes compatible with multithreading and gpu computing, it is theoretically possible to have more processing units than blocks but still, the quick rise in communication time would make the computation speed drop), each being monitored by exactly one cpu as explained in 3.3.2. To avoid the caveats which appear when a cpu is underused or when communication becomes preponderant, one can monitor the evolution of the computation speed as the total number of grid cells rises in the same proportion as the number of cpus. Then, the work load per cpu remains approximately the same. A code is said to be weakly scalable if the total elapsed time remains fairly constant as the resolution and the number of cpus rise.
Figure 3.6 can be used to illustrate those notions. As one moves along a horizontal line from left to right, the total number of cells remains the same while the number of cpus rises : we evaluate the strong scalability of the code. The weak scalability corresponds to the bottom-left to upper-right diagonals, along which the numbers of cpus and cells are both multiplied by 2 at each step.
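Strong scalability can be confronted with the textbook Amdahl's law, which bounds the achievable speedup when a fraction of the work is serial (a generic model, not a measurement of VAC) :

```python
def amdahl_speedup(n_cpu, serial_fraction):
    """Amdahl's law: speedup achievable on n_cpu processors when a
    fraction serial_fraction of the work can not be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cpu)

# With an illustrative 5% of serial work (e.g. communication and I/O):
s16 = amdahl_speedup(16, 0.05)        # about 9.1, already far from 16
s_many = amdahl_speedup(10**6, 0.05)  # bounded by 1/0.05 = 20
```

This is why, all other things being equal, throwing more cpus at a simulation eventually stops paying off, as observed on the horizontal lines of Figure 3.6.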
Finally, notice that the core operations to perform when a job is run on a cluster can be separated into two main categories :
the algorithmic ones, depending on the numerical scheme and its order for instance.
the communication steps to transfer the values found in the boundary cells to the neighbouring blocks.
The shrinking of the latter is the topic of the last section of this Chapter, while the former can be empirically minimized but obeys mathematical constraints which can not always be bypassed, such as the cfl condition. If communication between nodes is seamlessly fast on modern servers (with matchless InfiniBand technology, a computer-networking communications standard which can transfer up to 100 Gbit/s, on the best clusters), it is of critical importance on local clusters. It is the user's responsibility to solve an optimization problem when the numbers of blocks and of cpus are chosen for a given grid, so as to minimize the communication time : a larger number of cpus, all other things being equal, does not necessarily result in a lower total computing time. It is also the only way to isolate the errors internal to the code from external issues.