
# Dynamical Imaging with Interferometry

Michael D. Johnson¹, Katherine L. Bouman¹·², Lindy Blackburn¹, Andrew A. Chael¹, Julian Rosen, Hotaka Shiokawa¹, Freek Roelofs³, Kazunori Akiyama⁴, Vincent L. Fish⁴, and Sheperd S. Doeleman¹

¹ Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA
² Massachusetts Institute of Technology, Computer Science and Artificial Intelligence Laboratory, 32 Vassar Street, Cambridge, MA 02139, USA
³ Department of Astrophysics, Institute for Mathematics, Astrophysics and Particle Physics, Radboud University, P.O. Box 9010, 6500 GL Nijmegen, The Netherlands
⁴ Massachusetts Institute of Technology, Haystack Observatory, Route 40, Westford, MA 01886, USA
###### Abstract

By linking widely separated radio dishes, the technique of very long baseline interferometry (VLBI) can greatly enhance angular resolution in radio astronomy. However, at any given moment, a VLBI array only sparsely samples the information necessary to form an image. Conventional imaging techniques partially overcome this limitation by making the assumption that the observed cosmic source structure does not evolve over the duration of an observation, which enables VLBI networks to accumulate information as the Earth rotates and changes the projected array geometry. Although this assumption is appropriate for nearly all VLBI, it is almost certainly violated for submillimeter observations of the Galactic Center supermassive black hole, Sagittarius A* (Sgr A*), which has a gravitational timescale ($GM/c^3$) of only ∼20 seconds and exhibits intra-hour variability. To address this challenge, we develop several techniques to reconstruct dynamical images (“movies”) from interferometric data. Our techniques are applicable to both single-epoch and multi-epoch variability studies, and they are suitable for exploring many different physical processes including flaring regions, stable images with small time-dependent perturbations, steady accretion dynamics, or kinematics of relativistic jets. Moreover, dynamical imaging can be used to estimate time-averaged images from time-variable data, eliminating many spurious image artifacts that arise when using standard imaging methods. We demonstrate the effectiveness of our techniques using synthetic observations of simulated black hole systems and 7 mm Very Long Baseline Array observations of M87, and we show that dynamical imaging is feasible for Event Horizon Telescope observations of Sgr A*.

accretion, accretion disks – black hole physics – Galaxy: center – techniques: high angular resolution – techniques: interferometric
Accepted for publication in ApJ, October 31, 2017

## 1. Introduction

Very long baseline interferometry (VLBI) provides exceptional angular resolution but only sparsely samples the Fourier components of an image. A powerful technique to enhance interferometric imaging utilizes the Earth’s rotation – as the Earth rotates, each baseline connecting an antenna pair tracks through, and samples, a range of image Fourier components (see, e.g., Ryle, 1962; Kellermann & Moran, 2001; Thompson et al., 2017). In its conventional implementation, Earth rotation synthesis imaging assumes that the source being imaged is static over the observing duration (typically hours). This assumption is reasonable for nearly all astrophysical sources of interest, although a few sources have shown detectable structural changes within a single observation (e.g., Reid et al., 2014), most commonly through rapid swings of polarization angle (e.g., Gabuzda et al., 2000).

One notable case for which the static source assumption is likely to fail is the Galactic Center supermassive black hole, Sagittarius A* (Sgr A*). Because Sgr A* has a mass of approximately $4 \times 10^6\,M_\odot$ (Ghez et al., 2008; Gillessen et al., 2009), its gravitational timescale $GM/c^3$ is only ∼20 seconds, and its innermost stable circular prograde orbits have periods of only minutes to tens of minutes, depending on the black hole spin (Bardeen et al., 1972). In terms of observed variability, Sgr A* regularly flares on hour timescales (e.g., Marrone et al., 2008; Yusef-Zadeh et al., 2009; Brinkerink et al., 2015), and its polarization shows intense variations on similar timescales (e.g., Marrone et al., 2006; Eckart et al., 2006; Zamaninasab et al., 2010; Johnson et al., 2015b).

Until recently, limitations from optical depth and interstellar scattering have prevented studies of rapid structural variability of Sgr A* using VLBI (e.g., Bower et al., 2006; Lu et al., 2011). However, the advent of 1.3-mm VLBI with the Event Horizon Telescope (EHT) will soon permit imaging Sgr A* on spatial scales for which intrinsic variability may be significant (Doeleman et al., 2009a). Pronounced variability with accompanying structural change has already been seen in the polarization of Sgr A* with the EHT (Johnson et al., 2015b), although the total-intensity structure of Sgr A* has remained comparatively stable (Doeleman et al., 2008; Fish et al., 2011, 2016b). In addition to these observations and the short characteristic timescales of Sgr A*, numerical simulations suggest that conventional VLBI imaging techniques will be inapplicable for EHT observations of Sgr A* (see Figure 1 and, e.g., Broderick & Loeb, 2006; Doeleman et al., 2009a; Dexter et al., 2010; Johnson et al., 2015a; Lu et al., 2016; Medeiros et al., 2016; Kim et al., 2016; Medeiros et al., 2017; Gold et al., 2017; Roelofs et al., 2017).

Nevertheless, interferometry provides capabilities to study rapidly varying structures. For example, using simulated observations of a “hot spot” orbiting Sgr A* (Broderick & Loeb, 2005, 2006), Doeleman et al. (2009b) and Fish et al. (2009) demonstrated that robust VLBI observables can sensitively detect periodicities associated with these hot spots. More generally, Johnson et al. (2014) showed that polarimetric VLBI enables microarcsecond astrometry of compact flaring structures, even for faint, non-periodic flares. Even conventional Earth rotation synthesis utilizes time-variable Fourier sampling to enhance imaging, and Johnson et al. (2015a) argued that intrinsic variability of a source can be exploited in the same way if the source variability can be modeled (see, e.g., Sault et al., 1997). As Figure 1 shows, while intrinsic variability of Sgr A* may readily break the static-source assumption of conventional imaging, it also provides a rich source of information about the underlying dynamics.

In this paper, we develop techniques to reconstruct dynamical images (i.e., movies) from interferometric data. By accommodating intrinsic variability in the imaging procedure, we can study the dynamical activity of a source while avoiding spurious image features from the static-source assumption of conventional imaging algorithms. In a related approach, Lu et al. (2016) have recently developed a prescription for scaling, averaging, and smoothing interferometric visibilities; the processed visibilities can then be imaged using standard VLBI imaging techniques. (This prescription is motivated by the linearity of the Fourier transform: complex visibilities of the time-averaged image are equal to time-averaged visibilities of a variable image. See Shiokawa et al. (2017) for generalized time-domain filtering of images.) They show that the resulting images are good approximations of the time-averaged image, especially when data from multiple observing epochs can be combined. Our focus is instead on reconstructing dynamical images of the time-variable source, while obtaining reliable approximations of the time-averaged image as a by-product.

Our work is a generalization of the standard regularized minimization approach to VLBI imaging, which includes approaches such as the maximum entropy method (MEM; see, e.g., Narayan & Nityananda, 1986) and many other regularization functions. This approach, while not a strictly probabilistic model, can be motivated through a Bayesian framework wherein the minimization corresponds to maximizing the log posterior probability of a reconstructed image. In a separate paper, we explore an alternative approach to dynamical imaging via a modified Hidden Markov Model with a multivariate Gaussian image prior, and we derive closed-form expressions for both the maximum a posteriori image and its uncertainties (Bouman et al., 2017).

We begin, in §2, by reviewing the standard framework and procedure for VLBI imaging through regularized minimization, and we then generalize this framework to accommodate dynamical imaging. Next, in §3, we develop three regularizers that can be used for dynamical imaging for a variety of scenarios. In §4, we discuss using dynamical imaging for temporal interpolation. In §5, we show example results using simulated data and Very Long Baseline Array (VLBA) observations of M87, and in §6 we summarize our main results and conclusions.

## 2. Fundamentals of Interferometric Imaging

### 2.1. Interferometric Visibilities

Each baseline joining two sites in an interferometer samples complex visibilities. By the van Cittert-Zernike theorem, these visibilities $V(\mathbf{u})$ are related to the brightness distribution on the sky $I(\mathbf{x})$ via a Fourier transform (Thompson et al., 2017):

$$V(\mathbf{u}) = \int d^2\mathbf{x}\; I(\mathbf{x})\, e^{-2\pi i \mathbf{u} \cdot \mathbf{x}}, \tag{1}$$

where $\mathbf{x}$ is an angular coordinate on the sky, in radians, and $\mathbf{u}$ is the dimensionless baseline vector, in wavelengths, projected orthogonal to the line of sight.
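To make Eq. 1 concrete, here is a minimal numpy sketch (our own illustrative code, not from the paper) that evaluates the visibility integral as a direct sum over image pixels; the function name and conventions are assumptions:

```python
import numpy as np

def sample_visibilities(image, fov, uv):
    """Evaluate Eq. 1 for a pixelated image by direct summation.

    image : (N, N) array of flux density per pixel (Jy)
    fov   : field of view in radians
    uv    : (M, 2) array of baseline vectors (u, v) in wavelengths
    Returns an (M,) array of complex visibilities.
    """
    n = image.shape[0]
    # pixel angular coordinates (radians), centered on the image
    coords = (np.arange(n) - n // 2) * (fov / n)
    xx, yy = np.meshgrid(coords, coords)
    vis = np.empty(len(uv), dtype=complex)
    for k, (u, v) in enumerate(uv):
        # direct (non-FFT) evaluation of the Fourier kernel, for clarity
        vis[k] = np.sum(image * np.exp(-2j * np.pi * (u * xx + v * yy)))
    return vis
```

A zero baseline returns the total flux density, and a centered point source yields constant visibility on all baselines; production imaging codes would use FFTs or non-uniform FFTs rather than this $O(MN^2)$ sum.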

Interferometry uses a set of measured visibilities to estimate the unknown sky image $I(\mathbf{x})$, as we will discuss in §2.2. However, when the image is also a function of time, the sampled visibilities at a particular time only represent the corresponding, instantaneous image. In this case, a series of images can be reconstructed if each utilizes only its simultaneous “snapshot” visibility coverage. With $N$ participating sites with mutual visibility of the source, there are at most $N(N-1)/2$ visibilities in the snapshot coverage, severely limiting the imaging capabilities when $N$ is small (see, e.g., Figure 2).

### 2.2. Interferometric Imaging via Regularized Minimization

We will now review the standard prescription for VLBI imaging via regularized minimization. This prescription encompasses many common approaches to VLBI imaging, such as the maximum entropy method (MEM; see, e.g., Frieden, 1972; Cornwell & Evans, 1985; Narayan & Nityananda, 1986) and many variants (see, e.g., Thiébaut, 2013; Honma et al., 2014; Lu et al., 2014; Bouman et al., 2016; Chael et al., 2016; Fish et al., 2016a; Akiyama et al., 2017a, b) but does not describe iterative deconvolution approaches such as CLEAN (Högbom, 1974). The flexibility of the regularized minimization framework makes it ideal for sparse and heterogeneous arrays, such as the EHT, and also allows extensions to include, e.g., mitigating the image distortions caused by interstellar scattering (Johnson, 2016).

To simplify our presentation, we will represent reconstructed images as square $N \times N$ arrays, giving flux density per pixel. We denote a sequence of images by $\{I_j\}$, where $j$ indexes the time for different frames. In the following sections, we will generally treat each image as a vector of length $N^2$ rather than as an $N \times N$ matrix. Operations such as the Fourier transform relating images and interferometric visibilities, blurring via convolution, and discrete gradients are linear and can therefore be represented as matrix operators that act on these one-dimensional image vectors (of course, the elements of these operators depend on the two-dimensional nature of the images).

Approaches such as MEM estimate the unknown source image by numerically minimizing an objective function $J$. $J$ contains terms that express whether an image is consistent with the input VLBI data (a chi-squared term) and also contains terms that favor certain image attributes (such as smoothness or positivity, through an entropy or other regularization term). The objective function then takes the form

$$J = \chi^2(I, d) - \alpha_S\, S(I). \tag{2}$$

In this expression, $S(I)$ denotes the regularization function for the imaging (e.g., the entropy $S(I) = -\sum_i I_i \ln I_i$ is commonly used for MEM), and $\chi^2(I, d)$ represents a chi-squared for whatever data products are used as part of the imaging. $\alpha_S$ is a “hyperparameter” that controls the relative weighting of the entropy and data terms. The hyperparameter can be adjusted manually or automatically to yield the expected $\chi^2$ for a satisfactory image (e.g., Cornwell & Evans, 1985), or it can be estimated via cross validation, wherein the data are divided into training and testing sets and the hyperparameters are chosen so that images reconstructed using the training set are compatible with the measurements and errors of the testing set (see Akiyama et al., 2017a). From a probabilistic perspective, the $\chi^2$ term in Eq. 2 corresponds to a log-likelihood, while the regularization term corresponds to a log prior distribution of the reconstructed image.
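The structure of Eq. 2 can be sketched in a few lines of numpy. This is our own illustrative code, not the paper's implementation; the entropy convention (relative to a uniform prior image) and the visibility-domain chi-squared are assumptions, and the pixel values must be strictly positive for the logarithm:

```python
import numpy as np

def entropy(img, prior=None):
    """Image entropy S(I) = -sum I ln(I / P), with a uniform prior P by default.
    Assumes strictly positive pixel values."""
    p = np.full_like(img, img.sum() / img.size) if prior is None else prior
    return -np.sum(img * np.log(img / p))

def chi_squared(img, fourier_matrix, vis, sigma):
    """Chi-squared between model visibilities A @ I and measured visibilities."""
    resid = fourier_matrix @ img.ravel() - vis
    return np.sum(np.abs(resid) ** 2 / sigma ** 2)

def objective(img, fourier_matrix, vis, sigma, alpha_s):
    """Eq. 2: J = chi^2(I, d) - alpha_S * S(I), to be minimized over the image."""
    return chi_squared(img, fourier_matrix, vis, sigma) - alpha_s * entropy(img)
```

When the model visibilities match the data exactly, $J$ reduces to the (negative, weighted) entropy term alone, which is what the minimization then trades against data fidelity.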

### 2.3. General Prescription for Dynamical Imaging

We now extend this framework and notation to dynamical imaging. In this case, the imaging problem is to simultaneously reconstruct $N_t$ different frames $\{I_j\}$. Each frame has an associated entropy, and we will average the frame entropies to give a single representative value. Also, the data chi-squared term must be updated so that each data point is compared with its simultaneous reconstructed image. Finally, we will add a new term $R_x$ with an associated hyperparameter $\alpha_x$ to regularize the dynamical images (we use the subscript $x$ to label different choices for this term). This additional term can enforce expected properties such as continuity from frame to frame, a stable average image, or stable motion. The objective function for dynamical imaging then takes the form

$$J = \chi^2(\{I_j\}, d) - \alpha_S \left[ \frac{1}{N_t} \sum_{j=1}^{N_t} S(I_j) \right] + \alpha_x R_x(\{I_j\}). \tag{3}$$

Note that multiple dynamical regularizers can easily be combined in this framework, and additional regularization terms could be added (e.g., to mitigate interstellar scattering; Johnson, 2016). The main purpose of this paper is to develop effective and efficient choices for the dynamical regularization terms and to test their performance on a variety of simulated data for the EHT.
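The bookkeeping of Eq. 3 — a per-frame chi-squared, an averaged entropy, and a dynamical regularizer acting on the whole frame sequence — can be sketched as follows. This is our own hedged illustration; the callable-based interface is an assumption, not the paper's API:

```python
import numpy as np

def dynamical_objective(frames, chi2_terms, entropy_fn, dyn_reg, alpha_s, alpha_x):
    """Eq. 3: J = chi^2({I_j}, d) - alpha_S * mean_j S(I_j) + alpha_x * R_x({I_j}).

    frames     : list of (N, N) image frames
    chi2_terms : list of callables, one per frame, each returning the chi-squared
                 of that frame against its simultaneous data
    entropy_fn : per-frame image entropy S(I)
    dyn_reg    : dynamical regularizer R_x acting on the whole frame sequence
    """
    chi2 = sum(c(f) for c, f in zip(chi2_terms, frames))
    avg_entropy = np.mean([entropy_fn(f) for f in frames])
    return chi2 - alpha_s * avg_entropy + alpha_x * dyn_reg(frames)
```

Additional regularization terms (e.g., for scattering mitigation) would simply add further weighted terms to the returned sum.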

### 2.4. General Considerations for Dynamical Imaging

Before developing specific strategies for dynamical imaging, it is instructive to consider how intrinsic variability can affect image reconstructions that assume a static source. Each baseline changes slowly with the Earth’s rotation, so variability of an image on much shorter timescales introduces variations in measured visibilities over small baseline displacements $\Delta \mathbf{u}$. From Eq. 1, we see that variations in the visibility over $\Delta \mathbf{u}$ require that the image flux extends over an angular scale of order $1/|\Delta \mathbf{u}|$. This mathematical uncertainty relationship arises because the variables of spatial position ($\mathbf{x}$) and spatial wavenumber ($\mathbf{u}$) are Fourier conjugates. One consequence of this property is that, for a static image, variations in the complex visibility seen over a baseline displacement $\Delta \mathbf{u}$ can be used to infer the image field of view (FOV) without requiring detailed imaging. Likewise, the image field of view determines a maximum averaging time for visibilities sampled from a static image (see §6.4 of Thompson et al., 2017). However, for a variable source interpreted in the context of a static image, rapid intrinsic variability implies the existence of spurious image structure on large scales.

We now consider some specific examples. First, suppose that variations in the visibility amplitude are seen on a timescale of $t = 5$ minutes on a long EHT baseline. In this case, Earth rotation changes the baseline by at most $|\Delta \mathbf{u}| \lesssim 2\pi (t/P_\oplus) |\mathbf{u}| \approx 0.02\, |\mathbf{u}|$, where $P_\oplus$ is the sidereal day. For a static image, these variations would then imply an image extent roughly an order of magnitude larger than the measured size of Sgr A* at 1.3 mm ($\approx 37\,\mu$as; Doeleman et al., 2008). In addition, the snapshot visibilities can be compared across the exceptionally wide bandwidths of the EHT (4 GHz in 2017, and 18 GHz of spanned bandwidth in 2018 via dual-sideband recording), which provide fractional baseline spreads $|\Delta\mathbf{u}|/|\mathbf{u}| = \Delta\nu/\nu$ of a few percent. Thus, visibilities that vary on timescales of minutes but that are stable across the full EHT bandwidth would provide firm evidence of rapid intrinsic variability.
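The arithmetic in this example can be worked through numerically. The baseline length and timescale below are assumed illustrative values (EHT-scale, not quoted from the text), and the inferred extent carries factors of order unity from the Fourier uncertainty relation:

```python
import numpy as np

T_SIDEREAL = 86164.1   # seconds in one sidereal day
baseline = 3.0e9       # |u| in wavelengths (assumed illustrative EHT-scale value)
t_var = 300.0          # observed variability timescale: 5 minutes

# Earth rotation changes a baseline by at most |du| ~ 2 pi (t / P) |u|
delta_u = 2 * np.pi * (t_var / T_SIDEREAL) * baseline

# Interpreted as static structure, variation over |du| implies an image
# extent of order 1 / (2 pi |du|) radians (up to factors of order unity)
extent_rad = 1.0 / (2 * np.pi * delta_u)
extent_uas = np.degrees(extent_rad) * 3600e6   # radians -> microarcseconds
```

With these inputs, the implied extent comes out roughly an order of magnitude larger than the measured ∼40 μas size of Sgr A*, illustrating how static-image interpretations of variable visibilities manufacture spurious large-scale structure.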

As another trivial example, no static image can describe data in which the total flux density (i.e., the zero-baseline visibility $V(0)$) is changing with time. The problems of imaging a variable source are further exacerbated with multiple sites because different baseline tracks can cross, so that the same spatial Fourier component is sampled at multiple times (see Figure 2). (For EHT observations of Sgr A*, for instance, the SPT–PV and SPT–SMT baselines are very close in u-v space but are sampled with a substantial time offset, and the ALMA–LMT and ALMA–SMT tracks intersect with a time offset of 1.4 hours; see Figure 2.)

As these examples illustrate, in some cases intrinsic variability can be robustly decoupled from extrinsic sampling variability (from a changing baseline with the Earth’s rotation) by constraining the image FOV (effectively imposing an image prior). In Lu et al. (2016), the authors use temporal filtering and normalization of measured visibilities to mitigate intrinsic variability; their chosen filter parameters effectively impose a maximal FOV. However, the strategy of post-processing visibilities has some limitations relative to an image-based approach; for instance, visibility domain smoothing with a baseline-based algorithm does not account for mismatched visibilities on crossing baseline tracks. More generally, visibility-domain averaging of robust observables such as closure phases and closure amplitudes can introduce bias in the measurements. Dynamical imaging addresses both these limitations, providing a framework in which the intrinsic variability is incorporated into the imaging model, so that measurements can be directly compared with reconstructed images without additional averaging.

## 3. Regularizers for Dynamical Imaging

We now derive three regularizers appropriate for dynamical imaging. Our motivation is to identify regularizers that reflect a range of expected properties for astrophysical cases of interest and that also are efficient to implement in a numerical minimization scheme. Our first regularizer only enforces continuity from frame-to-frame (§3.1), the second favors frames that are small perturbations from the time-averaged image (§3.2), and the third describes an image that evolves approximately as a fluid with a steady motion field (§3.3). We summarize the properties of these regularizers in §3.4.

### 3.1. Smoothly Varying Images Over Time

We first develop a generic regularizer that only seeks to enforce continuity from frame to frame in reconstructed images. Because the motion between frames is unknown and may not be constant in time, this regularizer compares the reconstructed flux density of a pixel at one time with the flux density of nearby pixels at a subsequent time. The appropriate definition of “nearby” depends on the product of the expected velocity of moving features and the frame interval (which could potentially be irregular). Because this strategy is based on enforcing continuity over short time intervals, we denote the regularizer by $R_{\Delta t}$.

Explicitly, we compute the summed difference among all adjacent images after blurring the frames, $B(I_j)$, using a circular Gaussian kernel with standard deviation $\sigma_{\Delta t}$. We will focus on two particular choices for the distance $D$ between a pair of images. First, there is the total pixel-by-pixel squared difference:

$$D_2(I, I') \equiv \left\| I - I' \right\|^2 = \sum_{m,\ell} \left( I_{m,\ell} - I'_{m,\ell} \right)^2. \tag{4}$$

A simple generalization of this regularizer is to replace the squared norm with an $\ell_p$ norm for some fixed $p$: $D_p(I, I') \equiv \| I - I' \|_p^p$.

A second option to define an image distance is the relative entropy (i.e., the Kullback-Leibler divergence):

$$D_{\rm KL}(I, I') = D(I' \| I) \equiv \sum_{m,\ell} I'_{m,\ell} \ln\!\left( \frac{I'_{m,\ell}}{I_{m,\ell}} \right). \tag{5}$$

The relative entropy is frequently used to regularize traditional VLBI imaging against a specified image prior for the reconstruction (see, e.g., Cornwell & Evans, 1985) and is also often used for multi-modal image registration (Wells et al., 1996; Viola & Wells III, 1997). Note that the relative entropy is not symmetric, $D(I' \| I) \neq D(I \| I')$, and it need not be positive unless the total flux densities of the two images are equal, $\sum_{m,\ell} I_{m,\ell} = \sum_{m,\ell} I'_{m,\ell}$. Thus, useful alternatives include computing the relative entropy with respect to the normalized images (to preserve positivity of the divergence) and symmetrized versions such as $D(I' \| I) + D(I \| I')$ or the average divergence with respect to the mean image $\bar{I} \equiv (I + I')/2$ (i.e., the Jensen-Shannon divergence).
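These divergence variants can be sketched directly from Eq. 5 (our own illustrative code; pixel values are assumed strictly positive):

```python
import numpy as np

def d_kl(i1, i2):
    """Relative entropy D(I' || I) = sum I' ln(I'/I) of Eq. 5; asymmetric.
    i1 plays the role of I, i2 the role of I'."""
    return np.sum(i2 * np.log(i2 / i1))

def d_symmetric(i1, i2):
    """Symmetrized divergence D(I'||I) + D(I||I')."""
    return d_kl(i1, i2) + d_kl(i2, i1)

def d_js(i1, i2):
    """Jensen-Shannon divergence: average divergence of each image
    from the mean image (I + I')/2."""
    m = 0.5 * (i1 + i2)
    return 0.5 * d_kl(m, i1) + 0.5 * d_kl(m, i2)
```

The Jensen-Shannon form is symmetric and non-negative even when the two images are merely equal in total flux, which is one reason such variants can be preferable as image distances.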

The dynamical regularizer then takes the form

$$R_{\Delta t}(\{I_k\}) \equiv \sum_{j=1}^{N_t - 1} D\!\left( B(I_j),\, B(I_{j+1}) \right). \tag{6}$$

This regularizer thereby penalizes changes between frames, with steeply decreasing penalty for changes on scales smaller than $\sigma_{\Delta t}$. One limitation of the regularizer is that it does not favor stable “momentum” of features between frames. In §3.3, we will discuss an alternative regularizer that favors reconstructions with smooth and stable motion between frames. In its simplest implementation, this regularization then depends on only two hyperparameters: $\alpha_{\Delta t}$ and $\sigma_{\Delta t}$ (see §2.3). However, note that $R_{\Delta t}$ is meaningful even in the limit $\sigma_{\Delta t} \to 0$ (i.e., comparing the total difference between adjacent frames with no blurring applied). This limit is appropriate when the expected motion between consecutive frames is smaller than the finest resolution of reconstructed features (comparable to the nominal array resolution).
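Eq. 6 can be sketched with scipy's Gaussian filter standing in for the blurring operator $B$ (our own illustrative code; the default distance is the squared $\ell_2$ norm of Eq. 4):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def r_delta_t(frames, sigma, distance=None):
    """Eq. 6: summed distance between Gaussian-blurred adjacent frames.

    frames   : list of (N, N) images I_1 ... I_Nt
    sigma    : blur standard deviation in pixels (sigma = 0 means no blur)
    distance : pairwise image distance; defaults to the squared l2 norm (Eq. 4)
    """
    if distance is None:
        distance = lambda a, b: np.sum((a - b) ** 2)
    blurred = [gaussian_filter(f, sigma) for f in frames]
    return sum(distance(blurred[j], blurred[j + 1])
               for j in range(len(frames) - 1))
```

Blurring before differencing is what makes the penalty fall off for displacements smaller than $\sigma_{\Delta t}$: a feature that shifts by less than the kernel width largely cancels in the blurred difference.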

This regularization is effective in an imaging framework because the gradient (with respect to changes in each pixel of each frame $I_k$) can be evaluated efficiently. For example, for the relative entropy distance function,

$$\frac{\partial R_{\Delta t}}{\partial I_k} = B\!\left( \left[ \mathbb{1} + \ln \frac{B(I_k)}{B(I_{k-1})} \right] \delta_{k>1} - \frac{B(I_{k+1})}{B(I_k)}\, \delta_{k<N_t} \right), \tag{7}$$

where $\mathbb{1}$ denotes a vector of length $N^2$ with every element equal to unity and the indicator function $\delta$ is defined to be unity when the subscripted condition is satisfied and zero otherwise. Observe that calculating the gradient via Eq. 7 requires roughly $N_t N^2$ computations, while calculating the gradient via finite differences of Eq. 6 requires roughly $(N_t N^2)^2$ computations. Thus, for typical VLBI image reconstructions, which have $N_t N^2 \gg 1$ free parameters, these analytic gradients speed up the imaging by several orders of magnitude.

Note that in Eq. 7 and throughout this paper, operations such as quotients, powers, norms, and products of image vectors are to be computed elementwise. See the Appendix for corresponding expressions for other distance metrics.
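The value of an analytic gradient can be checked numerically. As a hedged sketch (our own code, using the simpler $D_2$ distance with no blurring, i.e., $B$ = identity, rather than the relative-entropy case of Eq. 7), the analytic gradient agrees with finite differences of the regularizer:

```python
import numpy as np

def r_delta_t_l2(frames):
    """R_dt with the squared l2 distance and no blurring (B = identity)."""
    return sum(np.sum((frames[j] - frames[j + 1]) ** 2)
               for j in range(len(frames) - 1))

def grad_r_delta_t_l2(frames, k):
    """Analytic gradient of the l2 R_dt with respect to frame k (0-indexed):
    2(I_k - I_{k-1}) for k > 0, plus 2(I_k - I_{k+1}) for k < N_t - 1."""
    g = np.zeros_like(frames[k])
    if k > 0:
        g += 2 * (frames[k] - frames[k - 1])
    if k < len(frames) - 1:
        g += 2 * (frames[k] - frames[k + 1])
    return g
```

Each analytic gradient evaluation touches each pixel a constant number of times, whereas finite differencing re-evaluates the full regularizer once per pixel per frame — the scaling gap quoted above.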

### 3.2. A Stable Average Image with Small Perturbations

Our next dynamical regularizer is suitable for the case when each snapshot of the time-variable image can be described as a small perturbation from the time-averaged image. This case is applicable for a broad range of stationary processes, such as steady-state accretion or jet systems. Because this regularizer enforces snapshot images to be only small perturbations from the time-averaged image, we denote it $R_{\Delta I}$.

To proceed, we approximate the time-averaged image by the average of all the reconstructed frames: $I_{\rm avg} \equiv \frac{1}{N_t} \sum_{j=1}^{N_t} I_j$. We then define $R_{\Delta I}$ to be the summed distance between the estimated time-averaged image and each reconstructed frame:

$$R_{\Delta I}(\{I_k\}) = \sum_{j=1}^{N_t} D\!\left( I_{\rm avg},\, I_j \right). \tag{8}$$

As for $R_{\Delta t}$, a convenient property of this regularization is that the gradient is efficient to compute (see Appendix).

Note that this regularization requires only one tunable hyperparameter, $\alpha_{\Delta I}$, determining the overall strength of the regularization. An additional blurring step could be added if individual frames occasionally have flux density in regions that are otherwise empty (e.g., to accommodate flaring behavior), but the average image will tend to act like a blurring operator, so we do not expect that this step will normally be needed. Another difference between $R_{\Delta I}$ and $R_{\Delta t}$ is that $R_{\Delta I}$ is insensitive to abrupt changes between frames or even to reordering of the frames. In this respect, $R_{\Delta I}$ is analogous to entropy, which is unaffected by the placement of pixels in an image (see §2.2).
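Eq. 8 and its reordering invariance can be sketched in a few lines (our own illustrative code, using the squared $\ell_2$ distance):

```python
import numpy as np

def r_delta_i(frames):
    """Eq. 8 with the squared l2 distance: sum_j || I_avg - I_j ||^2,
    where I_avg is the mean of all reconstructed frames."""
    avg = np.mean(frames, axis=0)
    return sum(np.sum((avg - f) ** 2) for f in frames)
```

Because $I_{\rm avg}$ and the sum over frames are both permutation-invariant, shuffling the frame order leaves the penalty unchanged — the property that distinguishes $R_{\Delta I}$ from the frame-adjacency penalty $R_{\Delta t}$.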

### 3.3. Time-Variable Images with Regular Motion

Our third regularizer is motivated by the case when an image evolves according to a regular prescription for motion – i.e., a steady flow of flux density over time. In this case, the appearance at one time largely determines the appearance at nearby times. A natural example of this case is an accretion flow, and we will denote this regularization by $R_{\rm flow}$.

To proceed, we consider the image to be an evolving “fluid” with a stable flow vector field $\mathbf{v}(x, y)$. We further assume that the flux density is approximately conserved between nearby frames, so the time-variable images must approximately obey a continuity equation:

$$\frac{\partial I(x,y,t)}{\partial t} = -\nabla \cdot \left[ I(x,y,t)\, \mathbf{v}(x,y) \right] = -\left[ \mathbf{v} \cdot \nabla I + I\, \nabla \cdot \mathbf{v} \right], \tag{9}$$

where $\nabla$ denotes a two-dimensional spatial gradient operator. Hence, at a given time, the image and flow can be combined to estimate the image at a slightly later time:

$$I(x,y,t+\delta t) \approx I(x,y,t) + \delta t\, \frac{\partial I(x,y,t)}{\partial t} = I(x,y,t) - \delta t \left( \mathbf{v} \cdot \nabla I + I\, \nabla \cdot \mathbf{v} \right). \tag{10}$$

We can now use this approximate forward evolution to regularize multi-frame imaging. We consider a regularizer that is given by the summed difference between each frame and its predicted values based on linearized forward evolution of the previous frame (via Eq. 10) with a discrete spatial gradient operator replacing the continuous gradient. By only comparing adjacent frames, we relax the assumption that the flow field completely determines all forward evolution of a system from an initial state – we only seek to favor series of images that approximately respect a stable flow field over short intervals. For specificity, we will work with the $D_2$ regularization, in which case,

$$R_{\rm flow}(\{I_j\}) \approx \sum_{j=1}^{N_t-1} \left\| I_{j+1} - I_j + \mathbf{m} \cdot \nabla I_j + \left( \nabla \cdot \mathbf{m} \right) I_j \right\|^2 \equiv \sum_{j=1}^{N_t-1} \left\| I_{j+1} - \overleftrightarrow{F}_{\rm flow}\, I_j \right\|^2. \tag{11}$$

Here, we have replaced the velocity field by a dimensionless motion field $\mathbf{m} \equiv \mathbf{v}\, \delta t / \Delta\theta$, where $\Delta\theta$ is the discrete grid spacing of reconstructed images, $\delta t$ is the image spacing in time, and $\nabla$ now denotes a finite difference operator that approximates the continuous two-dimensional gradient. We have also defined the linear operator $\overleftrightarrow{F}_{\rm flow}$, and the second line is an approximation that only becomes exact in the continuous limit because identities such as the product rule do not hold exactly for the discrete gradient operator. (For example, the one-dimensional finite forward difference operator $\Delta$ satisfies $\Delta(fg) = f\,\Delta g + g\,\Delta f + \Delta f\, \Delta g$; the analogous finite backward difference operator picks up the cross term with the opposite sign.) In the present work, we keep the gradient operator general and assume smooth images with small fractional gradients, so that the product rule approximately holds. In this expression and elsewhere, images are treated as one-dimensional vectors, the two-dimensional vector flow is unwrapped to be a one-dimensional vector of 2D motions, and products of vectors (e.g., $\mathbf{m} \cdot \nabla I_j$) are to be computed by multiplying the vectors point-by-point (the Hadamard product). For the construction of this regularizer to be valid (i.e., for the linear approximation of Eq. 10 to hold), the reconstructed frames must have smooth spatial and temporal gradients; the former is enforced by the per-frame regularization terms, while the latter is enforced by the dynamical regularization. More concretely, the time resolution should be fine enough that the vectors of the motion field do not exceed the nominal VLBI beam that describes the angular resolution of the reconstructed images (analogous to the Courant-Friedrichs-Lewy condition for numerical integration of partial differential equations; Courant et al., 1967). Thus, observations with finer angular resolution require correspondingly finer temporal resolution. However, this condition does not require observations that are spaced this closely in time; additional frames can be included that do not have corresponding data constraints (see §4).
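The linearized forward step of Eq. 10 and the mismatch of Eq. 11 can be sketched with numpy's discrete gradients. This is our own hedged illustration (using `np.gradient` as the finite-difference operator and omitting the flow-smoothness regularization discussed below), not the paper's implementation:

```python
import numpy as np

def advect(img, mx, my):
    """One linearized continuity-equation step (Eq. 10), with a dimensionless
    motion field (mx, my) in pixels per frame and np.gradient supplying the
    discrete spatial derivatives."""
    gy, gx = np.gradient(img)                      # d/dy (rows), d/dx (cols)
    div_m = np.gradient(mx, axis=1) + np.gradient(my, axis=0)
    return img - (mx * gx + my * gy) - div_m * img

def r_flow(frames, mx, my):
    """Eq. 11: summed squared mismatch between each frame and the linearized
    forward evolution of the previous frame under the motion field."""
    return sum(np.sum((frames[j + 1] - advect(frames[j], mx, my)) ** 2)
               for j in range(len(frames) - 1))
```

For a smooth feature translating by about one pixel per frame, the correct uniform motion field yields a substantially smaller penalty than assuming no motion, which is exactly the gradient signal a minimizer uses to estimate the flow.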

The major difference between the flow regularizer and our previous dynamical regularizers is that, in addition to estimating all the image frames, this reconstruction strategy must simultaneously estimate the flow vector field $\mathbf{m}$. In the Appendix, we provide analytic expressions for the gradients of $R_{\rm flow}$ with respect to the images and flow, enabling efficient estimation of both in a non-linear minimization framework.

The motion field $\mathbf{m}$ can also be regularized (just as the individual frames are regularized) to ensure that it varies smoothly over the image. Because $\mathbf{m}$ is a vector field, one could potentially use the same regularizations as have been proposed for polarimetric synthesis imaging (see, e.g., Chael et al., 2016; Akiyama et al., 2017b). However, most of these choices are insensitive to the polarization direction, with the exception of total variation. We will use a closely related choice, the total squared gradient of the motion field: $R_{\nabla \mathbf{m}} \equiv \| \nabla m_x \|^2 + \| \nabla m_y \|^2$. This regularizer is also commonly used in studies of optical flow, which reconstruct a flow field from a series of images rather than from sparse Fourier sampling (Horn & Schunck, 1981). (Another difference between traditional studies of optical flow and our approach is that the former assume an incompressible flow: $\nabla \cdot \mathbf{v} = 0$.) The gradient of $R_{\nabla \mathbf{m}}$ with respect to the flow field is simply $-2 \nabla^2 \mathbf{m}$. Also, $R_{\nabla \mathbf{m}}$ has an associated hyperparameter to govern its overall weight.

Alternatively, in some cases the flow may be known or may be adequately modeled with a small number of parameters (see, e.g., Bouman et al., 2017). In these cases, dynamical imaging is plausible for much sparser arrays. At the other extreme, with sufficient data, the assumption of a stationary flow can be relaxed and the dynamical imaging could allow a smoothly evolving flow field over time.

### 3.4. Summary and Asymptotic Properties of Dynamical Regularizers

We have developed three regularizers that are suitable for dynamical imaging: $R_{\Delta t}$, $R_{\Delta I}$, and $R_{\rm flow}$. $R_{\Delta t}$ favors continuity from frame-to-frame within a spatial displacement tolerance determined by $\sigma_{\Delta t}$, $R_{\Delta I}$ favors frames that are small perturbations from the time-averaged image, and $R_{\rm flow}$ favors frames that evolve approximately according to a time-independent flow vector field $\mathbf{m}$ (see Figure 3 for a schematic comparison of these strategies). Each regularizer requires one associated hyperparameter, $\alpha_x$, that assigns overall weight to the dynamical regularization. $R_{\Delta t}$ also requires one parameter, $\sigma_{\Delta t}$, describing the expected angular motion of features from frame-to-frame, and $R_{\rm flow}$ requires a hyperparameter to regularize the estimated flow field. $R_{\Delta I}$ requires no additional hyperparameters. These hyperparameters can be fixed according to a priori expectations, they can be treated as Lagrange multipliers and varied to give properties such as a final reduced chi-squared of unity, or they can be estimated using cross validation (Akiyama et al., 2017a). As $\alpha_x \to 0$, each regularization strategy is equivalent to independently imaging a series of frames. Taking $\alpha_{\Delta t} \to \infty$ or $\alpha_{\Delta I} \to \infty$ would enforce a static reconstructed image, equivalent to conventional imaging, although this is not necessarily true for $\alpha_{\rm flow} \to \infty$.

It is also possible to normalize each regularizer such that its value is unaffected by the choice of temporal resolution ($\delta t$; the frame spacing) and spatial resolution ($\Delta\theta$; the pixel linear dimension) of the reconstruction. As these become arbitrarily small, the dynamical reconstruction approaches a continuous representation in time and space. In particular, the limit $\delta t \to 0$ is relevant when including interpolating frames (see §4), which enable arbitrary temporal resolution. The required normalization factor depends on the chosen distance metric ($D_2$, the Kullback-Leibler divergence, or one of its symmetrized variants). After applying this factor, the associated hyperparameters will be unaffected by the choice of temporal or angular resolution, assuming that the motion in each frame is well resolved.

## 4. Dynamical Imaging and Interpolation

Dynamical imaging also serves as a framework for temporal interpolation of images. Namely, image frames can be added even at times with no corresponding data. Without dynamical regularization, these additional frames would default to the image that maximizes the entropy (typically an image with constant brightness, possibly uniformly zero). With dynamical regularization, however, the added frames will respect the chosen regularizer. For example, when using $R_{\Delta t}$, the additional frames will converge toward images that maintain continuity of features with the nearest data-constrained frames. For $R_{\Delta I}$, frames without data will default to the estimated time-averaged image. For $R_{\rm flow}$, unconstrained frames will interpolate according to the derived flow map. In each case, frames with missing data can inherit partial information from other times. Moreover, for $R_{\rm flow}$, frames can be intentionally spaced more finely than the sampling time to ensure that the linear approximation of Eq. 3.3 is accurate. All of these strategies will produce different results than a straightforward linear interpolation between images, as is commonly used to visualize multi-epoch VLBI studies (e.g., Lister et al., 2016).

However, we have found that the interpolated frames can sometimes have a different appearance than data-constrained frames. For instance, when using $R_{\Delta t}$ regularization with the $D_2$ distance, the interpolated frames are “blurred out” relative to the data-constrained frames. This blurring is unsurprising: it helps to minimize the mean-squared difference between adjacent frames as elements of flux move in time. Consequently, the interpolated frames may achieve continuity of features but can have temporal discontinuities in the total flux density or image entropy.

To mitigate the artifacts in interpolated frames, we can directly enforce continuity of quantities such as flux density and entropy. To do so, we add corresponding terms to the objective function (Eq. 3). For example, to enforce continuity of image entropy, one can add,

$$R_{\Delta S} \equiv \sum_{j=1}^{N_t - 1} \left[ S(I_j) - S(I_{j+1}) \right]^2, \qquad (12)$$

weighted by an associated hyperparameter. This hyperparameter can be adjusted so that the term permits continuous variations of the entropy among frames without forcing the entropy of each frame to be equal. The gradient of this term is straightforward to compute in terms of the single-image entropy gradient:

$$\frac{\partial R_{\Delta S}}{\partial I_k} = 2 \Big( \left[ S(I_k) - S(I_{k-1}) \right] \delta_{k>1} + \left[ S(I_k) - S(I_{k+1}) \right] \delta_{k<N_t} \Big) \frac{\partial S(I_k)}{\partial I_k}. \qquad (13)$$
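The chain-rule structure of this gradient can be checked numerically. The sketch below uses a simple entropy $S(I) = -\sum_i I_i \ln I_i$ (an assumed form for illustration; the framework admits other entropy functions) and compares the analytic gradient against central finite differences.

```python
import numpy as np

def S(I):
    """Simple image entropy with a flat prior (illustrative choice)."""
    return -np.sum(I * np.log(I))

def r_delta_S(frames):
    """Entropy-continuity regularizer of Eq. 12."""
    s = np.array([S(I) for I in frames])
    return np.sum((s[:-1] - s[1:]) ** 2)

def grad_r_delta_S(frames):
    """Analytic gradient of Eq. 13, using dS/dI = -(1 + ln I)."""
    Nt = len(frames)
    s = np.array([S(I) for I in frames])
    g = np.zeros_like(frames)
    for k in range(Nt):
        c = 0.0
        if k > 0:
            c += 2 * (s[k] - s[k - 1])
        if k < Nt - 1:
            c += 2 * (s[k] - s[k + 1])
        g[k] = c * (-(1 + np.log(frames[k])))  # chain rule factor dS/dI_k
    return g

# Central finite-difference check of the analytic gradient.
rng = np.random.default_rng(0)
frames = rng.uniform(0.5, 1.5, size=(4, 6))  # 4 toy frames of 6 pixels
eps = 1e-6
num = np.zeros_like(frames)
for k in range(4):
    for p in range(6):
        fp = frames.copy(); fp[k, p] += eps
        fm = frames.copy(); fm[k, p] -= eps
        num[k, p] = (r_delta_S(fp) - r_delta_S(fm)) / (2 * eps)
assert np.allclose(num, grad_r_delta_S(frames), rtol=1e-4, atol=1e-6)
```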

Likewise, to make the total flux continuous from frame to frame, we can add

$$R_{\Delta F} \equiv \sum_{j=1}^{N_t - 1} \left[ F(I_j) - F(I_{j+1}) \right]^2, \qquad (14)$$

weighted by an associated hyperparameter, where $F(I)$ denotes the total flux density of an image. The gradient is again straightforward to compute:

$$\frac{\partial R_{\Delta F}}{\partial I_k} = 2 \Big( \left[ F(I_k) - F(I_{k-1}) \right] \delta_{k>1} + \left[ F(I_k) - F(I_{k+1}) \right] \delta_{k<N_t} \Big) \, \mathbb{1}, \qquad (15)$$

where $\mathbb{1}$ is a vector with each element equal to unity and length equal to the number of pixels in the image $I_k$.

## 5. Examples of Dynamical Imaging

We will now show a few representative examples of dynamical imaging using simulated VLBI observations, and we will discuss general trends that we have identified. We conclude this section with an example showing frames from dynamical imaging of M87.

### 5.1. Implementation and Procedure

We implemented the dynamical regularizers developed in §3 as an extension to the eht-imaging Python library, which was originally developed for polarimetric VLBI imaging (Chael et al., 2016). This library provides a modular and flexible imaging framework that can utilize a variety of imaging regularizers (e.g., entropy, total variation, and the $\ell_1$ norm) and arbitrary combinations of data constraints (e.g., complex visibilities, the bispectrum, or closure quantities). We also used this library to generate synthetic data. Except when noted otherwise, we chose observing parameters that correspond to the 2017 EHT: an observing bandwidth of 4 GHz and site system equivalent flux densities given in Table 1. For simplicity, our simulated observations of Sgr A account for sensitivity losses from the blurring effects of interstellar scattering (Fish et al., 2014) but not for irregular, refractive effects (Johnson & Gwinn, 2015).

The fundamental interferometric data product is the sampled complex visibility (Eq. 1). However, because of a large stochastic contribution from the atmosphere to each site’s phase, high-frequency VLBI arrays can typically only measure quantities such as closure phase robustly (Thompson et al., 2017). Imaging algorithms can then work with these robust data products directly (see, e.g., Buscher, 1994; Baron et al., 2010; Chael et al., 2016; Bouman et al., 2016; Akiyama et al., 2017a). Nevertheless, in the near future, improved techniques such as simultaneous subarrayed observations of calibrators (see Broderick et al., 2011) may provide absolute phase information. We also expect dynamical imaging to be applicable at lower frequencies, where phase referencing is routine; e.g., observations with the VLBA at wavelengths of 7 mm and longer. Thus, we will show results both when using complex visibilities and when using only visibility amplitudes and closure phases.

To minimize the objective function given by Eq. 3 (i.e., to perform dynamical imaging), we used the non-linear minimization package optimize.minimize of SciPy (Jones et al., 2001–). We used the Limited-Memory BFGS algorithm (L-BFGS) (Byrd et al., 1995) except when memory requirements to store the partial Hessian became prohibitive (generally when imaging 100 frames simultaneously), in which case we instead used the conjugate gradient algorithm implemented in SciPy (which does not compute the Hessian).
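A stripped-down version of this setup is sketched below, with a toy quadratic stand-in for the visibility chi-squared term; the data term, array sizes, and hyperparameter value are all placeholders rather than the paper's actual implementation.

```python
import numpy as np
from scipy.optimize import minimize

nt, npix = 8, 16  # toy numbers of frames and pixels per frame
target = np.random.default_rng(1).uniform(0.1, 1.0, (nt, npix))

def objective(x, alpha=1.0):
    """Toy objective: stand-in data term plus an R_Delta-t-like penalty."""
    frames = x.reshape(nt, npix)
    chi2 = np.sum((frames - target) ** 2)           # placeholder data term
    r_dt = np.sum((frames[1:] - frames[:-1]) ** 2)  # dynamical regularizer
    return chi2 + alpha * r_dt

# Flattened movie as the optimization variable, as in the text.
x0 = np.full(nt * npix, 0.5)
res = minimize(objective, x0, method="L-BFGS-B",
               bounds=[(0, None)] * (nt * npix))    # enforce image positivity
movie = res.x.reshape(nt, npix)
```

Bounds enforce image positivity; for larger problems, supplying the analytic gradients derived in Appendix A via the `jac` argument avoids the cost of numerical differentiation.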

Similar to conventional VLBI imaging, convergence to the minimum of the objective function for dynamical imaging can be challenging because of the extremely high-dimensional parameter space surveyed. Convergence is especially challenging when using only robust VLBI observables, such as closure phases, rather than complex visibilities, because the relative image centroid among frames is then only constrained by the dynamical regularization. Consequently, we used a number of strategies to assist convergence, most involving multiple iterations of minimization with modified initial values. One particularly effective strategy for avoiding local minima, following Chael et al. (2016), is to repeatedly image the data, re-initializing the minimization each time to the previous reconstructed images convolved with the nominal VLBI array resolution. In cases with many high-quality data points for each snapshot, we iterated between imaging all frames jointly and allowing convergence to proceed on individual frames independently. For $R_{\Delta I}$, we repeatedly re-initialized all frames to the current average image. For $R_{\rm flow}$, we repeatedly re-initialized the flow to be uniformly zero. In all cases, we determined the dynamical imaging hyperparameters by making them as large as possible while still achieving a final reduced $\chi^2$ near unity.

One modification to the prescription outlined above that we did find to be effective for larger arrays, such as the VLBA, was to apply the dynamical regularizers to the logarithm of the reconstructed images rather than to the images themselves when using the $D_2$ or $D_p$ distance functions (here, we assume image positivity). This change helps the dynamical regularization improve time-variable imaging of faint image features and significantly improved reconstructions with high dynamic range, although it is unnecessary when using the relative entropy distance function $D_{\rm KL}$.

To assess the fidelity of reconstructed images when the true (model) image is known, we utilize the (normalized) mean squared error (MSE):

$$\mathrm{MSE} \equiv \left[ \sum_{\ell,m} \left( I_{\ell,m} - I'_{\ell,m} \right)^2 \right] \Bigg/ \left[ \sum_{\ell,m} I_{\ell,m}^2 \right]. \qquad (16)$$

Here, $I$ is the model image and $I'$ is the reconstructed image. In cases where closure phases are used for image reconstruction, the image centroid is unconstrained, and we report the MSE minimized over all relative shifts between the two images. The precise value of the MSE should not be taken too literally because it is sensitive to sharp features in the original image that the array cannot resolve (see also Gomes et al., 2017). Nevertheless, the MSE does tend to provide a crude characterization of reconstructed image quality.
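The normalized MSE of Eq. 16, including the minimization over image shifts, can be sketched as follows (here restricted to cyclic shifts of a square pixel grid for simplicity):

```python
import numpy as np

def nmse(model, recon):
    """Normalized mean squared error of Eq. 16."""
    return np.sum((model - recon) ** 2) / np.sum(model ** 2)

def nmse_min_shift(model, recon):
    """Minimum NMSE over all cyclic shifts of the reconstruction, for
    data products (e.g., closure phases) that lose absolute position."""
    best = np.inf
    for dl in range(model.shape[0]):
        for dm in range(model.shape[1]):
            shifted = np.roll(recon, (dl, dm), axis=(0, 1))
            best = min(best, nmse(model, shifted))
    return best
```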

### 5.2. Dynamical Imaging of a Steady Accretion Flow

To examine the potential capabilities of dynamical imaging with EHT data, we generated synthetic data from a face-on view of a 3D GRMHD simulation of an accretion flow onto Sgr A (b0-high from Shiokawa, 2013). Figure 4 shows image reconstructions using both snapshot imaging (see Figure 3) and dynamical imaging. These reconstructions used the $R_{\Delta I}$ regularizer with the $D_2$ distance metric and complex visibilities as the data product; we reconstructed 99 frames spanning a total duration of 14.6 hours. At times with poor u-v coverage, the snapshot images are uninformative, while frames from dynamical imaging remain close to the estimated time-averaged image. Dynamical imaging successfully identifies the time-variable regions of enhanced flux density and can also identify the flow direction, which is not apparent in the snapshot reconstructions.

To quantify the improvement of dynamical imaging relative to snapshot imaging, Figure 5 shows the MSE as a function of time for these two reconstructions. Notably, the MSE for snapshot imaging changes significantly over the observation, increasing steeply when the number of sites with mutual visibility of Sgr A drops. In contrast, the MSE for dynamical imaging is lower overall and remains steady, demonstrating increased resilience to periods of limited data.

### 5.3. Estimates of Time-Averaged Images

Another important utility of dynamical imaging is to estimate the time-averaged image over an observation. For studies of Sgr A with the EHT, the time-averaged image is of intense interest because it may reveal distinctive features of the spacetime near a black hole such as the black hole “shadow” (Bardeen et al., 1972; Luminet, 1979; Falcke et al., 2000; Takahashi, 2004; Johannsen & Psaltis, 2010). Yet, as discussed in §2.4, imaging techniques that assume a static source can be severely affected by intrinsic source variability; conventional imaging will not simply provide an estimate of the time-averaged image. Instead, the variability must be integrated into the imaging procedure, either by preprocessing the data to render it compatible with a static source assumption (Lu et al., 2016) or by modifying the imaging procedure to accommodate image variability, as we propose here.

To test this application of dynamical imaging, we used the simulated EHT data from Lu et al. (2016) for a GRMHD simulation of an accretion flow onto Sgr A. Note that, in contrast with our other examples, this dataset included the CARMA array (the CARMA observatory was shut down in 2015), sampled the images with 16 GHz of bandwidth rather than 4 GHz, and used slightly different SEFDs than those given in Table 1. Because the frame spacing is rather large in this example (3.7 minutes), we again used the $R_{\Delta I}$ regularizer with the $D_2$ distance metric.

Figure 6 compares the time-averaged estimates from dynamical imaging with the time-averaged simulated image. The estimated average image is comparable in quality to the image obtained with the scaling, averaging, and smoothing approach of Lu et al. (2016). We also found that averaging the snapshot images gives an estimated average image of comparable quality to these more sophisticated approaches, especially if periods with poor u-v coverage were downweighted or omitted. Thus, a weighted average of snapshot images, favoring times with superior u-v coverage, may also produce reliable estimates of the time-averaged image and will provide a useful comparison for these other approaches.

### 5.4. Dynamical Imaging of Flares

Another important application for dynamical imaging is to study flares of Sgr A via direct imaging. Figures 7 and 8 show example reconstructions for an orbiting “hot spot” near Sgr A (Broderick & Loeb, 2006), using simulated observations spanning only 27 minutes. For these examples, we used $R_{\Delta t}$ regularization with the symmetrized KL divergence as the distance metric. While this simulated observation is too short to build up significant baseline coverage via Earth rotation to estimate an accurate time-averaged image, the reconstructions successfully identify the motion of the hot spot, especially if additional sites at Kitt Peak and CARMA are included. Thus, in the coming years, the EHT may be able to trace rapidly evolving structures and estimate orbital rotation curves from dynamical imaging, especially if the array continues to expand.

### 5.5. Comparison of Dynamical Imaging Methods

We next compare all three dynamical imaging strategies on simulated observations of an accretion flow viewed at a nonzero inclination with respect to the black hole’s rotation axis. Apart from the viewing inclination, the simulation is identical to the one shown in Figure 4. We used 1400 simulated movie frames, corresponding to 4.3 hours of observations, starting at a GST of 23:00. To simplify the comparison between the three imaging methods and avoid discrepancies from poor convergence or image misalignment, we used full complex visibilities for dynamical imaging and the $D_2$ distance metric for each. Each reconstructed movie has 234 frames (1/6 the time resolution of the input movie).

Figure 9 compares the average image of the simulation with the averaged images from the three dynamical imaging reconstructions using the 2017 EHT configuration (for the reconstruction using $R_{\rm flow}$ regularization, the reconstructed motion field is also shown). Figure 10 performs the same comparison for reconstructions that also included sites at Kitt Peak and at the location of the CARMA array. Figure 11 shows the MSE over time for each reconstruction from both array configurations. Despite their differing assumptions about the underlying image variability, the three methods give results that are broadly consistent. With the additional EHT sites of the second example, the reconstructed flow clearly identifies the correct direction of motion, although its magnitude underestimates the time-averaged optical flow of the simulated images (Liu et al., 2009) by a factor of several. Thus, recovering precise details about the motion will likely require either more stringent dynamical imaging constraints (see, e.g., Bouman et al., 2017) or additional sites added to the EHT.

### 5.6. Dynamical Imaging of M87

As a final example, we applied dynamical imaging to a series of 14 separate 43 GHz VLBA observations of M87 taken over a span of 70 days in 2008 as part of the M87 Movie Project (Walker et al., 2016, 2017). We reconstructed a series of 24 images spaced by 3 days; each observation was associated with the nearest image in time, and the remaining images had no data constraints (see §4). We used $R_{\Delta t}$ regularization with the symmetrized Kullback-Leibler divergence as the distance metric (see §3.1). We have found that this overall strategy works well for images with high dynamic range and irregular motion, without requiring fine tuning of the imaging parameters. We also utilized iterative self-calibration, so that dynamical imaging serves as both an imaging and a calibration framework.

While detailed analysis of these results will be presented separately, Figure 12 shows four of the reconstructed frames with their corresponding static reconstructions over a short time interval (17 days). Even a moderately relativistic component would traverse only a small angular distance over this entire interval. Dynamical imaging successfully finds a series of similar images, each of which is consistent with its respective, self-calibrated data. By eliminating faint spurious features, outward motion along the jet is more readily evident in the dynamical reconstructions. Moreover, this interval includes one epoch for which the original data were adversely affected by poor weather and were not considered to be of adequate quality for inclusion in the final CLEAN dataset. Nevertheless, dynamical imaging is able to link this period of inferior data to the higher quality nearby epochs, so that the reconstructed image is not perceptibly degraded.

## 6. Summary

In summary, we have developed three regularizers that are suitable for dynamical imaging: $R_{\Delta t}$, $R_{\Delta I}$, and $R_{\rm flow}$. $R_{\Delta t}$ favors continuity from frame to frame, $R_{\Delta I}$ favors frames that are small perturbations from the time-averaged image, and $R_{\rm flow}$ favors frames that evolve approximately according to a stationary flow. For each of these regularizers, we have derived analytic gradients with respect to the unknown image parameters, so each converges quickly with modest computational resources (e.g., all the reconstructions in this paper were performed on a personal computer). Each can be used with any choice of VLBI data product (e.g., complex visibilities, the bispectrum, or visibility amplitudes and closure phases) and any choice or combination of image regularization for individual frames (e.g., entropy, the $\ell_1$ norm, or total variation).

For dynamical imaging, the most significant challenge we have encountered is suitable convergence to the optimal reconstruction, especially when using a small array and only VLBI closure quantities. We have discussed a number of strategies to assist convergence, most involving iterative re-imaging with blurring, averaging, individual frame imaging, or other modifications at each stage. In Bouman et al. (2017), we develop an alternative approach to dynamical imaging that provides an analytic expression for the reconstructed images, lessening the problem of convergence, at the expense of restrictive constraints on the dynamical imaging framework. These methods can potentially be used in sequence to allow a flexible dynamical imaging strategy with reliable convergence.

A major motivation for this work is the possibility of imaging Sgr A with the EHT. Even in its simplest implementation, dynamical imaging provides a framework to estimate the time-averaged image of Sgr A and will be significantly more sensitive than static imaging approaches if there is significant intrinsic variability (see Figure 6). Dynamical imaging can also confirm key image features such as the black hole shadow based on their temporal signatures. For example, the region surrounding the shadow is expected to exhibit enhanced high-frequency variability (Shiokawa et al., 2017). Dynamical studies of Sgr A will be crucial for estimating accretion disk inclinations, breaking a degeneracy in time-averaged images from the near symmetry orthogonal to the rotation axis (see, e.g., Broderick et al., 2011; Johnson et al., 2015b), and they will also be helpful for estimating the black hole spin. Namely, while the shape of the black hole shadow is almost independent of spin (Bardeen et al., 1972; Takahashi, 2004; Johannsen & Psaltis, 2010), orbital periods at the innermost stable circular orbit vary by nearly a factor of 10 depending on spin (Bardeen, 1973).

For EHT imaging, $R_{\Delta I}$ regularization appears especially promising because inhomogeneous data can be combined regardless of their spacing, specific observing cadence, or participating telescopes. Thus, any robust data products from multiple epochs can be merged to produce an average image that is not adversely affected by the variability, while also estimating the time-dependent perturbations. Moreover, our approach can be combined with other regularization frameworks, for instance to simultaneously mitigate the effects of interstellar scattering (Johnson, 2016). Looking forward, dynamical imaging may be a rich source of continued study of Sgr A as millimeter VLBI continues to expand, potentially even to Earth-space baselines (see, e.g., Wild et al., 2009; Smirnov et al., 2012; Kardashev et al., 2014).

Our framework is also suitable for multi-epoch VLBI imaging studies, including kinematical studies of relativistic jets (e.g., Kellermann et al., 2004; Jorstad et al., 2005; Lister et al., 2009; Walker et al., 2016; Mertens & Lobanov, 2016; Lister et al., 2016), supernovae (e.g., Bartel et al., 2000; Bietenholz et al., 2003; Bartel, 2009), and microquasars (e.g., Fomalont et al., 2001; Mioduszewski et al., 2004; Jeffrey et al., 2016). It can also be applied to multi-epoch wide-field imaging, where highly sensitive instruments such as the Square Kilometer Array (SKA) are expected to detect a combination of static and variable sources (e.g., Metzger et al., 2015; Fender et al., 2015). For cases with regular motion, dynamical imaging with $R_{\rm flow}$ can self-consistently estimate the velocity field, while cases with irregular motion should be imaged with softer regularization, such as $R_{\Delta t}$ (see Figure 12), or a non-stationary flow. A benefit of our approach is that epochs with poor u-v coverage or sensitivity can partially inherit the higher resolution of neighboring epochs; our approach assumes neither a constant VLBI beam among epochs nor a constant spacing between epochs (see, e.g., Figure 5). Our approach can also incorporate iterative self-calibration to derive a calibration solution that is compatible with smooth structural evolution among epochs. Finally, while our focus has been on dynamical imaging of the total flux density, our methods are straightforward to adapt to full-Stokes polarization, which often shows more pronounced variability than the total flux density (e.g., Gabuzda et al., 2000).

We are particularly indebted to Craig Walker for sharing his calibrated M87 data and images and for many illuminating discussions. We gratefully acknowledge helpful conversations with John Wardle, James Guillochon, and Maciek Wielgus. We also thank Alan Marscher, Svetlana Jorstad, Dan Homan, and Matt Lister for making their multi-epoch VLBI studies available to help refine and test our code. We thank Avery Broderick for providing the hot spot simulations used in this work. We thank the referee for suggesting the application of this technique to wide-field imaging studies with next generation arrays. We thank the National Science Foundation (AST-1440254, AST-1614868) and the Gordon and Betty Moore Foundation (GBMF-3561, GBMF-5278) for financial support of this work. This work was supported in part by the Black Hole Initiative at Harvard University, which is supported by a grant from the John Templeton Foundation. F.R. is supported by the ERC Synergy Grant “BlackHoleCam” (Grant 610058). K.A. is financially supported by the program of Postdoctoral Fellowships for Research Abroad at the Japan Society for the Promotion of Science (JSPS).

## Appendix A Gradients of Dynamical Regularization Terms

We will now derive analytic expressions for the gradients of the dynamical regularization terms. These gradients depend on the chosen distance function, and so we will provide representative examples.

The gradient of $R_{\Delta t}$ when using the $D_2$ distance function is given by

$$\frac{\partial R_{\Delta t}}{\partial I_k} = 2 B^2 \left( \left[ I_k - I_{k-1} \right] \delta_{k>1} + \left[ I_k - I_{k+1} \right] \delta_{k<N_t} \right), \qquad (A1)$$

where $B$ denotes a blurring operator that is applied twice, and the indicator function $\delta$ is defined to be unity when the subscripted condition is satisfied and zero otherwise. Note that $B$ could be replaced by any matrix operator (acting on an image vector), in which case $B^2$ in Eq. A1 must be replaced by $B^\intercal B$.
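The replacement of $B^2$ by $B^\intercal B$ for a general (non-symmetric) operator can be verified numerically; the sketch below checks the analytic gradient of Eq. A1 against central finite differences for a random matrix operator (a toy stand-in for a blurring kernel).

```python
import numpy as np

rng = np.random.default_rng(2)
npix, nt = 5, 4
B = rng.normal(size=(npix, npix))  # arbitrary, non-symmetric operator
frames = rng.uniform(0.5, 1.5, (nt, npix))

def r_dt(fr):
    """R_Delta-t with the D2 distance: sum_j ||B(I_{j+1} - I_j)||^2."""
    d = (fr[1:] - fr[:-1]) @ B.T
    return np.sum(d ** 2)

def grad_r_dt(fr):
    """Analytic gradient of Eq. A1, with B^2 -> B^T B for non-symmetric B."""
    g = np.zeros_like(fr)
    BtB = B.T @ B
    for k in range(nt):
        if k > 0:
            g[k] += 2 * BtB @ (fr[k] - fr[k - 1])
        if k < nt - 1:
            g[k] += 2 * BtB @ (fr[k] - fr[k + 1])
    return g

# Central finite-difference check.
eps = 1e-6
num = np.zeros_like(frames)
for k in range(nt):
    for p in range(npix):
        fp = frames.copy(); fp[k, p] += eps
        fm = frames.copy(); fm[k, p] -= eps
        num[k, p] = (r_dt(fp) - r_dt(fm)) / (2 * eps)
assert np.allclose(num, grad_r_dt(frames), rtol=1e-4, atol=1e-6)
```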

Likewise, for the $D_p$ distance function,

$$\frac{\partial R_{\Delta t}}{\partial I_k} = p B \Big( \left| B(I_k - I_{k-1}) \right|^{p-1} \mathrm{sgn}\!\left( B(I_k - I_{k-1}) \right) \delta_{k>1} + \left| B(I_k - I_{k+1}) \right|^{p-1} \mathrm{sgn}\!\left( B(I_k - I_{k+1}) \right) \delta_{k<N_t} \Big). \qquad (A2)$$

Lastly, for the $D_{\rm KL}$ distance function,

$$\frac{\partial R_{\Delta t}}{\partial I_k} = B \left( \left[ \mathbb{1} + \ln \frac{B(I_k)}{B(I_{k-1})} \right] \delta_{k>1} - \frac{B(I_{k+1})}{B(I_k)} \, \delta_{k<N_t} \right), \qquad (A3)$$

where $\mathbb{1}$ denotes a vector with every element equal to unity. Gradients for variants of the $D_{\rm KL}$ function can be computed similarly. We again emphasize that operations such as absolute values ($|\cdot|$), quotients, powers, and products of image vectors are to be computed elementwise.

For the $R_{\Delta I}$ regularization function, using the $D_2$ distance function gives the following gradient:

$$\frac{\partial R_{\Delta I}}{\partial I_k} = 2 \left( I_k - I_{\rm avg} \right). \qquad (A4)$$

For the $D_p$ distance function, we obtain

$$\frac{\partial R_{\Delta I}}{\partial I_k} = p \left| I_k - I_{\rm avg} \right|^{p-1} \mathrm{sgn}\left( I_k - I_{\rm avg} \right) - \frac{p}{N_t} \sum_{j=1}^{N_t} \left| I_j - I_{\rm avg} \right|^{p-1} \mathrm{sgn}\left( I_j - I_{\rm avg} \right). \qquad (A5)$$

Note that the second term is independent of $k$ and is identically zero for $p = 2$.
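This simplification is easy to confirm numerically: the summed residuals from the mean image vanish identically, so for $p = 2$ the gradient reduces to Eq. A4, which the sketch below verifies against finite differences.

```python
import numpy as np

rng = np.random.default_rng(3)
frames = rng.uniform(0.1, 1.0, size=(6, 8))  # N_t frames of N_pix pixels each
avg = frames.mean(axis=0)

# The second term of Eq. A5 for p = 2 is proportional to the summed
# residuals from the mean image, which vanish identically:
second_term = (2 / len(frames)) * np.sum(frames - avg, axis=0)
assert np.allclose(second_term, 0.0)

def r_delta_i(fr):
    """R_Delta-I with the D2 distance: sum_j ||I_j - I_avg||^2."""
    a = fr.mean(axis=0)
    return np.sum((fr - a) ** 2)

# The remaining term (Eq. A4) matches a finite-difference gradient for k = 0,
# even though I_avg itself depends on I_0.
grad_0 = 2 * (frames[0] - avg)
eps = 1e-6
num_0 = np.zeros(frames.shape[1])
for p in range(frames.shape[1]):
    fp = frames.copy(); fp[0, p] += eps
    fm = frames.copy(); fm[0, p] -= eps
    num_0[p] = (r_delta_i(fp) - r_delta_i(fm)) / (2 * eps)
assert np.allclose(num_0, grad_0, rtol=1e-4, atol=1e-6)
```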

Lastly, for the $D_{\rm KL}$ distance function, we find

$$\frac{\partial R_{\Delta I}}{\partial I_k} = \mathbb{1} - \frac{I_{\rm avg}}{I_k} + \frac{1}{N_t} \sum_{j=1}^{N_t} \ln \left( \frac{I_{\rm avg}}{I_j} \right). \qquad (A6)$$

For $R_{\rm flow}$ regularization, we must evaluate the gradients of $R_{\rm flow}$ with respect to both the images and the flow. The gradient with respect to the images can be written in a general form that depends only on the linear operator $\overleftrightarrow{F}_{\rm flow}$ and its transpose. For instance, using the $D_2$ distance metric gives

$$\frac{\partial R_{\rm flow}}{\partial I_k} = 2 \left( I_k - \overleftrightarrow{F}_{\rm flow} \cdot I_{k-1} \right) \delta_{k>1} - 2 \, \overleftrightarrow{F}_{\rm flow}^{\,\intercal} \cdot \left( I_{k+1} - \overleftrightarrow{F}_{\rm flow} \cdot I_k \right) \delta_{k<N_t}. \qquad (A7)$$

Using the identity $\nabla^\intercal = -\nabla$ (appropriate for central finite difference operators), we obtain $(\mathbf{m} \cdot \nabla)^\intercal = -\nabla \cdot \mathbf{m}$. The other elements of $\overleftrightarrow{F}_{\rm flow}$ are diagonal matrices, so

$$\overleftrightarrow{F}_{\rm flow}^{\,\intercal} = \left( 1 - \mathbf{m} \cdot \nabla - \nabla \cdot \mathbf{m} \right)^\intercal = 1 + \left( \nabla \cdot \mathbf{m} + \mathbf{m} \cdot \nabla \right) - \nabla \cdot \mathbf{m} = 1 + \mathbf{m} \cdot \nabla. \qquad (A8)$$

Substituting this result into Eq. A7, we obtain

$$\frac{\partial R_{\rm flow}}{\partial I_k} = 2 \left( I_k - \overleftrightarrow{F}_{\rm flow} \cdot I_{k-1} \right) \delta_{k>1} - 2 \left( 1 + \mathbf{m} \cdot \nabla \right) \cdot \left( I_{k+1} - \overleftrightarrow{F}_{\rm flow} \cdot I_k \right) \delta_{k<N_t}. \qquad (A9)$$

Likewise, the gradient with respect to the flow vector field is given by

$$\frac{\partial R_{\rm flow}}{\partial \mathbf{m}} = \sum_{j=1}^{N_t-1} \left[ 2 \left( I_{j+1} - \overleftrightarrow{F}_{\rm flow} \cdot I_j \right) \nabla I_j - \nabla \left( 2 \left( I_{j+1} - \overleftrightarrow{F}_{\rm flow} \cdot I_j \right) I_j \right) \right] \approx -2 \sum_{j=1}^{N_t-1} I_j \, \nabla \left( I_{j+1} - \overleftrightarrow{F}_{\rm flow} \cdot I_j \right). \qquad (A10)$$
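The transpose identity underlying Eq. A8 can be checked directly on a discrete operator. The sketch below constructs a 1-D periodic central-difference matrix (a toy stand-in for the 2-D gradient used in the text) and verifies its antisymmetry, and hence that $(\mathbf{m} \cdot \nabla)^\intercal = -\nabla \cdot \mathbf{m}$ for a diagonal flow matrix.

```python
import numpy as np

n = 8
# 1-D periodic central-difference operator: (D x)_i = (x_{i+1} - x_{i-1}) / 2
D = np.zeros((n, n))
for i in range(n):
    D[i, (i + 1) % n] = 0.5
    D[i, (i - 1) % n] = -0.5

# The identity used in the derivation: the central-difference gradient
# operator is antisymmetric, i.e., D^T = -D.
assert np.allclose(D.T, -D)

# Hence for a diagonal flow matrix M = diag(m): (M D)^T = -D M, the matrix
# form of (m . grad)^T = -div(m .).
m = np.random.default_rng(4).uniform(-1, 1, n)
M = np.diag(m)
assert np.allclose((M @ D).T, -(D @ M))
```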

## References

• Akiyama et al. (2017a) Akiyama, K., Kuramochi, K., Ikeda, S., et al. 2017a, ApJ, 838, 1
• Akiyama et al. (2017b) Akiyama, K., Ikeda, S., Pleau, M., et al. 2017b, AJ, 153, 159
• Bardeen (1973) Bardeen, J. M. 1973, Les Astres Occlus, 215
• Bardeen et al. (1972) Bardeen, J. M., Press, W. H., & Teukolsky, S. A. 1972, ApJ, 178, 347
• Baron et al. (2010) Baron, F., Monnier, J. D., & Kloppenborg, B. 2010, in Proc. SPIE, Vol. 7734, Optical and Infrared Interferometry II, 77342I
• Bartel (2009) Bartel, N. 2009, in Astronomical Society of the Pacific Conference Series, Vol. 402, Approaching Micro-Arcsecond Resolution with VSOP-2: Astrophysics and Technologies, ed. Y. Hagiwara, E. Fomalont, M. Tsuboi, & M. Yasuhiro, 243
• Bartel et al. (2000) Bartel, N., Bietenholz, M. F., Rupen, M. P., et al. 2000, Science, 287, 112
• Bietenholz et al. (2003) Bietenholz, M. F., Bartel, N., & Rupen, M. P. 2003, ApJ, 597, 374
• Bouman et al. (2016) Bouman, K. L., Johnson, M. D., Zoran, D., et al. 2016, in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
• Bouman et al. (2017) Bouman, K. L., et al. 2017, in prep.
• Bower et al. (2006) Bower, G. C., Goss, W. M., Falcke, H., Backer, D. C., & Lithwick, Y. 2006, ApJ, 648, L127
• Brinkerink et al. (2015) Brinkerink, C. D., Falcke, H., Law, C. J., et al. 2015, A&A, 576, A41
• Broderick & Loeb (2005) Broderick, A. E., & Loeb, A. 2005, MNRAS, 363, 353
• Broderick & Loeb (2006) —. 2006, MNRAS, 367, 905
• Broderick et al. (2011) Broderick, A. E., Loeb, A., & Reid, M. J. 2011, ApJ, 735, 57
• Buscher (1994) Buscher, D. F. 1994, in IAU Symposium, Vol. 158, Very High Angular Resolution Imaging, ed. J. G. Robertson & W. J. Tango, 91
• Byrd et al. (1995) Byrd, R. H., Lu, P., Nocedal, J., & Zhu, C. 1995, SIAM Journal on Scientific Computing, 16, 1190
• Chael et al. (2016) Chael, A. A., Johnson, M. D., Narayan, R., et al. 2016, ApJ, 829, 11
• Cornwell & Evans (1985) Cornwell, T. J., & Evans, K. F. 1985, A&A, 143, 77
• Courant et al. (1967) Courant, R., Friedrichs, K., & Lewy, H. 1967, IBM Journal of Research and Development, 11, 215
• Dexter et al. (2010) Dexter, J., Agol, E., Fragile, P. C., & McKinney, J. C. 2010, ApJ, 717, 1092
• Doeleman et al. (2009a) Doeleman, S., Agol, E., Backer, D., et al. 2009a, in Astronomy, Vol. 2010, astro2010: The Astronomy and Astrophysics Decadal Survey
• Doeleman et al. (2009b) Doeleman, S. S., Fish, V. L., Broderick, A. E., Loeb, A., & Rogers, A. E. E. 2009b, ApJ, 695, 59
• Doeleman et al. (2008) Doeleman, S. S., Weintroub, J., Rogers, A. E. E., et al. 2008, Nature, 455, 78
• Eckart et al. (2006) Eckart, A., Schödel, R., Meyer, L., et al. 2006, A&A, 455, 1
• Falcke et al. (2000) Falcke, H., Melia, F., & Agol, E. 2000, ApJ, 528, L13
• Fender et al. (2015) Fender, R., Stewart, A., Macquart, J.-P., et al. 2015, ArXiv e-prints, arXiv:1507.00729
• Fish et al. (2016a) Fish, V., Akiyama, K., Bouman, K., et al. 2016a, Galaxies, 4, 54
• Fish et al. (2009) Fish, V. L., Doeleman, S. S., Broderick, A. E., Loeb, A., & Rogers, A. E. E. 2009, ApJ, 706, 1353
• Fish et al. (2011) Fish, V. L., Doeleman, S. S., Beaudoin, C., et al. 2011, ApJ, 727, L36
• Fish et al. (2014) Fish, V. L., Johnson, M. D., Lu, R.-S., et al. 2014, ApJ, 795, 134
• Fish et al. (2016b) Fish, V. L., Johnson, M. D., Doeleman, S. S., et al. 2016b, ApJ, 820, 90
• Fomalont et al. (2001) Fomalont, E. B., Geldzahler, B. J., & Bradshaw, C. F. 2001, ApJ, 558, 283
• Frieden (1972) Frieden, B. R. 1972, Journal of the Optical Society of America (1917-1983), 62, 511
• Gabuzda et al. (2000) Gabuzda, D. C., Kochenov, P. Y., Kollgaard, R. I., & Cawthorne, T. V. 2000, MNRAS, 315, 229
• Ghez et al. (2008) Ghez, A. M., Salim, S., Weinberg, N. N., et al. 2008, ApJ, 689, 1044
• Gillessen et al. (2009) Gillessen, S., Eisenhauer, F., Trippe, S., et al. 2009, ApJ, 692, 1075
• Gold et al. (2017) Gold, R., McKinney, J. C., Johnson, M. D., & Doeleman, S. S. 2017, ApJ, 837, 180
• Gomes et al. (2017) Gomes, N., Garcia, P. J. V., & Thiébaut, É. 2017, MNRAS, 465, 3823
• Högbom (1974) Högbom, J. A. 1974, A&AS, 15, 417
• Honma et al. (2014) Honma, M., Akiyama, K., Uemura, M., & Ikeda, S. 2014, PASJ, 66, 95
• Horn & Schunck (1981) Horn, B. K., & Schunck, B. G. 1981, Artificial intelligence, 17, 185
• Jeffrey et al. (2016) Jeffrey, R. M., Blundell, K. M., Trushkin, S. A., & Mioduszewski, A. J. 2016, MNRAS, 461, 312
• Johannsen & Psaltis (2010) Johannsen, T., & Psaltis, D. 2010, ApJ, 718, 446
• Johnson (2016) Johnson, M. D. 2016, ApJ, 833, 74
• Johnson et al. (2014) Johnson, M. D., Fish, V. L., Doeleman, S. S., et al. 2014, ApJ, 794, 150
• Johnson & Gwinn (2015) Johnson, M. D., & Gwinn, C. R. 2015, ApJ, 805, 180
• Johnson et al. (2015a) Johnson, M. D., Loeb, A., Shiokawa, H., Chael, A. A., & Doeleman, S. S. 2015a, ApJ, 813, 132
• Johnson et al. (2015b) Johnson, M. D., Fish, V. L., Doeleman, S. S., et al. 2015b, Science, 350, 1242
• Jones et al. (2001–) Jones, E., Oliphant, T., Peterson, P., et al. 2001–, SciPy: Open source scientific tools for Python, [Online; accessed 2016-03-26]
• Jorstad et al. (2005) Jorstad, S. G., Marscher, A. P., Lister, M. L., et al. 2005, AJ, 130, 1418
• Kardashev et al. (2014) Kardashev, N. S., Novikov, I. D., Lukash, V. N., et al. 2014, Physics Uspekhi, 57, 1199
• Kellermann & Moran (2001) Kellermann, K. I., & Moran, J. M. 2001, ARA&A, 39, 457
• Kellermann et al. (2004) Kellermann, K. I., Lister, M. L., Homan, D. C., et al. 2004, ApJ, 609, 539
• Kim et al. (2016) Kim, J., Marrone, D. P., Chan, C.-K., et al. 2016, ApJ, 832, 156
• Lister et al. (2009) Lister, M. L., Cohen, M. H., Homan, D. C., et al. 2009, AJ, 138, 1874
• Lister et al. (2016) Lister, M. L., Aller, M. F., Aller, H. D., et al. 2016, AJ, 152, 12
• Liu et al. (2009) Liu, C., et al. 2009, PhD thesis, Massachusetts Institute of Technology
• Lu et al. (2014) Lu, R.-S., Broderick, A. E., Baron, F., et al. 2014, ApJ, 788, 120
• Lu et al. (2011) Lu, R.-S., Krichbaum, T. P., Eckart, A., et al. 2011, A&A, 525, A76
• Lu et al. (2016) Lu, R.-S., Roelofs, F., Fish, V. L., et al. 2016, ApJ, 817, 173
• Luminet (1979) Luminet, J.-P. 1979, A&A, 75, 228
• Marrone et al. (2006) Marrone, D. P., Moran, J. M., Zhao, J.-H., & Rao, R. 2006, Journal of Physics Conference Series, 54, 354
• Marrone et al. (2008) Marrone, D. P., Baganoff, F. K., Morris, M. R., et al. 2008, ApJ, 682, 373
• Medeiros et al. (2017) Medeiros, L., Chan, C.-k., Özel, F., et al. 2017, ApJ, 844, 35
• Medeiros et al. (2016) Medeiros, L., Chan, C.-k., Ozel, F., et al. 2016, ArXiv e-prints, arXiv:1601.06799
• Mertens & Lobanov (2016) Mertens, F., & Lobanov, A. P. 2016, A&A, 587, A52
• Metzger et al. (2015) Metzger, B. D., Williams, P. K. G., & Berger, E. 2015, ApJ, 806, 224
• Mioduszewski et al. (2004) Mioduszewski, A. J., Rupen, M. P., Walker, R. C., Schillemat, K. M., & Taylor, G. B. 2004, in Bulletin of the American Astronomical Society, Vol. 36, AAS/High Energy Astrophysics Division #8, 967
• Narayan & Nityananda (1986) Narayan, R., & Nityananda, R. 1986, ARA&A, 24, 127
• Reid et al. (2014) Reid, M. J., McClintock, J. E., Steiner, J. F., et al. 2014, ApJ, 796, 2
• Roelofs et al. (2017) Roelofs, F., Johnson, M. D., Shiokawa, H., Doeleman, S. S., & Falcke, H. 2017, ApJ, 847, 55
• Ryle (1962) Ryle, M. 1962, Nature, 194, 517
• Sault et al. (1997) Sault, R. J., Oosterloo, T., Dulk, G. A., & Leblanc, Y. 1997, A&A, 324, 1190
• Shiokawa (2013) Shiokawa, H. 2013, PhD thesis, University of Illinois at Urbana-Champaign
• Shiokawa et al. (2017) Shiokawa, H., Gammie, C. F., & Doeleman, S. S. 2017, ApJ, 846, 29
• Smirnov et al. (2012) Smirnov, A. V., Baryshev, A. M., Pilipenko, S. V., et al. 2012, in Proc. SPIE, Vol. 8442, Space Telescopes and Instrumentation 2012: Optical, Infrared, and Millimeter Wave, 84424C
• Takahashi (2004) Takahashi, R. 2004, ApJ, 611, 996
• Thiébaut (2013) Thiébaut, É. 2013, in EAS Publications Series, Vol. 59, EAS Publications Series, ed. D. Mary, C. Theys, & C. Aime, 157–187
• Thompson et al. (2017) Thompson, A. R., Moran, J. M., & Swenson, Jr., G. W. 2017, Interferometry and Synthesis in Radio Astronomy, 3rd Edition, doi:10.1007/978-3-319-44431-4
• Viola & Wells III (1997) Viola, P., & Wells III, W. M. 1997, International journal of computer vision, 24, 137
• Walker et al. (2017) Walker, R. C., Hardee, P. E., Davies, F., Ly, C., & Junor, W. 2017, in prep.
• Walker et al. (2016) Walker, R. C., Hardee, P. E., Davies, F., et al. 2016, Galaxies, 4, 46
• Wells et al. (1996) Wells, W. M., Viola, P., Atsumi, H., Nakajima, S., & Kikinis, R. 1996, Medical image analysis, 1, 35
• Wild et al. (2009) Wild, W., Kardashev, N. S., Likhachev, S. F., et al. 2009, Experimental Astronomy, 23, 221
• Yusef-Zadeh et al. (2009) Yusef-Zadeh, F., Bushouse, H., Wardle, M., et al. 2009, ApJ, 706, 348
• Zamaninasab et al. (2010) Zamaninasab, M., Eckart, A., Witzel, G., et al. 2010, A&A, 510, A3