Phase-error estimation and image reconstruction from digital-holography data using a Bayesian framework

Casey J. Pellizzari and Mark F. Spencer, Air Force Research Laboratory, Directed Energy Directorate, KAFB, NM 87111; Charles A. Bouman, School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907

The estimation of phase errors from digital-holography data is critical for applications such as imaging or wave-front sensing. Conventional techniques require multiple i.i.d. data realizations and perform poorly in the presence of high noise or large phase errors. In this paper we propose a method to estimate isoplanatic phase errors from a single data realization. We develop a model-based iterative reconstruction algorithm which computes the maximum a posteriori estimate of the phase and the speckle-free object reflectance. Using simulated data, we show that the algorithm is robust against high noise and strong phase errors.



1 Introduction

Digital holography (DH) uses coherent-laser illumination and detection to gain significant improvements in sensitivity for remote-sensing applications. Detection involves measuring the modulation of a strong reference field by a potentially-weak signal field. This modulation allows for the detection of signals with energies equivalent to a single photon or less [1].

In practice, DH systems are sensitive to phase errors imparted by atmospheric perturbations or flaws in the optical system. For imaging applications, we must estimate and remove these phase errors to form a focused image [2, 3, 4, 5, 6, 7, 8, 9, 10]. Similarly, for wave-front sensing applications, estimation of the phase errors represents the desired sensor output [11, 12, 13, 14]. In either case, accurately estimating the phase errors is a necessary and critical task.

State-of-the-art techniques for estimating phase errors from DH data involve maximizing an image sharpness metric with respect to the phase errors [3, 4, 5, 7, 8, 9, 10]. Typically, multiple data realizations are obtained for which the image-to-image speckle variation is decorrelated but the phase errors are identical. For applications using pupil-plane detection, the data realizations are corrected using an initial estimate of the phase errors, then inverted using a fast Fourier transform (FFT). The inversion produces estimates of the complex-valued reflection coefficient, g, which are incoherently averaged to reduce speckle variations. The sharpness of this speckle-averaged image is then maximized to obtain the final phase-error estimate.

Image sharpening (IS) algorithms are designed to use multiple data realizations. However, Thurman and Fienup demonstrated that in favorable conditions the algorithms can still be used when only one realization is available [3]. For a high signal-to-noise ratio (SNR), with several hundred detected photons per detector element, their algorithm was able to estimate atmospheric phase functions with low residual root-mean-square (RMS) error.

While IS algorithms have been successfully demonstrated for estimating both isoplanatic and anisoplanatic phase errors, there remains room for improvement. First, IS algorithms use relatively simple inversion techniques to reconstruct the magnitude squared of the reflection coefficient, |g|², rather than the real-valued reflectance, r. The reflectance, given by r = E[|g|²], where E[·] indicates the expected value, is a smoother quantity with higher spatial correlation between elements in the image. We are accustomed to seeing the reflectance in conventional images, and it is of greater interest for many imaging applications. Conversely, reconstructing |g|² leads to images with high-spatial-frequency variations known as speckle. By reconstructing r, we can leverage its higher spatial correlation to better constrain the estimation process and potentially produce more accurate estimates of the phase errors [15].

Another limitation of IS algorithms is that the process of estimating the phase errors is not tightly coupled to image reconstruction. Information about the object can be further leveraged during the phase-estimation process. For example, this information can help rule out contributions to the phase caused by noise. By jointly estimating both the phase errors and the reflectance, we can incorporate additional information into our estimates which helps reduce uncertainty [15].

Lastly, in many practical applications it may not be possible to obtain multiple data realizations. This occurs when there is not enough relative movement between the object and receiver to decorrelate the speckle variations between images, when the phase errors are rapidly changing, or when timing requirements prohibit multiple images from being obtained. Without multiple realizations, IS algorithms can struggle to produce accurate phase-error estimates [3], and an alternate method is needed.

In this paper, we propose an improved method of DH phase recovery based on a framework of joint image reconstruction and phase-error estimation. Our approach builds on the work of [15] by reconstructing the spatially-correlated reflectance, r, rather than the reflection coefficient, g, which allows for better estimates of the phase errors from less data. We focus on reconstructing r from a single data realization under isoplanatic atmospheric conditions for the off-axis pupil plane recording geometry (PPRG) [13]. Our major contributions include:

  1. We jointly compute the maximum a posteriori (MAP) estimates of the image reflectance, r, and the phase errors, φ, from a single data realization. Joint estimation reduces uncertainty in both quantities. Additionally, estimating r rather than the reflection coefficient, g, helps to further constrain the problem and produce images without speckle.

  2. We derive the forward model for a DH system using the off-axis PPRG. The model ensures our estimates are consistent with the physics and statistics of the remote sensing scenario.

  3. We develop a technique to compute the 2D phase errors that occur in DH. Our approach allows for the estimation of phase errors on a lower resolution grid to help reduce the number of unknowns.

  4. We model the phase errors as a random variable using a Gaussian Markov Random Field (GMRF) prior model. This approach allows us to compute the MAP estimate of the phase errors which constrains the estimate to overcome high noise and strong turbulence.

  5. We compare the proposed algorithm to the image sharpening approach in [3] over a range of SNRs and atmospheric turbulence strengths.

Overall, our experimental results using simulated, single-frame, isoplanatic data demonstrate that the proposed model-based iterative reconstruction (MBIR) algorithm can produce substantially higher-quality phase and image reconstructions compared to IS algorithms.

2 Estimation Framework

Figure 1 shows an example scenario for a DH system using the off-axis PPRG. Our goal is to compute the MAP estimates of the reflectance, r, and the phase errors, φ, from the noisy data, y. Note that we are using vectorized notation for these 2D variables. The joint estimates are given by

(r̂, φ̂) = argmin_{(r, φ) ∈ Ω} { −log p(y | r, φ) − log p(r) − log p(φ) },     (1)

where Ω represents the jointly-feasible set and the quantities r and φ are assumed to be independent.

Figure 1: Example DH system using an off-axis pupil plane recording geometry. A coherent source is used to flood illuminate an object which has a reflectance function, r, and corresponding reflection coefficient, g. The return signal is corrupted by atmospheric phase errors, φ, and passes through the entrance pupil of the imaging system, a. Both φ and a are considered to be located in the same plane. The entrance-pupil plane is then imaged onto a focal-plane array where it is mixed with a reference field. For simplicity, only the entrance and exit pupils of the optical system are shown. Finally, the noisy data, y (with noise power σ_w²), is processed to form an image and/or an estimate of φ.

To evaluate the cost function for Eq. (1), we must derive the model for p(y | r, φ). The reader is referred to App. A for a more detailed derivation. In short, we can represent the data using an additive noise model given by

y = A f + w,     (2)

where w is the measurement noise, A is the linear forward-model operator, and f is the complex field in the object plane. For an object with reflectance function r, f is defined as

f = D(e^{jθ}) g,     (3)

where D(e^{jθ}) is a diagonal matrix that applies the object-plane, quadratic phase factor, θ, from the Fresnel propagation integral [16]. Note that |e^{jθ_i}| = 1, and therefore E[|f_i|²] = E[|g_i|²] = r_i for all i. By representing the data as a function of f, rather than g, we avoid having to explicitly define θ in our model.

In Eq. (2), the matrix A accounts for the propagation and measurement geometry. If we ignore the blurring effects caused by sampling the signal with finite-sized pixels, A can be decomposed as

A = D(a) D(e^{jφ}) F.     (4)

Here D(·) denotes an operator that produces a diagonal matrix from its vector argument, a is the entrance-pupil transmission function, and φ is the phase-error function previously defined. For our purposes, a is a binary circular function defining the entrance pupil of the imaging system, as shown in Fig. 1. Finally, we choose the reconstruction parameters such that F is a two-dimensional discrete Fourier transform (DFT) matrix scaled so that F^H F = I.
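For concreteness, applying the decomposition in Eq. (4) costs two elementwise multiplies and one FFT. The sketch below is ours, not the paper's code; it assumes a unitary 2D DFT (NumPy's norm="ortho") so that F^H F = I:

```python
import numpy as np

def apply_forward_model(f, a, phi):
    """Apply y = D(a) D(e^{j phi}) F f for a 2D object-plane field f.

    F is taken as a unitary 2D DFT (norm="ortho") so that F^H F = I;
    a is the pupil transmission and phi the phase-error screen, both 2D arrays.
    """
    F_f = np.fft.fft2(f, norm="ortho")   # propagate to the entrance-pupil plane
    return a * np.exp(1j * phi) * F_f    # diagonal operators act elementwise
```

Because the diagonal factors are unit-modulus inside the pupil and F is unitary, the adjoint A^H is the reverse sequence: conjugate the diagonals and apply an inverse FFT.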

For surfaces which are rough relative to the illumination wavelength, the vector g can be modeled as a conditionally complex Gaussian random variable. Given the underlying reflectance, r, of the scene, the conditional distribution is given by

p(g | r) = CN(g; 0, D(r)),     (5)

where for any random vector z, CN(z; μ, C) is the multivariate complex normal distribution with mean, μ, and covariance matrix, C, such that

CN(z; μ, C) = (1 / (π^N |C|)) exp{ −(z − μ)^H C^{−1} (z − μ) }.     (6)

Here, superscript H indicates the Hermitian transpose. The vector, f, then has an identical distribution given by

p(f | r) = CN(f; 0, D(e^{jθ}) D(r) D(e^{jθ})^H) = CN(f; 0, D(r)).     (7)

The equality in Eq. (7) results from the commutative property of diagonal matrices and from the fact that |e^{jθ_i}| = 1.

It is common for coherent detection systems to use a reference beam which is much stronger than the return signal. For such cases, shot noise, driven by the power of the reference beam, is the dominant source of measurement noise and can be modeled as additive, zero-mean, complex Gaussian white noise [17, 18]. The distribution of w is therefore given by

p(w) = CN(w; 0, σ_w² I),     (8)

where σ_w² is the noise variance.

From Eqs. (2)-(8), the likelihood function of the data, given the reflectance and the phase errors, is distributed according to [19]

p(y | r, φ) = CN(y; 0, A D(r) A^H + σ_w² I).     (9)

Therefore, the MAP cost function is given by

c(r, φ) = log |A D(r) A^H + σ_w² I| + y^H (A D(r) A^H + σ_w² I)^{−1} y − log p(r) − log p(φ).     (10)

Unfortunately, the determinant and inverse of the data covariance make direct optimization of Eq. (10) an extremely computationally expensive task.

Equation (10) is similar to the MAP cost function in [15], except that the forward model has changed and we include the distribution of the phase errors in the cost function. Despite these differences, we use a similar optimization approach which leverages the expectation maximization (EM) algorithm and allows us to replace Eq. (10) with a more-tractable surrogate function.

In order to use the EM algorithm, we need to first introduce the concept of surrogate functions. For any function f(x), we define a surrogate function, Q(x; x′), to be an upper-bounding function such that f(x) ≤ Q(x; x′) + c, where x′ is the current value of x which determines the functional form of Q, and c is a constant that ensures the two functions are equal at x = x′. Surrogate functions have the property that minimization of Q implies minimization of f. That is,

Q(x; x′) < Q(x′; x′) ⟹ f(x) < f(x′).     (11)

Surrogate functions are useful because in some cases it is possible to find a simple surrogate function, Q, that we can iteratively minimize in place of a more difficult to compute function, f. In fact, the EM algorithm provides a formal framework for constructing such a surrogate function. More specifically, for our problem the surrogate function of the EM algorithm takes the following form:

Q(r, φ; r′, φ′) = −E[ log p(y, f | r, φ) | y, r′, φ′ ] − log p(r) − log p(φ),     (12)

where r′ and φ′ are the current estimates of r and φ, respectively. Here we choose f to be the missing data. Evaluation of the expectation in Eq. (12) constitutes the E-step of the EM algorithm. Fig. 2 shows the alternating minimization approach used for implementing the M-step of the EM algorithm. The proposed algorithm shares the same convergence and stability properties as the algorithm in [15]. More specifically, the EM algorithm converges in a stable manner to a local minimum.

Since the cost function is nonconvex, the result of the EM iterations will depend on the initial conditions and may not be the global minimum. While it is not generally possible to compute a guaranteed global minimum of the cost function, in Section 4.3 we present a multi-start algorithm that we empirically found to robustly compute good local minima.

Figure 2: EM algorithm for joint optimization of the MAP cost surrogate function

3 Prior Models

In this section, we present the distributions used to model both the reflectance, r, and the phase errors, φ. In both cases, we choose to use Markov random fields (MRFs) with the form

p(x) = (1/z) exp{ −Σ_{{i,j} ∈ P} b_{i,j} ρ(x_i − x_j) },     (13)

where z is the partition function, b_{i,j} is the weight between neighboring pixel pairs (r_i and r_j, or φ_i and φ_j), Δ = x_i − x_j is the difference between those pixel values, P is the set of all pair-wise cliques falling within the same neighborhood, and ρ(Δ) is the potential function [20]. We obtain different models for the reflectance and phase errors by choosing different potential functions.

3.1 Reflectance

For the reflectance function, r, we used a Q-Generalized Gaussian Markov Random Field (QGGMRF) potential function with the form

ρ(Δ) = (|Δ|^p / (p σ_r^p)) · ( |Δ/(T σ_r)|^{q−p} / (1 + |Δ/(T σ_r)|^{q−p}) ),     (14)

where T is a unitless threshold value which controls the transition of the potential function from having the exponent q to having the exponent p [21]. The variable, σ_r, controls the variation in r̂. The parameters of the QGGMRF potential function affect its shape, and therefore the influence neighboring pixels have on one another.

While more sophisticated priors exist for coherent imaging [22], the use of a QGGMRF prior is relatively simple and computationally inexpensive, yet effective [15]. The low computational burden is important for DH applications where we desire near-real-time, phase-error estimation, such as with wave-front sensing.

3.2 Phase Errors

In practice, the phase-error function may not change much from sample to sample, or we may wish to correct the phase at a defined real-time frame rate using a deformable mirror with fewer actuators than sensor measurements. Therefore, to reduce the number of unknowns, we allow the phase-error function, φ, to be modeled on a grid that has lower resolution than the measured data. We denote the subsampled function as φ̃.

Using φ̃ allows us to estimate only a single phase value for multiple data samples. This technique reduces the number of unknowns in the phase estimate from M to M/m², where M is the number of data samples and m is the factor of subsampling used in both dimensions. To scale φ̃ to the resolution of the data for use in the forward model, we use a nearest-neighbor-interpolation scheme given by

φ = P φ̃.     (15)

Here, P is an interpolation matrix with elements in the set {0, 1}.
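The interpolation φ = P φ̃ never needs to be formed as a matrix; for nearest-neighbor weights in {0, 1} it reduces to block replication. A minimal sketch (function name ours):

```python
import numpy as np

def upsample_phase(phi_low, m):
    """Nearest-neighbor interpolation phi = P phi_low: each low-resolution
    phase sample is replicated over an m-by-m block of data samples."""
    return np.repeat(np.repeat(phi_low, m, axis=0), m, axis=1)
```

The adjoint P^T, needed when accumulating gradient terms onto the coarse grid, is the corresponding m-by-m block sum.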

To model the distribution of unwrapped phase-error values in φ̃, we use a Gaussian potential function given by

ρ(Δ) = Δ² / (2 σ_φ²),     (16)

where σ_φ controls the variation in φ̃. The Gaussian potential function enforces a strong influence between neighboring samples, which generates a more-smoothly varying output when compared to the QGGMRF. Since the unwrapped phase function will not have edges or other sharp discontinuities, the GMRF is more appropriate than the QGGMRF in this case. An added benefit is that it is less computationally expensive than the QGGMRF model, which allows for fast evaluation.

4 Algorithm

In this section, we present the details for how we execute the proposed EM algorithm shown in Fig. 2. The discussion describes how we evaluate the surrogate function in the E-step and how we optimize it in the M-step.

4.1 E-step - Surrogate Function Evaluation

Using the models given by Eqs. (7)-(9) and (13)-(16), along with the conditional posterior distribution of f given y, r′, and φ′, we can evaluate the surrogate function of Eq. (12). Its final form is given by

Q(r, φ; r′, φ′) = (1/σ_w²) ‖ y − A_φ μ ‖² + Σ_i [ log r_i + (|μ_i|² + C_ii)/r_i ] − log p(r) − log p(φ) + κ,     (17)

where κ is a constant with respect to r and φ. Here, A_φ indicates that the matrix is dependent on φ, μ_i is the i-th element of the posterior mean given by

μ = (1/σ_w²) C A_{φ′}^H y,     (18)

and C_ii is the i-th diagonal element of the posterior covariance matrix given by

C = ( (1/σ_w²) A_{φ′}^H A_{φ′} + D(r′)^{−1} )^{−1}.     (19)

We simplify the evaluation of Eq. (17) by approximating the posterior covariance as

C ≈ ( (1/σ_w²) I + D(r′)^{−1} )^{−1},     (20)

which requires that

A^H A ≈ I.     (21)
For a square entrance-pupil function filling the detector's field of view, A^H A = I and the approximations of Eqs. (20) and (21) become exact, as both D(e^{jφ}) and F are unitary matrices. When a represents a circular aperture, the relationship of Eq. (21) is approximate. However, since it dramatically simplifies computations, we will use this approximation and the resulting approximation of Eq. (20) in all our simulations when computing the surrogate function for the EM update. In practice, we have found this to work well.
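Under the approximation A^H A ≈ I, the E-step reduces to one inverse FFT and elementwise arithmetic. A sketch of the approximate posterior statistics of Eqs. (18)-(20), with function name and array conventions of our choosing:

```python
import numpy as np

def posterior_stats(y, a, phi, r_prev, sigma_w2):
    """E-step statistics under the approximation A^H A ≈ I.

    The posterior covariance is then diagonal, C_ii = (1/sigma_w2 + 1/r_i)^(-1),
    and the posterior mean is mu = C A^H y / sigma_w2, where
    A^H y = F^H D(conj(a e^{j phi})) y is a single inverse unitary FFT.
    """
    C_diag = 1.0 / (1.0 / sigma_w2 + 1.0 / r_prev)     # diagonal covariance
    AH_y = np.fft.ifft2(np.conj(a * np.exp(1j * phi)) * y, norm="ortho")
    mu = C_diag * AH_y / sigma_w2                      # posterior mean
    return mu, C_diag
```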

4.2 M-step - Optimization of the Surrogate Function

The goal of the M-step is to minimize the surrogate function according to

(r̂, φ̂) = argmin_{(r, φ) ∈ Ω} Q(r, φ; r′, φ′).     (22)

The alternating optimization shown in Fig. 2 is used to minimize Q with respect to r and φ during each iteration of the EM algorithm.

We use Iterative Coordinate Descent (ICD) to update the reflectance [20]. In particular, ICD works by sequentially minimizing the entire cost function with respect to a single pixel, r_i. By considering just the terms in Eq. (55) which depend on a pixel r_i, the cost function is given by

c(r_i) = log r_i + (|μ_i|² + C_ii)/r_i + Σ_{j ∈ ∂i} b_{i,j} ρ(r_i − r_j),     (23)

where the sum over j ∈ ∂i indicates a sum over all neighbors of the pixel r_i. We carry out the minimization of Eq. (23) with a 1D line search over r_i.
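A single ICD pixel update can be sketched as a grid-based 1D line search. For brevity this illustration substitutes a Gaussian prior term for the QGGMRF potential, so it shows the mechanics rather than the paper's exact update:

```python
import numpy as np

def icd_pixel_update(q_i, neighbors, b, sigma_r, r_grid):
    """1D line search for the ICD update of one reflectance pixel.

    q_i = |mu_i|^2 + C_ii from the E-step; `neighbors` holds current values of
    the neighboring pixels. A Gaussian prior term stands in for the QGGMRF
    potential to keep the sketch short.
    """
    cost = np.log(r_grid) + q_i / r_grid          # data-fit terms of Eq. (23)
    for r_j in neighbors:
        cost = cost + b * (r_grid - r_j) ** 2 / (2.0 * sigma_r ** 2)
    return r_grid[np.argmin(cost)]                # grid point of lowest cost
```

With no prior term, the minimizer of log r + q/r is r = q, which the line search recovers to within the grid spacing.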

To minimize Eq. (17) with respect to the phase errors, we consider just the terms which depend on φ. The phase-error cost function becomes

c(φ) = (1/σ_w²) ‖ y − A_φ μ ‖² + Σ_{{i,j} ∈ P} b_{i,j} (φ̃_i − φ̃_j)² / (2 σ_φ²).     (24)

We use ICD to sequentially minimize the cost with respect to each element of the subsampled phase-error function, φ̃. For element φ̃_i, corresponding to a group of data samples with indices in the set S_i, the cost function becomes

c(φ̃_i) = −(2/σ_w²) Re{ e^{jφ̃_i} Σ_{k ∈ S_i} a_k y_k* [F μ]_k } + Σ_{j ∈ ∂i} b_{i,j} (φ̃_i − φ̃_j)² / (2 σ_φ²),     (25)

where the prior-term minimizer is

φ̄_i = ( Σ_{j ∈ ∂i} b_{i,j} φ̃_j ) / ( Σ_{j ∈ ∂i} b_{i,j} ),     (26)

and ∂i is an index set over neighboring phase samples on the lower-resolution grid. We minimize Eq. (25) using a 1D line search over [φ̄_i − π, φ̄_i + π], where φ̄_i minimizes the prior term. By minimizing Eq. (25), we obtain an estimate of the unwrapped phase errors. Finally, we obtain the full-resolution estimate of the unwrapped phase errors according to Eq. (15).

4.3 Initialization and Stopping Criteria

Figure 3 summarizes the steps of the EM algorithm. To determine when to stop the algorithm, we can use either a set number of iterations, K, or a metric such as

ε^(k) = ‖ r̂^(k) − r̂^(k−1) ‖ / ‖ r̂^(k) ‖,     (27)

where k is the iteration index and we stop the algorithm when ε^(k) falls below a threshold value of ε_T.


EM {
  while k < K or ε^(k) > ε_T do
    compute μ and C          // E-step, Eqs. (18)-(20)
    for all i ∈ I do         // reflectance update
      r_i ← argmin_{r_i} c(r_i)
    for all i ∈ Ĩ do         // phase update
      φ̃_i ← argmin_{φ̃_i} c(φ̃_i)
}

Figure 3: EM algorithm for the MAP estimates of r and φ. Here, I is the set of all pixels and Ĩ is the set of all phase-error elements.

The EM algorithm in Fig. 3 is initialized according to

r̂ ← α |A^H y|² / Var(y),     (28)

where |·|² indicates the element-wise magnitude square of a vector, α is a unitless parameter introduced to tune the amount of regularization in r̂, and Var(·) computes the sample variance of a vector's elements [23].

We use a heuristic similar to that used in [15] to iteratively initialize the phase-error estimate. Fig. 4 details the steps of this iterative process. The initial estimate of the phase-error vector is simply φ̂ = 0. Next, for a set number of outer-loop iterations, K_o, we allow the EM algorithm to run for K_i iterations. At the beginning of each outer-loop iteration, we reinitialize r̂ according to Eq. (28).

After the outer loop runs K_o times, we again reinitialize r̂ according to Eq. (28) and run the EM algorithm until it reaches the stopping threshold ε_T. A Gaussian prior model for r works best in the outer loop for the initialization of φ̂; specifically, the neighbor weights b_{i,j} are given by a Gaussian kernel with a fixed standard deviation. Once the initialization process is complete, we can use different prior-model parameters for the actual reconstruction.


MBIR Algorithm {
  φ̂ ← 0
  for k = 1 to K_o do
    reinitialize r̂ via Eq. (28)
    run EM for K_i iterations
  reinitialize r̂ via Eq. (28)
  run EM until ε^(k) < ε_T
}

Figure 4: Algorithm which initializes and runs the EM algorithm. An iterative process is used to initialize the phase-error vector φ̂.

5 Methods

To compare the proposed algorithm to an IS algorithm from [3], we generated simulated data using an approach similar to that of [3]. Figure 5(a) shows the reflectance function, r, that we multiplied by an incident power to get the object intensity. Figure 5(b) shows the magnitude squared of the corresponding reflection coefficient, g, generated according to the conditional distribution of Eq. (5).

In accordance with the geometry shown in Fig. 1, we used a fast Fourier transform (FFT) to propagate the field, f, from the object plane to the entrance-pupil plane of the DH system. The field was zero-padded prior to propagation. Next, we applied a phase error, φ, and the entrance-pupil transmission function, a, shown in Figure 5(c). Figure 5(d) is an example of the intensity of the field passing through the entrance pupil, which has a diameter, D, equal to the grid length.

Figure 5: (a) reflectance function used for generating simulated data, (b) intensity of the reflected field in the object plane, (c) entrance-pupil transmission function, a, and (d) intensity of the field passing through the entrance pupil.

To generate the atmospheric phase errors, we used an FFT-based technique described in [24]. Using a Kolmogorov model for the refractive-index power-spectral density (PSD), we generated random coefficients for the spatial-frequency components of the atmospheric phase, then we used an inverse FFT to transform to the spatial domain. To set the turbulence strength, we parameterized the Kolmogorov PSD by the coherence length, r₀, also known as the Fried parameter [25, 24]. The value of r₀ specifies the degree to which the phase of a plane wave passing through the turbulent medium is correlated. Two points in the phase function which are separated by a distance greater than r₀ will typically be uncorrelated. In this paper, we report turbulence strength using the ratio D/r₀, which is related to the degrees of freedom in the atmospheric phase. We simulated data for D/r₀ values of 10, 20, 30, and 40. Figure 6 shows examples of the wrapped phase errors for each case.

Figure 6: Atmospheric phase errors for D/r₀ values of (a) 10, (b) 20, (c) 30, and (d) 40. Values shown are wrapped to [−π, π).
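The FFT-based screen generation described above can be sketched as follows. The 0.023 r₀^(−5/3) f^(−11/3) phase PSD is the standard Kolmogorov form, but the normalization conventions here are assumptions of this sketch rather than details taken from [24]:

```python
import numpy as np

def kolmogorov_phase_screen(N, delta, r0, rng):
    """FFT-based Kolmogorov phase screen (assumed scaling conventions).

    Draw complex Gaussian coefficients shaped by the Kolmogorov phase PSD
    0.023 r0^(-5/3) f^(-11/3), then inverse-FFT to the spatial domain.
    delta is the grid spacing [m]; the undefined piston (f = 0) term is zeroed.
    """
    df = 1.0 / (N * delta)                          # frequency-grid spacing [1/m]
    fx = np.fft.fftfreq(N, d=delta)
    f = np.hypot(*np.meshgrid(fx, fx))              # radial spatial frequency
    with np.errstate(divide="ignore"):
        psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)
    psd[0, 0] = 0.0                                 # remove the piston term
    cn = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    cn *= np.sqrt(psd) * df                         # shape coefficients by the PSD
    return np.real(np.fft.ifft2(cn)) * N * N        # phase screen in radians
```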

After we added phase errors to the propagated field and applied the aperture function shown in Fig. 5, we mixed the signal with a modulated reference beam and detected the resultant power. Following [3], the reference-beam power was set at approximately 80% of the well depth per detector element. We modeled Gaussian read noise and digitized the detector output. After detection, we demodulated the signal to remove the spatial-frequency offset from the reference beam, low-pass filtered to isolate the signal of interest, and decimated to obtain the data array. (It is typical for this process to be carried out by taking an FFT, windowing a small region around the desired signal spectrum, and taking an inverse FFT.) The resulting data was represented by Eq. (2) after vectorization.

We generated data over a range of SNRs, which we define as

SNR = Var(A f) / Var(w),     (29)

where Var(·) is the sample-variance operator used in Eq. (28). For optically-coherent systems, this SNR is well approximated by the average number of detected signal photons per pixel [17, 26]. At each turbulence strength, and at each SNR, we generated 18 i.i.d. realizations of the data. We then processed each realization independently and computed the average performance over the 18 independent cases.

To measure the distortion between the reconstructed images, r̂, and the simulation input, r, we used the normalized root mean square error (NRMSE), given by

NRMSE = ‖ α̂ r̂ − r ‖ / ‖ r ‖,     (30)

where

α̂ = argmin_α ‖ α r̂ − r ‖² = (r̂ᵀ r) / (r̂ᵀ r̂)     (31)

is the least-squares fit for any multiplicative offset between r̂ and r.
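The NRMSE with its least-squares amplitude fit is a few lines of NumPy (function name ours):

```python
import numpy as np

def nrmse(r_hat, r):
    """NRMSE between a reconstruction and truth, with a least-squares fit
    alpha = (r_hat . r) / (r_hat . r_hat) removing any multiplicative offset."""
    alpha = np.vdot(r_hat, r).real / np.vdot(r_hat, r_hat).real
    return np.linalg.norm(alpha * r_hat - r) / np.linalg.norm(r)
```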

To measure distortion between the reconstructed phase errors, φ̂, and the actual phase errors, φ, we calculated the Strehl ratio according to

S = max | F{ a_s e^{j(φ − φ̂)} } |² / max | F{ a_s } |²,     (32)

where F is the FFT operator and max(·) indicates that we take the maximum value of the argument. The function, a_s, is a binary function that represents the aperture in the observation plane. It takes on the value 1 inside the white dotted circle shown in Fig. 5 and 0 outside.
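A sketch of this Strehl-ratio computation, assuming the residual phase φ − φ̂ and the binary aperture are given on the same grid (function name ours):

```python
import numpy as np

def strehl_ratio(a, phi_residual):
    """Strehl ratio: peak far-field intensity with the residual phase error,
    normalized by the diffraction-limited peak for the same aperture a."""
    peak = np.abs(np.fft.fft2(a * np.exp(1j * phi_residual))).max() ** 2
    peak_dl = np.abs(np.fft.fft2(a)).max() ** 2
    return peak / peak_dl
```

For a nonnegative aperture the diffraction-limited peak is an upper bound, so the ratio lies in (0, 1].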

Digital phase correction of DH data is analogous to using a piston-only, segmented deformable mirror (DM) in adaptive optics. Therefore, using the DM fitting error from [27] and Maréchal's approximation, we can compute the theoretical limit of the Strehl ratio as

S_max = exp{ −κ_f (d / r₀)^{5/3} },     (33)

where κ_f is the piston-only fitting-error coefficient from [27] and d is the spacing between correction points in φ̂.
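This limit is a one-liner; note that the piston-only fitting coefficient used below (1.26) is an assumed value from the adaptive-optics literature, not a number taken from this paper:

```python
import numpy as np

def strehl_limit(d, r0, kappa_f=1.26):
    """Theoretical Strehl limit from piston-only DM fitting error and the
    Marechal approximation. kappa_f = 1.26 is an assumed piston-only fitting
    coefficient; d is the correction-point spacing, r0 the Fried parameter."""
    return np.exp(-kappa_f * (d / r0) ** (5.0 / 3.0))
```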

Using NRMSE and Strehl ratio, we compared the performance of the proposed algorithm to the IS approach presented in [3], using a point-by-point estimate of φ and a power sharpness metric. The algorithm computes the phase-error estimate according to

φ̂ = argmax_φ Σ_i | [ F^H D(e^{jφ})^H y ]_i |^{2β},     (34)

where (·)^β indicates the application of an exponent to each element. Following the process described in [3], we used 20 iterations of conjugate gradient to optimize Eq. (34), and the algorithm was initialized with a low-order Zernike polynomial found through an iterative process.

For the proposed MBIR algorithm, we allowed the outer initialization loop to run for K_o iterations, with K_i EM iterations each time. We kept the regularization parameter α constant for all reconstructions. Once the iterative initialization process was complete, we set a stopping threshold ε_T and let the EM algorithm run to completion. We used fixed values of the QGGMRF prior parameters (p, q, T, and σ_r) for image reconstruction, along with fixed GMRF parameters for the phase-error prior. The subsampling factor m sets the total number of unknowns in the phase estimate; while varying m with turbulence strength may produce more optimal results, we kept it fixed for simplicity. We found these parameters to work well over a wide range of conditions.

6 Experimental Results

Figure 7 shows example reconstructions for a subset of the results. Each block of images shows the reconstructions corresponding to the median Strehl ratio of the 18 i.i.d. data sets. Note that we only show five of the 20 SNR levels for each turbulence strength. The top row of each image block shows the original blurry images, the middle shows the IS reconstructions, and the bottom shows the MBIR reconstructions. To compress the large dynamic range that occurs in coherent imagery, we present the results using a log-based dB scale given by r_dB = 10 log₁₀(r_n), where r_n is the normalized reflectance function. The residual phase errors, wrapped to [−π, π), are shown below each image block. To aid in viewing the higher-order residual phase errors in Fig. 7, we removed the constant and linear phase components: piston, tip, and tilt. The values were estimated according to

(ĉ, t̂_x, t̂_y) = argmin_{c, t_x, t_y} ‖ W( φ_ε − c − t_x x − t_y y ) ‖²,     (35)

where W(·) is an operator that wraps its argument to [−π, π), φ_ε is the residual phase error, c is a constant, and t_x and t_y are the linear components in the x and y dimensions, respectively.

The examples in Figure 7 show that the IS algorithm is able to correct most of the phase errors at the weakest turbulence strength and some of the errors at the next level. At the two strongest turbulence levels, the IS images are blurred beyond recognition. In contrast, the proposed MBIR algorithm produced focused images for all but the lowest-SNR reconstructions. In addition, we see that the MBIR reconstructions have significantly less speckle variation. The MBIR results also show that in many cases the residual phase errors contain large-scale patches separated by wrapping cuts. However, as Fig. 7 shows, the patches are approximately modulo-2π equivalent and still produce focused images.

Figure 8 shows the resulting Strehl ratios and NRMSE for each algorithm as a function of SNR and D/r₀. The curves in Fig. 8 show the average results for all 18 i.i.d. realizations, along with error bars which span one standard deviation. We also plot the Strehl-ratio limits given by Eq. (33), using the respective phase-sample spacing d for the IS and MBIR algorithms.

The results show that the IS algorithm works in a limited range of conditions for single-shot data. At the weakest turbulence strength, IS approaches its peak Strehl ratio only when the SNR is sufficiently high. For lower SNRs or stronger turbulence, the IS algorithm's performance tapers off quickly. Conversely, the proposed MBIR algorithm is able to obtain Strehl ratios much higher than IS, even for low SNRs and strong turbulence. MBIR reaches the IS Strehl limit of 0.8 with substantially less SNR.

Figure 7: Example images and residual phase errors for D/r₀ of 10 (a), 20 (b), 30 (c), and 40 (d). The top row of each image block is the original blurry image. These examples represent the reconstructions with the median Strehl ratio at the chosen SNRs. Note that we only show five of the 20 SNR values. The images are shown using a log-based dB scale, and the residual phase errors are wrapped to [−π, π).
Figure 8: Strehl ratio (top row) and reconstruction error (bottom row) vs SNR for D/r₀ of 10 (a-b), 20 (c-d), 30 (e-f), and 40 (g-h). The dashed curves show the IS results and the solid curves show the MBIR results. The horizontal lines in the top row show the corresponding Strehl-ratio limit given by Eq. (33).

7 Conclusion

In this paper, we presented an inverse, model-based approach to estimating phase errors and reconstructing images from digital-holography (DH) data. We designed the algorithm for cases where only a single data realization is available and the phase errors are isoplanatic. Rather than estimating the spatially-uncorrelated reflection coefficient, g, we estimate the highly-correlated reflectance function, r. This allows us to better constrain the underdetermined system and produce speckle-reduced images and accurate phase-error estimates with less data. Using first principles, we derived a discrete forward model for use in the MAP cost function. To obtain a more-tractable surrogate function, we used the EM algorithm. Additionally, we introduced a GMRF prior model for the phase-error function modeled on a lower-resolution grid, and we presented optimization schemes for the joint estimates.

We compared the proposed algorithm to a leading image-sharpening approach over a range of conditions. The results showed that the MBIR algorithm produced phase estimates with higher Strehl ratios and lower image distortion than the IS technique. For cases of high noise and large phase errors, IS was not effective, while MBIR was still able to produce accurate estimates. In conclusion, we showed that the proposed algorithm is an effective alternative to IS algorithms for estimating phase errors from single-shot DH data and reconstructing images.

Appendix A: Forward Model Derivation

In this section we derive the linear forward model expressed by Eq. (2). Following the scenario depicted in Fig. 1, we derive a continuous model for the data, then a discrete representation.

Assume that we use a monochromatic plane wave to illuminate an object which has a complex-valued reflection coefficient, g̃(x, y). We denote continuous functions using tildes. The return field passing through the entrance pupil can be represented using the Fresnel diffraction integral [16], given by

Ẽ(u, v) = κ ã(u, v) e^{jφ̃_a(u, v)} e^{j k (u² + v²)/(2z)} ∬ g̃(x, y) e^{j k (x² + y²)/(2z)} e^{−j (2π/(λz)) (xu + yv)} dx dy,     (36)

where κ is a complex constant, ã(u, v) represents the circular transmission function for the system's entrance pupil, φ̃_a(u, v) is the atmospheric phase-error function assumed to be concentrated in a single layer in the entrance-pupil plane (i.e., an isoplanatic phase error), k = 2π/λ is the wave number, and λ is the wavelength.

To simplify notation, we define the function

f̃(x, y) = g̃(x, y) e^{j k (x² + y²)/(2z)}     (37)

and its corresponding continuous-space Fourier transform (CSFT), given by

F̃(u, v) = ∬ f̃(x, y) e^{−j 2π (xu + yv)} dx dy.     (38)

We can also combine the pupil-plane quadratic phase function of Eq. (36) with the atmospheric phase errors to get a single unknown phase term, given by

φ̃(u, v) = φ̃_a(u, v) + k (u² + v²)/(2z).     (39)
Given the definitions in Eqs. (37) through (39), the return signal of Eq. (36) can be written as


Optical heterodyne detection is performed by mixing the returned signal with a local reference beam given by

$$\tilde{u}_r(x,y) = u_r\, e^{j2\pi\left(\nu_x x + \nu_y y\right)}, \tag{41}$$

where $u_r$ is the amplitude and $\nu_x$, $\nu_y$ are factors which control the spatial-frequency modulation. We combine the signal and reference onto the detector, which measures the intensity given by

$$\tilde{i}(x,y) = \left|\tilde{u} + \tilde{u}_r\right|^2 + \tilde{w} = |\tilde{u}|^2 + |\tilde{u}_r|^2 + \tilde{u}^*\tilde{u}_r + \tilde{u}\,\tilde{u}_r^* + \tilde{w}, \tag{42}$$

where $*$ indicates the complex conjugate. We assume the system is shot-noise limited, with noise $\tilde{w}$ driven by the power of the reference beam [18]. Thus, we model $\tilde{w}$ as additive, zero-mean, white Gaussian noise [17]. After detection, we demodulate the detector output to remove the spatial-frequency offset of the reference, low-pass filter, and decimate the data. This isolates the last signal term of Eq. (42), which gives us our signal of interest²,

$$\tilde{y}(x,y) = u_r\, \kappa\, \tilde{a}(x,y)\, e^{j\tilde{\phi}(x,y)} \iint \tilde{f}(\xi,\eta)\, e^{-j\frac{2\pi}{\lambda z}\left(x\xi + y\eta\right)}\, d\xi\, d\eta + \tilde{w}(x,y). \tag{43}$$

²In practice, the output of the detection process is only proportional to the right-hand side of Eq. (43) by some unknown constant.
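To make the demodulation step concrete, the following NumPy sketch simulates off-axis heterodyne detection and recovers the signal term by demodulating and low-pass filtering. All quantities here (grid size, reference amplitude, frequency offsets, the band-limited test field) are illustrative choices, not values from the paper, and the shot noise is omitted for clarity:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
X, Y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")

# Hypothetical band-limited signal field (stand-in for the weak return signal).
spec = np.zeros((N, N), dtype=complex)
spec[:8, :8] = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
u_sig = np.fft.ifft2(spec)

# Strong off-axis reference beam with spatial-frequency offsets (nu_x, nu_y).
A_r, nu_x, nu_y = 100.0, 0.25, 0.25
u_ref = A_r * np.exp(2j * np.pi * (nu_x * X + nu_y * Y))

# The detector measures intensity (noise omitted in this sketch).
intensity = np.abs(u_sig + u_ref) ** 2

# Demodulate to shift the u_sig * conj(u_ref) cross term to baseband,
# then low-pass filter by keeping only the baseband DFT coefficients.
demod = intensity * np.exp(2j * np.pi * (nu_x * X + nu_y * Y)) / A_r
spec_demod = np.fft.fft2(demod)
baseband = np.zeros_like(spec_demod)
baseband[:8, :8] = spec_demod[:8, :8]
u_rec = np.fft.ifft2(baseband)

err = np.linalg.norm(u_rec - u_sig) / np.linalg.norm(u_sig)
print(f"relative recovery error: {err:.2e}")
```

Because the reference offset pushes the two bias terms and the conjugate cross term outside the retained baseband, the signal term is recovered essentially exactly in this noise-free sketch.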
Equation (43) is continuous in both the $(x,y)$ and $(\xi,\eta)$ coordinate systems. However, we wish to represent the signal as discrete measurements, $y[m,n]$, generated from a discrete-space signal, $g[p,q]$. We start by representing the discrete field in the object plane as

$$g[p,q] = \tilde{f}\!\left(p\Delta_\xi, q\Delta_\eta\right), \tag{44}$$

where $\Delta_\xi$ and $\Delta_\eta$ are the spatial-sampling periods in the object plane. Furthermore, if we sample the signal with a focal-plane array and ignore the blurring effects of finite-sized pixels, then the discrete measurements can be represented as

$$y[m,n] = \tilde{y}\!\left(m\Delta_x, n\Delta_y\right), \tag{45}$$

where $\Delta_x$ and $\Delta_y$ are the spatial-sampling periods in the measurement plane. Combining Eqs. (43)–(45), we get

$$y[m,n] = a[m,n]\, e^{j\phi[m,n]}\, \frac{1}{\sqrt{MN}} \sum_{p=0}^{M-1}\sum_{q=0}^{N-1} g[p,q]\, e^{-j\frac{2\pi}{\lambda z}\left(m\Delta_x\, p\Delta_\xi + n\Delta_y\, q\Delta_\eta\right)} + w[m,n], \tag{46}$$
where $a[m,n]$, $\phi[m,n]$, and $w[m,n]$ are discrete versions of $u_r\kappa\,\tilde{a}(x,y)$, $\tilde{\phi}(x,y)$, and $\tilde{w}(x,y)$, respectively, and

$$F_{(m,n),(p,q)} = \frac{1}{\sqrt{MN}}\, e^{-j\frac{2\pi}{\lambda z}\left(m\Delta_x\, p\Delta_\xi + n\Delta_y\, q\Delta_\eta\right)}. \tag{47}$$
Equation (46) can be represented more compactly using matrix-vector notation as

$$y = Ag + w, \tag{48}$$

where $w$ is the vectorized measurement noise and $g$ is the vectorized field in the object plane. The matrix $A$ can be decomposed as

$$A = D(a)\, D\!\left(e^{j\phi}\right) F. \tag{49}$$

Here $D(\cdot)$ denotes an operator that produces a diagonal matrix from its vector argument, $a$ is the vectorized entrance-pupil transmission function, and $\phi$ is the vectorized phase-error function. The matrix $F$ is defined in Eq. (47).

Since the sum in Eq. (46) represents the forward propagation of the field $g$, we have scaled $F$ by $1/\sqrt{MN}$ so that $F^H F = I$. This ensures we conserve energy when propagating between the object and entrance-pupil planes. Furthermore, we choose our reconstruction parameters such that $\Delta_x\Delta_\xi = \lambda z/M$ and $\Delta_y\Delta_\eta = \lambda z/N$, where $M$ and $N$ are the grid sizes in the $x$ and $y$ dimensions, respectively. Thus, $F$ is exactly a Discrete Fourier Transform (DFT) kernel and can be efficiently implemented using a Fast Fourier Transform (FFT).
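The decomposition $A = D(a)\,D(e^{j\phi})F$ with a unitarily scaled DFT can be sketched in a few lines of NumPy. The grid size, circular pupil, and random phase screen below are illustrative placeholders, and `apply_A` is a hypothetical helper name:

```python
import numpy as np

rng = np.random.default_rng(1)
M = N = 32

# Illustrative placeholders: object-plane field, circular pupil, phase screen.
g = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
xx, yy = np.meshgrid(np.arange(M) - M / 2, np.arange(N) - N / 2, indexing="ij")
a = (np.hypot(xx, yy) <= M / 2).astype(float)      # entrance-pupil transmission
phi = rng.uniform(-np.pi, np.pi, size=(M, N))      # unknown phase errors

def apply_A(g):
    """Apply A = D(a) D(exp(j*phi)) F, with F a unitarily scaled 2-D DFT."""
    Fg = np.fft.fft2(g) / np.sqrt(M * N)           # 1/sqrt(MN) makes F^H F = I
    return a * np.exp(1j * phi) * Fg

y = apply_A(g)

# The unitary scaling conserves energy through the propagation step:
Fg = np.fft.fft2(g) / np.sqrt(M * N)
print(np.isclose(np.linalg.norm(Fg), np.linalg.norm(g)))
```

The pupil and phase terms act elementwise, so the full forward model costs only one FFT per application, which is what makes the iterative reconstruction tractable at large grid sizes.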

Appendix B: Derivation of the EM Surrogate Function

In this section, we derive the EM surrogate for the MAP cost function. We start by writing Eq. (12) as

$$Q(r,\phi; r',\phi') = -\mathbb{E}\!\left[\, \log p(y \mid g, \phi) + \log p(g \mid r) \,\middle|\, y, r', \phi' \right] - \log p(r), \tag{50}$$

where we have used Bayes' theorem inside the expectation and the fact that $y$ is conditionally independent of $r$ given $g$. Next, we substitute in the forward and prior models specified in Sections 2 and 3. This gives

$$Q(r,\phi; r',\phi') = \frac{1}{\sigma_w^2}\, \mathbb{E}\!\left[\, \left\| y - A_\phi g \right\|^2 \,\middle|\, y, r', \phi' \right] + \sum_i \left( \frac{\mathbb{E}\!\left[\, |g_i|^2 \mid y, r', \phi' \right]}{r_i} + \log r_i \right) - \log p(r) + Z, \tag{51}$$

where $A_\phi$ indicates the matrix $A$ is dependent on $\phi$ and the variable $Z$ is a constant with respect to $r$ and $\phi$.

To evaluate the expectations in Eq. (51), we must specify the conditional posterior distribution of $g$. Using Bayes' theorem,

$$p(g \mid y, r', \phi') = \frac{1}{Z'} \exp\!\left\{ -\frac{1}{\sigma_w^2} \left\| y - A_{\phi'} g \right\|^2 - g^H D(r')^{-1} g \right\}, \tag{52}$$

where $Z'$ is the partition function which absorbs any exponential terms that are constant with respect to $g$. By completing the square, we can show that the posterior distribution is a complex Gaussian with mean

$$\mu = \frac{1}{\sigma_w^2}\, C A_{\phi'}^H y \tag{53}$$

and covariance

$$C = \left( \frac{1}{\sigma_w^2} A_{\phi'}^H A_{\phi'} + D(r')^{-1} \right)^{-1}. \tag{54}$$
Using the posterior distribution specified by Eq. (52), we can evaluate the expectations in Eq. (51) to get the final form of our EM surrogate function given by

$$Q(r,\phi; r',\phi') = \frac{1}{\sigma_w^2} \left( \left\| y - A_\phi \mu \right\|^2 + \mathrm{tr}\!\left\{ A_\phi C A_\phi^H \right\} \right) + \sum_i \left( \frac{|\mu_i|^2 + C_{i,i}}{r_i} + \log r_i \right) - \log p(r) + Z, \tag{55}$$

where $\mu_i$ is the $i$th element of the posterior mean and $C_{i,i}$ is the $i$th diagonal element of the posterior covariance.
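As a sanity check on the completing-the-square result, the small-scale NumPy sketch below computes the posterior mean and covariance directly and verifies that the mean zeroes the gradient of the exponent in Eq. (52). The matrix, data vector, reflectance, and noise variance are random stand-ins, not quantities from the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma2 = 16, 0.1

# Random small-scale stand-ins for A_{phi'}, the data y, and reflectance r'.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)
r = rng.uniform(0.5, 2.0, size=n)

# Posterior covariance and mean of the complex Gaussian p(g | y, r', phi').
C = np.linalg.inv(A.conj().T @ A / sigma2 + np.diag(1.0 / r))
mu = C @ A.conj().T @ y / sigma2

# mu must minimize ||y - A g||^2 / sigma2 + g^H D(1/r)^{-1}... i.e. the exponent,
# so its (Wirtinger) gradient at g = mu should vanish.
grad = -A.conj().T @ (y - A @ mu) / sigma2 + mu / r
print(np.linalg.norm(grad))
```

In practice one never forms $C$ explicitly at image scale; the diagonal entries $C_{i,i}$ and the action of $C$ are approximated instead, but the small dense case makes the algebra easy to verify.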


Acknowledgments

We would like to thank Dr. Samuel T. Thurman for his helpful discussions and support of this work. In addition, we would also like to thank the Maui High Performance Computing Center (MHPCC) for their assistance and resources, which allowed us to test the proposed algorithm over a wide range of conditions.


References

  • [1] P. J. Winzer and W. R. Leeb, “Coherent LIDAR at low signal powers: basic considerations on optical heterodyning,” Journal of Modern Optics 45, 1549–1555 (1998).
  • [2] J. C. Marron and K. S. Schroeder, “Holographic laser radar,” Optics Letters 18, 385–387 (1993).
  • [3] S. T. Thurman and J. R. Fienup, “Phase-error correction in digital holography,” JOSA A 25, 983–994 (2008).
  • [4] S. T. Thurman and J. R. Fienup, “Correction of anisoplanatic phase errors in digital holography,” JOSA A 25, 995–999 (2008).
  • [5] J. C. Marron, R. L. Kendrick, N. Seldomridge, T. D. Grow, and T. A. Höft, “Atmospheric turbulence correction using digital holographic detection: experimental results,” Optics Express 17, 11638–11651 (2009).
  • [6] J. Marron, R. Kendrick, S. Thurman, N. Seldomridge, T. Grow, C. Embry, and A. Bratcher, “Extended-range digital holographic imaging,” in SPIE Defense, Security, and Sensing (International Society for Optics and Photonics, 2010), p. 76841J.
  • [7] A. E. Tippie and J. R. Fienup, “Phase-error correction for multiple planes using a sharpness metric,” Optics Letters 34, 701–703 (2009).
  • [8] A. E. Tippie and J. R. Fienup, “Multiple-plane anisoplanatic phase correction in a laboratory digital holography experiment,” Optics Letters 35, 3291–3293 (2010).
  • [9] A. E. Tippie, “Aberration correction in digital holography,” Ph.D. thesis, University of Rochester (2012).
  • [10] J. R. Fienup, “Phase error correction in digital holographic imaging,” in Digital Holography and Three-Dimensional Imaging (Optical Society of America, 2014), paper DM1B.1.
  • [11] R. A. Muller and A. Buffington, “Real-time correction of atmospherically degraded telescope images through image sharpening,” JOSA 64, 1200–1210 (1974).
  • [12] M. F. Spencer, I. V. Dragulin, D. S. Cargill, and M. J. Steinbock, “Digital holography wave-front sensing in the presence of strong atmospheric turbulence and thermal blooming,” in SPIE Optical Engineering + Applications (International Society for Optics and Photonics, 2015), p. 961705.
  • [13] M. T. Banet, M. F. Spencer, R. A. Raynor, and D. K. Marker, “Digital holography wavefront sensing in the pupil-plane recording geometry for distributed-volume atmospheric aberrations,” in SPIE Optical Engineering + Applications (International Society for Optics and Photonics, 2016), p. 998208.
  • [14] M. F. Spencer, R. A. Raynor, M. T. Banet, and D. K. Marker, “Deep-turbulence wavefront sensing using digital-holographic detection in the off-axis image plane recording geometry,” Optical Engineering 56, 031213 (2017).
  • [15] C. Pellizzari, R. Trahan III, H. Zhou, S. Williams, S. Williams, B. Nemati, M. Shao, and C. A. Bouman, “Synthetic aperture LADAR: a model based approach,” submitted to IEEE Transactions on Computational Imaging.
  • [16] J. W. Goodman, Introduction to Fourier Optics (Roberts and Company, Englewood, CO, 2005).
  • [17] R. L. Lucke and L. J. Rickard, “Photon-limited synthetic aperture imaging for planet surface studies,” Applied Optics 41, 5084–5095 (2002).
  • [18] V. V. Protopopov, Laser Heterodyning, vol. 149 (Springer, 2009).
  • [19] S. M. Kay, Fundamentals of Statistical Signal Processing, vol. II: Detection Theory (Prentice Hall, 1993).
  • [20] C. A. Bouman, Model Based Image Processing (unpublished, 2015).
  • [21] J. B. Thibault, K. Sauer, C. Bouman, and J. Hsieh, “A three-dimensional statistical approach to improved image quality for multi-slice helical CT,” Medical Physics 34, 4526–4544 (2007).
  • [22] C. Pellizzari, R. Trahan III, H. Zhou, S. Williams, S. Williams, B. Nemati, M. Shao, and C. A. Bouman, “Optically coherent image formation and denoising using plug and play inversion framework,” JOSA (2017).
  • [23] C. Forbes, M. Evans, N. Hastings, and B. Peacock, Statistical Distributions (John Wiley & Sons, 2011).
  • [24] J. D. Schmidt, Numerical Simulation of Optical Wave Propagation with Examples in MATLAB (SPIE, Bellingham, WA, 2010).
  • [25] L. C. Andrews and R. L. Phillips, Laser Beam Propagation through Random Media (SPIE Press, Bellingham, WA, 2005).
  • [26] H. A. Haus, Electromagnetic Noise and Quantum Optical Measurements (Springer Science & Business Media, 2012).
  • [27] J. W. Hardy, Adaptive Optics for Astronomical Telescopes (Oxford University Press, 1998).