Inverse Problems, Deep Learning, and Symmetry Breaking
Abstract
In many physical systems, inputs related by intrinsic system symmetries are mapped to the same output. When inverting such systems, i.e., solving the associated inverse problems, there is no unique solution. This causes fundamental difficulties for deploying the emerging end-to-end deep learning approach. Using the generalized phase retrieval problem as an illustrative example, we show that careful symmetry breaking on the training data can resolve these difficulties and significantly improve the learning performance. We also extract and highlight the underlying mathematical principle of the proposed solution, which is directly applicable to other inverse problems.
1 Introduction
1.1 Inverse problems and deep learning
For many physical systems, we observe only the output and strive to infer the input. The inference task is captured by the umbrella term inverse problem. Formally, the underlying system is modeled by a forward mapping $f$, and solving the inverse problem amounts to identifying the inverse mapping $f^{-1}$; see Fig. 1. Inverse problems abound in numerous fields and take diverse forms: structure from motion in computer vision [HZ03], image restoration in image processing [GW17], source separation in acoustics [Com10], inverse scattering in physics [CK13], tomography in medical imaging [Her09], soil profile estimation in remote sensing [ENN94], various factorization problems in machine learning [Ge13], to name a few.
Let $\mathbf{y}$ denote the observed output. Traditionally, inverse problems are mostly formulated as optimization problems of the form
(1.1)  $\min_{\mathbf{x}} \; \ell\left(\mathbf{y}, f(\mathbf{x})\right) + \lambda\, \Omega(\mathbf{x}),$
where $\mathbf{x}$ represents the input to be estimated. In the formulation, $\ell(\mathbf{y}, f(\mathbf{x}))$ ensures that $f(\mathbf{x}) \approx \mathbf{y}$ in an appropriate sense ($\ell$ denotes a loss function); $\Omega(\mathbf{x})$ encodes prior knowledge about $\mathbf{x}$: often the input cannot be uniquely determined from the observation alone (i.e., the problem is ill-posed), and the knowledge-based regularization may help mitigate the issue and make the problem well-posed; and $\lambda$ is a trade-off parameter. For simple inverse problems, Eq. 1.1 may admit a simple closed-form solution. For general problems, iterative numerical optimization algorithms are often developed to solve Eq. 1.1 [Kir11].
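As a concrete instance of formulation (1.1), the following sketch solves a toy linear inverse problem by plain gradient descent, with a squared loss and Tikhonov regularization $\Omega(\mathbf{x}) = \|\mathbf{x}\|_2^2$. The function name, step-size rule, and iteration count are ours, chosen for illustration.

```python
import numpy as np

def solve_tikhonov(A, y, lam=0.1, step=None, iters=2000):
    """Minimize ||y - A x||^2 + lam * ||x||^2 by gradient descent."""
    m, n = A.shape
    if step is None:
        # 1/L, with L an upper bound on the gradient's Lipschitz constant
        step = 1.0 / (2 * np.linalg.norm(A, 2) ** 2 + 2 * lam)
    x = np.zeros(n)
    for _ in range(iters):
        grad = 2 * A.T @ (A @ x - y) + 2 * lam * x  # gradient of loss + regularizer
        x = x - step * grad
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = rng.standard_normal(5)
y = A @ x_true
x_hat = solve_tikhonov(A, y, lam=1e-3)
print(np.linalg.norm(x_hat - x_true))  # small: the estimate is near the truth
```

For an overdetermined, well-conditioned linear model like this one, no regularization is strictly needed; the $\lambda$ term illustrates how prior knowledge enters the objective.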
The advent of deep learning has brought tremendous novel opportunities for solving inverse problems. For example, one can learn a data-driven loss term $\ell$ and regularization term $\Omega$, and one can also replace components of iterative methods for solving Eq. 1.1 by trained neural networks.
These ideas have enabled capturing structures in practical data that are traditionally difficult to encode using analytic expressions, and have led to faster and/or more effective algorithms. The most radical is perhaps the end-to-end approach: a deep neural network (DNN) is directly set up and trained to approximate the inverse mapping $f^{-1}$—backed by the famous universal approximation theorem [PMR17]—based on a sufficiently large set of $(\mathbf{y}_i, \mathbf{x}_i)$ pairs; see Fig. 1. Instead of citing the abundance of individual papers, we refer the reader to the excellent review articles [MJU17, LIMK18, AMOS19] on these developments.
1.2 Difficulty with symmetries
In this paper, we focus on the end-to-end learning approach for inverse problems. This approach has recently been widely acclaimed for its remarkable performance on several tasks such as image denoising [XXC12], image super-resolution [DLHT14], image deblurring [XRLJ14], and sparse recovery [MB17]. These problems are all linear inverse problems, for which the forward mapping is linear. What about nonlinear problems?
When the forward mapping is nonlinear, we start to see intrinsic symmetries in many systems. We give several quick examples here:

Fourier phase retrieval [BBE17] The forward model is $\mathbf{Y} = |\mathcal{F}(\mathbf{X})|$, where $\mathbf{X}$ and $\mathbf{Y}$ are matrices and $\mathcal{F}$ is a 2D oversampled Fourier transform. The operation $|\cdot|$ takes complex magnitudes of the entries elementwise. It is known that translations and conjugate flippings applied on $\mathbf{X}$, and also global phase transfers of the form $e^{i\theta}\mathbf{X}$, all lead to the same $\mathbf{Y}$.

Blind deconvolution [LG00, TB10] The forward model is $\mathbf{y} = \mathbf{a} \circledast \mathbf{x}$, where $\mathbf{a}$ is the convolution kernel, $\mathbf{x}$ is the signal (e.g., image) of interest, and $\circledast$ denotes circular convolution. Both $\mathbf{a}$ and $\mathbf{x}$ are inputs. Here, $(\alpha \mathbf{a}, \alpha^{-1}\mathbf{x})$ for any scalar $\alpha \neq 0$, and circularly shifting $\mathbf{a}$ to the left and $\mathbf{x}$ to the right by the same amount, all leave $\mathbf{y}$ unchanged.

Blind source separation [Com10] The forward model is $\mathbf{Y} = \mathbf{A}\mathbf{X}$, where $\mathbf{A}$ is the mixing matrix, $\mathbf{X}$ is the source matrix, and both $\mathbf{A}$ and $\mathbf{X}$ are inputs. The scaling symmetry similar to the above is also present here. Moreover, signed permutations are another kind of symmetry, i.e., $(\mathbf{A}\mathbf{\Pi}\mathbf{\Sigma}, \mathbf{\Sigma}^{-1}\mathbf{\Pi}^\top \mathbf{X})$ leads to the same $\mathbf{Y}$ for any permutation matrix $\mathbf{\Pi}$ and any diagonal sign matrix $\mathbf{\Sigma}$.¹
Synchronization over compact groups [PWBM18] For group elements $g_1, \dots, g_n$ over a compact group $\mathfrak{G}$, the observation is a set of pairwise relative measurements $g_i g_j^{-1}$ for all $(i, j)$ in an index set. Obviously, any global shift of the form $g_i \mapsto g_i g$ for all $i$, for any $g \in \mathfrak{G}$, leads to the same set of measurements.
Solving these inverse problems means recovering the input up to the intrinsic system symmetries, as evidently this is the best one can hope for.
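The symmetries above are easy to verify numerically. As a small sketch of our own (not from the paper), the following checks the scaling and shift symmetries of blind deconvolution, with circular convolution computed via the DFT:

```python
import numpy as np

def circ_conv(a, x):
    """Circular convolution of two real vectors via the DFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(x)))

rng = np.random.default_rng(1)
a = rng.standard_normal(8)
x = rng.standard_normal(8)
y = circ_conv(a, x)

# Scaling symmetry: (alpha * a, x / alpha) gives the same observation y.
alpha = 2.5
assert np.allclose(y, circ_conv(alpha * a, x / alpha))

# Shift symmetry: shifting a left and x right by the same amount preserves y.
s = 3
assert np.allclose(y, circ_conv(np.roll(a, -s), np.roll(x, s)))
```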
Are symmetries good or bad for problem solving? That depends on how we deal with them.
Imagine that we are bored and try to train a DNN to take square roots. We randomly draw sufficiently many $x_i$'s and thereby generate training samples $\{(x_i^2, x_i)\}$. We feed the data samples to the DNN and push the button to start torturing our GPU machines. We are happy so long as ultimately the trained DNN outputs a good estimate of the square root for any input, up to sign. Does it work out as expected? It turns out not quite. For any two squares $x_i^2$ and $x_j^2$ that are near each other, the corresponding $x_i$ and $x_j$ may be close in magnitude but differ in sign. This implies that the function determined by the training data points is highly oscillatory (see Fig. 2) and behaves like a function with many and frequent points of discontinuity. Interestingly, the more training samples one gathers, the more serious the problem is. Intuitively, DNNs with continuous or even smooth activation functions struggle when approximating these irregular functions.
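This oscillation can be made concrete in a few lines: sort the training pairs $(x_i^2, x_i)$ by the input $x_i^2$ and count sign flips between neighboring targets. The flip count grows with the sample size, confirming that more data makes the target function more irregular. This is our illustrative sketch, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sign_flips(num_samples):
    """Count sign changes between targets of neighboring training inputs."""
    x = rng.uniform(-1, 1, num_samples)
    y = x ** 2
    order = np.argsort(y)          # neighbors in the input (observation) space
    x_sorted = x[order]            # the corresponding regression targets
    return int(np.sum(x_sorted[:-1] * x_sorted[1:] < 0))

# Each consecutive pair flips sign with probability 1/2, so roughly half of
# all adjacent pairs are discontinuity-like jumps, at any sample size.
print(sign_flips(100), sign_flips(10000))
```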
1.3 Our contribution: symmetry breaking
The above example is of course contrived, and an easy fix for the problem is to take only positive (or negative) $x_i$'s. For general inverse problems with symmetries, so long as the symmetries can relate remote inputs to the same output, as do all the symmetries we discussed in the quick examples, the above issue of approximating highly irregular functions arises. It is natural to ask whether our easy fix for learning the square root can be generalized. In this paper,

We take the generalized phase retrieval problem as an example, and show that effective symmetry breaking can be performed for both the realvalued and complexvalued versions of the problem. We also corroborate our theory with extensive numerical experiments.

By working out the example, we identify the basic principle of effective symmetry breaking, which can be readily applied to other inverse problems with symmetries.
Notation.
Boldface capital letters are matrices (e.g., $\mathbf{A}$) and boldface lowercase letters are vectors (e.g., $\mathbf{x}$). $\mathbb{R}_{++}$ denotes the set of positive reals. The measure is by default the Lebesgue measure for Euclidean spaces. Other notations are either standard or defined inline.
2 Generalized Phase Retrieval
Fourier phase retrieval (PR), alluded to above, is a classical problem in computational imaging with a host of applications [SEC15, BBE17]. Despite the existence of effective empirical methods to solve the problem [Fie82], there is little theory on when and why these methods work, or whether there are alternatives.
Over the past decade, numerous papers from the signal processing and applied mathematics communities have tried to develop provable methods for PR, based on generalized models in which the Fourier transform is replaced by a generic, often random, linear operator. This is the generalized PR problem. The name arguably is a misnomer: when the Fourier transform is replaced by a random linear operator, the randomness often helps kill the translation and conjugate flipping symmetries in Fourier PR, and only the global phase symmetry is left. So the generalization often leads to simplified, if not oversimplified, versions of PR.
Nonetheless, the focus of this paper is not on solving (Fourier) PR per se, but on demonstrating the principle of symmetry breaking, taking generalized PR as an example. We work with two versions of generalized PR:
 Real Gaussian PR

The forward model: $\mathbf{y} = |\mathbf{A}\mathbf{x}|^2$, where $\mathbf{x} \in \mathbb{R}^n$, $\mathbf{y} \in \mathbb{R}^m$, and $\mathbf{A} \in \mathbb{R}^{m \times n}$ is iid real Gaussian. The absolute-square operator $|\cdot|^2$ is applied elementwise. The only symmetry is the global sign, as $\mathbf{x}$ and $-\mathbf{x}$ are mapped to the same $\mathbf{y}$.
 Complex Gaussian PR

The forward model: $\mathbf{y} = |\mathbf{A}\mathbf{z}|^2$, where $\mathbf{z} \in \mathbb{C}^n$, $\mathbf{y} \in \mathbb{R}^m$, and $\mathbf{A} \in \mathbb{C}^{m \times n}$ is iid complex Gaussian. The modulus-square operator $|\cdot|^2$ is applied elementwise. The only symmetry is global phase shift, as $e^{i\theta}\mathbf{z}$ for all $\theta \in [0, 2\pi)$ are mapped to the same $\mathbf{y}$.
These two versions have been intensively studied in the recent developments of generalized PR; see, e.g., [CSV12, CLS15, SQW17].
2.1 Real Gaussian PR
Recall that in our learning-square-root example, the sign ambiguity caused the irregularity in the function determined by the training samples. A similar problem occurs here. For two samples that are close in the observation, say $\mathbf{y}$ and $\mathbf{y} + \boldsymbol{\delta}$ for a small $\boldsymbol{\delta}$, the paired inputs may be $\mathbf{x}$ and $-\mathbf{x} + \boldsymbol{\varepsilon}$ for a small $\boldsymbol{\varepsilon}$. Thus, for the inverse mapping that our DNN tries to approximate, a small perturbation in the variable leads to a dramatic change in the function value, and sharp changes of this kind happen frequently as we have many data samples.
It is tempting to generalize our solution to the square-root example. There, the symmetry is the sign, and we broke it by restricting the range of the desired DNN output to positive values. Here, the symmetry is the global sign of vectors, and antipodal points map to the same observation. Thus, an intuitive generalization is to break antipodal point pairs, and a simple solution is to make a hyperplane cut and take samples from only one side of the hyperplane! This is illustrated in Fig. 3, and we use the coordinate hyperplane $\{\mathbf{x} \in \mathbb{R}^n : x_n = 0\}$; in $\mathbb{R}^3$, this is the $xy$-plane.
In $\mathbb{R}^3$, we can see directly from Fig. 3 that the upper half space cut out by the $xy$-plane is connected. Moreover, it is representative, as any point in the space (except for the $xy$-plane itself) can be represented by a point in this set after an appropriate global sign adjustment, and it cannot be made smaller while remaining representative. The following proposition says that these properties also hold for high-dimensional spaces.
Proposition 2.1.
Let
(2.1)  $\mathcal{H} \doteq \{\mathbf{x} \in \mathbb{R}^n : x_n = 0\},$
(2.2)  $\mathcal{H}_+ \doteq \{\mathbf{x} \in \mathbb{R}^n : x_n > 0\}.$
Then the following properties hold:

(connected) $\mathcal{H}_+$ is connected in $\mathbb{R}^n$;

(representative) $\mathcal{H}$ is of measure zero [Rud06], and for any $\mathbf{x} \in \mathbb{R}^n \setminus \mathcal{H}$, either $\mathbf{x} \in \mathcal{H}_+$ or $-\mathbf{x} \in \mathcal{H}_+$. That is, $\mathcal{H}_+$ can represent any point in $\mathbb{R}^n \setminus \mathcal{H}$;

(smallest) If we remove any single point $\mathbf{x}$ from $\mathcal{H}_+$, then no other point in $\mathcal{H}_+$ can represent $\mathbf{x}$. Namely, the resulting set is not representative anymore.
Proof.
First recall that if any two points in a given set can be connected by a continuous path lying entirely in the set, then the set is connected [Kel17]. Now any two points $\mathbf{x}, \mathbf{x}' \in \mathcal{H}_+$ can be connected by the line segment $\{t\mathbf{x} + (1-t)\mathbf{x}' : t \in [0, 1]\}$, which lies entirely in $\mathcal{H}_+$ since the last coordinate stays positive. Thus $\mathcal{H}_+$ is connected.
Moreover, $\mathcal{H}$ has Lebesgue measure zero since
(2.3)  $\mu(\mathcal{H}) = \int_{\mathbb{R}^n} \mathbb{1}_{\mathcal{H}}(\mathbf{x}) \, d\mathbf{x} = \int_{\mathbb{R}^{n-1}} \left( \int_{\mathbb{R}} \mathbb{1}_{\{0\}}(x_n) \, dx_n \right) d\mathbf{x}_{1:n-1} = \int_{\mathbb{R}^{n-1}} 0 \, d\mathbf{x}_{1:n-1} = 0.$
Here $\mathbb{1}_{\mathcal{H}}$ is the indicator function of $\mathcal{H}$, and $\mathbf{x}_{1:n-1}$ is the vector formed by the first $n-1$ coordinates of $\mathbf{x}$. We used Tonelli's theorem [Rud06] to obtain the second equality, and the fact $\int_{\mathbb{R}} \mathbb{1}_{\{0\}}(x_n) \, dx_n = 0$ to obtain the third equality. The rest of (ii) is straightforward.
For (iii), suppose that there is another point $\mathbf{x}' \in \mathcal{H}_+$ which can represent $\mathbf{x} \in \mathcal{H}_+$ up to a global sign flip. Since both $\mathbf{x}$ and $\mathbf{x}'$ are in $\mathcal{H}_+$, they must have the same (positive) sign in the last coordinate, so it must be that $\mathbf{x}' = \mathbf{x}$. We get a contradiction. ∎
The coordinate hyperplane we use is arbitrary, and we can prove similar results for any hyperplane passing through the origin. The set $\mathcal{H}$ is negligible, as the probability of sampling a point exactly from $\mathcal{H}$ is zero. In fact, we can break the symmetry in $\mathcal{H}$ as well by recursively applying the current idea to that lower-dimensional copy of $\mathbb{R}^{n-1}$. For the sake of simplicity, and in view of the probable diminishing return, we will not pursue the refined scheme here.
Does this help solve our problem? Imagine that we have collected a set of training samples $\{(\mathbf{y}_i, \mathbf{x}_i)\}$ for real Gaussian PR. Now we preprocess the data samples according to the above hyperplane cut: for each $i$, if $\mathbf{x}_i$ lies above $\mathcal{H}$, we simply leave it untouched; if $\mathbf{x}_i$ lies below $\mathcal{H}$, we switch the sign of $\mathbf{x}_i$; if $\mathbf{x}_i$ happens to lie on $\mathcal{H}$, we make a small perturbation to $\mathbf{x}_i$ and then adjust the sign as before. Now $\mathbf{x}_i \in \mathcal{H}_+$ for all $i$. Since $\mathcal{H}_+$ is a connected set, when there are sufficiently dense training samples, small perturbations to $\mathbf{y}$ lead only to small perturbations to $\mathbf{x}$. So we now have a nicely behaved target function to approximate using a DNN. Also, $\mathcal{H}_+$ being representative implies that a sufficiently dense sample set should enable reasonable learning.
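A minimal sketch of this preprocessing step, with names of our own choosing, is:

```python
import numpy as np

def break_sign_symmetry(X, eps=1e-12):
    """Map each row x to x or -x so that its last coordinate is positive,
    i.e., move every training target into the half space H+."""
    X = np.array(X, dtype=float, copy=True)
    # points (numerically) on the hyperplane: tiny perturbation, then proceed
    on_plane = np.abs(X[:, -1]) < eps
    X[on_plane, -1] += eps
    signs = np.sign(X[:, -1])       # +1 above the hyperplane, -1 below
    return X * signs[:, None]       # flip the rows that lie below

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 5))
Xb = break_sign_symmetry(X)
assert np.all(Xb[:, -1] > 0)               # every target now lies in H+
assert np.allclose(np.abs(Xb), np.abs(X))  # only global signs were changed
```

Note that the observations $\mathbf{y}_i$ are untouched: $\mathbf{x}_i$ and $-\mathbf{x}_i$ give the same $\mathbf{y}_i$, so flipping the target does not invalidate the pair.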
The set of three properties is also necessary for effective symmetry breaking and learning. Being representative is easy to understand. If the representative set is not the smallest, symmetry is still present for certain points in the set, and so symmetry breaking is not complete. Now, a set can be smallest representative but not connected. An example in the setting of Proposition 2.1 would be taking a small strict subset $\mathcal{S}$ of $\mathcal{H}_+$ and considering the set $(\mathcal{H}_+ \setminus \mathcal{S}) \cup (-\mathcal{S})$. It is easy to verify that this set is smallest representative, but not connected. This leaves us with the trouble of approximating (locally) highly oscillatory functions.
2.2 Complex Gaussian PR
We now move to the complex case and deal with a different kind of symmetry. Recall that in complex Gaussian PR, $e^{i\theta}\mathbf{z}$ for all $\theta \in [0, 2\pi)$ are mapped to the same $\mathbf{y}$, i.e., global phase shift is the symmetry. These "equivalent" points form a continuous curve in the complex space, in contrast to the isolated antipodal point pairs in the real case.
Inspired by the real version, we generalize the three desired properties for symmetry breaking to the complex case. Particularly, “representative” in this context means:
Definition 2.2 (representative).
Let $\mathcal{S}$ be a subset of $\mathbb{C}^n$. We say that $\mathcal{S}$ is a representative subset for $\mathbb{C}^n$ if the following holds: there is a measure-zero subset $\mathcal{N}$ of $\mathbb{C}^n$ such that for any $\mathbf{z} \in \mathbb{C}^n \setminus \mathcal{N}$ we can find a $\mathbf{w} \in \mathcal{S}$ and a $\theta \in [0, 2\pi)$ so that $\mathbf{z} = e^{i\theta} \mathbf{w}$.
In words, a subset is representative in this context if, except for a negligible subset of $\mathbb{C}^n$, any element of $\mathbb{C}^n$ can be represented by an element of the subset after an appropriate global phase shift.
Definition 2.3 (smallest representative).
Let $\mathcal{S}$ be a subset of $\mathbb{C}^n$. We say that $\mathcal{S}$ is a smallest representative subset for $\mathbb{C}^n$ if it is representative and no element of $\mathcal{S}$ can be represented by a distinct element of $\mathcal{S}$.
To construct a smallest representative set for $\mathbb{C}^n$, it is helpful to start with low dimensions. When $n = 1$, any ray stemming from the origin (with the origin removed) is a smallest representative subset for $\mathbb{C}$. For simplicity, we can take the positive real axis $\mathbb{R}_{++}$. When $n = 2$, it is natural to use the building block for $n = 1$ and start to consider product constructions of the form $\mathbb{R}_{++} \times \mathcal{T}$ with $\mathcal{T} \subset \mathbb{C}$. Similarly, for high dimensions, we try constructions of the form $\mathbb{R}_{++} \times \mathcal{T}$ with $\mathcal{T} \subset \mathbb{C}^{n-1}$. Another consideration is the measure-zero set. In the real case, we used a coordinate hyperplane. Here, as a natural generalization, we take a complex hyperplane:
(2.4)  $\mathcal{N} \doteq \{\mathbf{z} \in \mathbb{C}^n : z_1 = 0\}.$
The question now is how to choose $\mathcal{T}$ to make $\mathbb{R}_{++} \times \mathcal{T}$ a smallest representative subset for $\mathbb{C}^n$.
It turns out we actually do not get many choices. The following result says that the real positivity assumed for the first coordinate constrains the construction significantly: the rest of the coordinates are forced to be the entire complex space $\mathbb{C}^{n-1}$.
Proposition 2.4.
If $\mathbb{R}_{++} \times \mathcal{T}$ with $\mathcal{T} \subset \mathbb{C}^{n-1}$ is a representative subset for $\mathbb{C}^n$, then $\mathcal{T} = \mathbb{C}^{n-1}$.
Proof.
We prove by contradiction. Suppose that there is a $\mathbf{t} \in \mathbb{C}^{n-1}$ with $\mathbf{t} \notin \mathcal{T}$. Then for any $r \in \mathbb{R}_{++}$, $(r, \mathbf{t}) \notin \mathbb{R}_{++} \times \mathcal{T}$ and $(r, \mathbf{t}) \notin \mathcal{N}$. Since $\mathbb{R}_{++} \times \mathcal{T}$ is representative, we can find a $(s, \mathbf{w}) \in \mathbb{R}_{++} \times \mathcal{T}$ and a $\theta \in [0, 2\pi)$ so that
(2.5)  $(r, \mathbf{t}) = e^{i\theta} (s, \mathbf{w}).$
Since $\mathbb{R}_{++} \times \mathcal{T}$ has its first coordinate restricted to positive real numbers, by looking at the first component of Eq. 2.5 we have
(2.6)  $r = e^{i\theta} s,$
from which we deduce that $e^{i\theta} = r/s \in \mathbb{R}_{++}$, so $\theta = 0$ and hence $\mathbf{t} = \mathbf{w} \in \mathcal{T}$. This contradicts our assumption that $\mathbf{t} \notin \mathcal{T}$. ∎
We now focus on this candidate set
(2.7)  $\mathbb{R}_{++} \times \mathbb{C}^{n-1}.$
Our next proposition confirms that this is indeed a good choice.
Proposition 2.5.
The set $\mathcal{N}$ is of measure zero, and $\mathbb{R}_{++} \times \mathbb{C}^{n-1}$ is connected and is a smallest representative subset for $\mathbb{C}^n$.
Proof.
First, $\mathcal{N}$ has measure zero for the same reason as in Eq. 2.3. Next, it is clear that any two points $\mathbf{z}, \mathbf{z}' \in \mathbb{R}_{++} \times \mathbb{C}^{n-1}$ can be connected by the line segment $\{t\mathbf{z} + (1-t)\mathbf{z}' : t \in [0, 1]\}$, which stays in the set, and so $\mathbb{R}_{++} \times \mathbb{C}^{n-1}$ is a connected set. To see that it is representative, for any $\mathbf{z} \in \mathbb{C}^n \setminus \mathcal{N}$, where $z_1 = r e^{i\phi}$ with $r > 0$, one can choose $\theta = -\phi$ so that $e^{i\theta} \mathbf{z} \in \mathbb{R}_{++} \times \mathbb{C}^{n-1}$. To show that it is also smallest, we use an argument similar to that in Proposition 2.4. Let $\mathbf{z} = (r, \mathbf{t})$ with $r \in \mathbb{R}_{++}$ and $\mathbf{t} \in \mathbb{C}^{n-1}$. If another element $(s, \mathbf{w})$ of the set can be represented by $\mathbf{z}$, namely, if there is a $\theta$ such that $(s, \mathbf{w}) = e^{i\theta} (r, \mathbf{t})$, then we need to have $s = e^{i\theta} r$ and $\mathbf{w} = e^{i\theta} \mathbf{t}$. That is,
(2.8)  $e^{i\theta} = s/r \in \mathbb{R}_{++}.$
Since $\theta \in [0, 2\pi)$, Eq. 2.8 implies that $\theta = 0$, and hence $(s, \mathbf{w}) = (r, \mathbf{t})$. But this contradicts the assumption that $(s, \mathbf{w})$ is distinct, and thus no element of $\mathbb{R}_{++} \times \mathbb{C}^{n-1}$ can be represented by a distinct element of the set. ∎
So our construction enjoys the three desired properties, similar to the real case, despite the fact that the problem symmetry is different here. Once we emulate the data preprocessing step of the real case, i.e., all $\mathbf{z}_i$'s of the training data points are mapped into $\mathbb{R}_{++} \times \mathbb{C}^{n-1}$ by appropriate global phase shifts, we obtain an effective symmetry breaking algorithm for complex Gaussian PR.
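The complex preprocessing step is equally short, a sketch under our own naming: rotate each target by the global phase that makes its first coordinate a positive real, landing it in $\mathbb{R}_{++} \times \mathbb{C}^{n-1}$.

```python
import numpy as np

def break_phase_symmetry(Z):
    """Map each row z to e^{-i arg(z_1)} z, so z_1 becomes positive real."""
    Z = np.asarray(Z, dtype=complex)
    phase = np.exp(-1j * np.angle(Z[:, :1]))  # one unit-modulus rotation per row
    return Z * phase

rng = np.random.default_rng(0)
Z = rng.standard_normal((500, 4)) + 1j * rng.standard_normal((500, 4))
Zb = break_phase_symmetry(Z)
assert np.allclose(Zb[:, 0].imag, 0)       # first coordinate is real ...
assert np.all(Zb[:, 0].real > 0)           # ... and positive
assert np.allclose(np.abs(Zb), np.abs(Z))  # only a global phase was changed
```

As in the real case, the observations are untouched, since $\mathbf{z}$ and $e^{i\theta}\mathbf{z}$ produce the same $\mathbf{y}$.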
For general inverse problems, although the symmetries might be very different from the ones here and the sample spaces could also be diverse, the three properties we have identified, which concern only the geometric and topological aspects of the space, can potentially generalize as a basic principle for effective symmetry breaking.
3 Related Work
As alluded to above, there have recently been intensive research efforts on solving inverse problems using deep learning [MJU17, LIMK18, AMOS19]. The end-to-end approach is attractive not only because of its simplicity, but also because (i) we do not even need to know the forward models, so long as we can gather sufficiently many data samples and know weak system properties such as symmetries—e.g., this is handy for complex imaging systems [HTT16, LXT18]; and (ii) traditional methods or their alternatives have rarely worked for certain problems, a good example being Fourier PR [Fie82, SLLB17].
Besides linear inverse problems, the end-to-end deep learning approach has been empirically applied to a number of problems with symmetries, e.g., blind image deblurring (i.e., blind deconvolution) [TGS18], real-valued Fourier phase retrieval [SLLB17], 3D surface tangent and normal prediction [HZFG19], and non-rigid structure-from-motion [KL19, WLL20]. We believe that our work is the first to delineate the symmetry problem confronting effective learning and to propose a solution principle that likely generalizes to other inverse problems.
Mathematically, points related by symmetries form an equivalence class, and these equivalence classes form a partition of the input space of the forward model. Our symmetry breaking task effectively consists in finding a consistent representative for each equivalence class, where the consistency here requires the set of representatives to be topologically connected.
4 Numerical Experiments
In this section, we set up various numerical experiments to verify our claim that effective symmetry breaking facilitates efficient learning. Specifically, we work with both the real and complex Gaussian PR problems, and try to answer the following questions:

Do our symmetry breaking schemes help get us improved recovery performance?

How does the performance vary when the problem dimension changes?

What is the effect of the number of training samples on the recovery performance?
4.1 Basic experimental setups
Learning models
We set up an end-to-end pipeline and use neural network models to approximate the inverse mappings, as is typically done in this approach. The following are brief descriptions of the models used in our comparative study. Recall that in our problem setup for Gaussian PR, $n$ is the dimension of the input and $m$ is the dimension of the observation.

Neural Network (NN): fully connected feedforward NN with hidden-layer architecture 256-128-64.

Wide Neural Network (WNN): we increase the sizes of the hidden layers of the NN by a factor of 2. The architecture is 512-256-128.

Deep Neural Network (DNN): we increase the number and size of the hidden layers of the NN by adding two more layers. The architecture is 2048-1024-512-256-128.

Nearest Neighbors (kNN): $k$-nearest-neighbor regression, where the prediction is the average of the values of the $k$ nearest neighbors.
Data
We take $m = 4n$, which is just above the threshold for ensuring injectivity of the forward models up to the intrinsic symmetries [BCE06].²
We conduct experiments with varying input dimension and dataset size. Specifically, we experiment with $n \in \{5, 10, 15\}$ and dataset sizes of $2 \times 10^4$, $5 \times 10^4$, $10^5$, and $10^6$, respectively. We do not test higher dimensions, in view of the exponential growth of the sample requirement to cover the high-dimensional ball. For most practical inverse problems, where the input data tend to possess low-dimensional structures despite the apparent high dimensions, the sample requirement will not be an issue, and our symmetry breaking scheme naturally adapts to the structures. One may suggest performing a real-data experiment on natural images, as was done in numerous previous papers [CSV12, CLS15, SQW17]. This is sensible, but not interesting in the Gaussian PR context, as the sign (resp. phase-shift) symmetry for real (resp. complex) PR is naturally broken by the restriction of the image values to be nonnegative. As we argued before, the Gaussian settings erase the essential difficulties of Fourier PR. Real-data experiments become nontrivial in Fourier PR, as the nonnegativity restriction does not kill the flipping and translation symmetries. But that entails working out a specific symmetry breaking strategy for Fourier PR; we leave it as future work.
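For concreteness, the data generation just described can be sketched as follows. The oversampling factor $m = 4n$ reflects our reading of the setup, and the function name is ours.

```python
import numpy as np

def make_real_pr_dataset(num_samples, n, seed=0):
    """Generate (y, x) pairs for real Gaussian PR with m = 4n measurements."""
    rng = np.random.default_rng(seed)
    m = 4 * n
    A = rng.standard_normal((m, n))             # one fixed sensing matrix
    X = rng.standard_normal((num_samples, n))   # inputs x, one per row
    Y = np.abs(X @ A.T) ** 2                    # observations y = |A x|^2
    return Y, X, A

Y, X, A = make_real_pr_dataset(100, 5)
assert Y.shape == (100, 20) and X.shape == (100, 5)
assert np.allclose(Y[0], np.abs(A @ X[0]) ** 2)    # matches the forward model
assert np.allclose(Y[0], np.abs(A @ -X[0]) ** 2)   # sign symmetry in the data
```

The complex version is analogous, with $\mathbf{A}$ and $\mathbf{z}$ drawn as iid complex Gaussians.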
For all neural network models, we train them based on two variants of the training samples: one with symmetry untouched (i.e., before symmetry breaking) and one with symmetry breaking (i.e., after symmetry breaking). The former just leaves the samples unchanged, whereas the latter preprocesses the training samples using the procedures we described in Section 2 for the real and complexvalued Gaussian PR, respectively. To distinguish the two variants, we append our neural network model names with “A” to indicate after symmetry breaking and “B” to indicate before symmetry breaking.
Training and error metric
The mean squared error loss is used as the training objective. We use the Adam optimizer [KB14] and train all models up to a fixed maximum number of epochs. The learning rate is set to a fixed default, and training is stopped early if the validation loss does not decrease for several consecutive epochs. The validation set is also used for hyperparameter tuning. To train the models for complex PR, the real and imaginary parts of any complex vector are concatenated into a long real vector. The kNN model is fit on the whole training dataset and serves as a baseline for each experiment.
To imitate the real-world test scenario, we do not perform symmetry breaking on the test data. To measure performance, we use the normalized mean squared error (MSE), rectified to account for the symmetry:
(real)  (4.1)  $\mathrm{MSE} = \min_{s \in \{+1, -1\}} \; \|s\,\widehat{\mathbf{x}} - \mathbf{x}\|_2^2 \,/\, \|\mathbf{x}\|_2^2,$
(complex)  (4.2)  $\mathrm{MSE} = \min_{\theta \in [0, 2\pi)} \; \|e^{i\theta}\,\widehat{\mathbf{z}} - \mathbf{z}\|_2^2 \,/\, \|\mathbf{z}\|_2^2,$
where $\widehat{\mathbf{x}}$ (resp. $\widehat{\mathbf{z}}$) is the prediction by the learned models.
4.2 Quantitative results
Table 1: Test errors for real Gaussian PR (A: after symmetry breaking; B: before symmetry breaking).

n | Samples | NN-A | kNN | NN-B | WNN-A | kNN | WNN-B | DNN-A | kNN | DNN-B
5 | 2e4 | 0.0010 | 0.0017 | 0.0283 | 0.0018 | — | 0.0283 | 0.0010 | 0.0019 | 0.0284
5 | 5e4 | 0.0012 | — | 0.0282 | 0.0008 | 0.0017 | 0.0284 | 0.0007 | 0.0014 | 0.0285
5 | 1e5 | 0.0010 | — | 0.0284 | 0.0012 | — | 0.0283 | 0.0013 | 0.0018 | 0.0284
5 | 1e6 | 0.0007 | — | 0.0283 | 0.0005 | 0.0006 | 0.0283 | 0.0007 | 0.0008 | 0.0283
10 | 2e4 | 0.0011 | 0.0020 | 0.0082 | 0.0009 | 0.0022 | 0.0082 | 0.0021 | — | 0.0082
10 | 5e4 | 0.0009 | 0.0016 | 0.0082 | 0.0018 | — | 0.0082 | 0.0009 | 0.0020 | 0.0082
10 | 1e5 | 0.0009 | 0.0016 | 0.0082 | 0.0015 | — | 0.0082 | 0.0008 | 0.0017 | 0.0082
10 | 1e6 | 0.0007 | 0.0013 | 0.0082 | 0.0010 | — | 0.0082 | 0.0009 | 0.0011 | 0.0082
15 | 2e4 | 0.0012 | 0.0017 | 0.0038 | 0.0016 | — | 0.0038 | 0.0016 | — | 0.0038
15 | 5e4 | 0.0011 | 0.0014 | 0.0038 | 0.0009 | 0.0014 | 0.0038 | 0.0015 | — | 0.0038
15 | 1e5 | 0.0010 | 0.0013 | 0.0038 | 0.0008 | 0.0013 | 0.0038 | 0.0013 | — | 0.0038
15 | 1e6 | 0.0008 | 0.0009 | 0.0038 | 0.0010 | — | 0.0038 | 0.0009 | 0.0010 | 0.0038
Table 2: Test errors for complex Gaussian PR (A: after symmetry breaking; B: before symmetry breaking).

n | Samples | NN-A | kNN | NN-B | WNN-A | kNN | WNN-B | DNN-A | kNN | DNN-B
5 | 2e4 | 0.0016 | 0.0044 | 0.0786 | 0.0087 | — | 0.0882 | 0.0013 | 0.0045 | 0.0699
5 | 5e4 | 0.0039 | — | 0.0718 | 0.0012 | 0.0038 | 0.0669 | 0.0019 | 0.0039 | 0.0697
5 | 1e5 | 0.0021 | — | 0.0473 | 0.0032 | 0.0034 | 0.0942 | 0.0011 | 0.0013 | 0.0854
5 | 1e6 | 0.0006 | — | 0.0642 | 0.0064 | 0.0072 | 0.0453 | 0.0014 | 0.0015 | 0.0731
10 | 2e4 | 0.0079 | 0.0237 | 0.0452 | 0.0065 | 0.0080 | 0.0453 | 0.0239 | — | 0.0380
10 | 5e4 | 0.0066 | — | 0.0428 | 0.0089 | 0.0191 | 0.0419 | 0.0082 | 0.0181 | 0.0400
10 | 1e5 | 0.0097 | 0.0139 | 0.0436 | 0.0058 | — | 0.0431 | 0.0055 | 0.0086 | 0.0453
10 | 1e6 | 0.0136 | 0.0179 | 0.0448 | 0.0085 | 0.0162 | 0.0432 | 0.0118 | — | 0.0399
15 | 2e4 | 0.0282 | 0.0287 | 0.0282 | 0.0143 | — | 0.0296 | 0.0180 | 0.0189 | 0.0277
15 | 5e4 | 0.0192 | 0.0272 | 0.0313 | 0.0126 | — | 0.0308 | 0.0172 | 0.0233 | 0.0313
15 | 1e5 | 0.0188 | 0.0226 | 0.0258 | 0.0269 | — | 0.0295 | 0.0177 | 0.0206 | 0.0274
15 | 1e6 | 0.0136 | 0.0179 | 0.0448 | 0.0184 | — | 0.0395 | 0.0182 | 0.0202 | 0.0283
Table 1 provides test errors for all models trained for real PR, and likewise Table 2 presents test errors for complex PR. All models for the same combination of input dimension and sample size use the same set of data. Blue numbers in the tables indicate the best performing model in each row.
We first note that for the same NN architecture with any dimension-sample combination, symmetry breaking always leads to substantially improved performance. Without symmetry breaking, i.e., as shown in the B columns, the estimation errors are always worse, if not significantly so, than those of the simple baseline kNN model. By contrast, symmetry breaking, as shown in the A columns, always leads to improved performance compared to the baseline. To rule out the possibility that the inferior performance of the B models is due to limited capacity of the NNs, we can compare the performance of NN with that of WNN and DNN, where the latter two have roughly $3\times$ and $50\times$ more parameters than the plain NN, as shown in Table 3.
Table 3: Number of trainable parameters in each model.

Models | Real | Complex
Neural Network | 57,743 | 58,718
Wide Neural Network | 197,391 | 199,326
Deep Neural Network | 2,914,063 | 2,915,998
From Table 1 and Table 2, it is clear that the larger capacities do not lead to improved performance, suggesting that our plain NNs are probably already powerful enough. Moreover, for a fixed learning model, further increasing the number of samples only yields marginally improved errors, indicating that the poor performance cannot be attributed to a lack of samples. These observations together show that the B models are inefficient learners, as they do not explicitly handle the symmetry problem.
Moreover, as the dimension grows, there is a persistent trend that the B models perform incrementally better. This might be counterintuitive at first sight: with the same number of random samples, the coverage of the space becomes sparser as the dimension grows, so a reverse trend could be expected. But as we hinted at the end of Section 1.2, more training samples also produce more wildly behaved target functions, and this problem becomes less severe as the dimension grows because the density of sample points decreases. In fact, when the sample density is extremely low, the other trend, dictated by the lack of samples, could emerge. Nonetheless, here we focus on the data-intensive regime. Overall, the difficulty of approximating highly oscillatory functions is evident.
4.3 Difficulty with symmetries: what happened?
In this section, we investigate several aspects of the neural networks in the hope that some aspect can potentially help overcome the learning difficulties with symmetries. Based on the above discussion, we focus on the NN model. Typically, besides the network size, the performance of neural networks is also strongly affected by the minibatch size, learning rate, and regularization. To analyze the impact of the latter three, we vary each one of these parameters while keeping the others fixed. We work with real PR only and expect the situation for complex PR to be similar. To keep a reasonably fast run time while not hurting performance, we use a number of training samples that appears sufficient according to the above results.
Effect of minibatch size
The minibatch size in stochastic optimization algorithms such as Adam, which we use, is considered to have a substantial impact on the performance of neural networks [Ben12]. To see if this can help with the performance, we sweep the minibatch size over several orders of magnitude and also experiment with different dimensions on the NN model. From the results presented in Fig. 4 (Left), we conclude that varying the minibatch size has a negligible effect on the test error.
Effect of learning rate
The learning rate is the most critical hyperparameter [Jac88], as it guides the change in model weights in response to the estimated error. To examine its effect on the test error, we vary it across six orders of magnitude and retrain the NN model. Again, the magnitude of the test error remains roughly the same across the distinct learning rates, as shown in Fig. 4 (Right).
Effect of regularization
We explore three regularization schemes: $\ell_1$, $\ell_2$, and $\ell_1 + \ell_2$. Table 4 shows the results after retraining the NN model with the different schemes. It appears that no scheme clearly wins out.
Table 4: Test errors of the NN model (without symmetry breaking) under different regularization schemes.

Regularization | n = 5 | n = 10 | n = 15
$\ell_1$ | 0.02848 | 0.00831 | 0.00392
$\ell_2$ | 0.02847 | 0.00830 | 0.00392
$\ell_1 + \ell_2$ | 0.02846 | 0.00830 | 0.00392
These results reinforce our claim that the bad performance of neural network learning without symmetry breaking is due to the intrinsic difficulty of approximating irregular functions, not due to suboptimal choice of neural network architecture or training hyperparameters.
5 Conclusion
In this paper, we explain how symmetries in the forward processes can lead to difficulty—approximating highly oscillatory functions—in solving the resulting inverse problems by an end-to-end deep learning approach. Using the real and complex Gaussian PR problems as examples, we show how effective symmetry breaking can be performed to remove these difficulties in learning, and we verify the effectiveness of our scheme through extensive numerical experiments. In particular, we show through experiments that without carefully dealing with the symmetries, learning can be highly inefficient and the performance can be inferior to simple baseline methods.
We also identify a basic principle for breaking symmetry and phrase the task as finding a connected representative set for the equivalence classes. The task seems highly generic and pertains only to certain topological and geometric structures of the data space. This favorably suggests that our strategy is probably universal and can be adapted to other inverse problems.
Footnotes
 For both blind deconvolution and blind source separation, depending on structures of the inputs, there may be other symmetries that we have not covered here. The symmetries we have discussed tend to be persistent nonetheless.
 For real Gaussian PR, the injectivity threshold is near $2n$. Here, we use $m = 4n$ for both the real and complex versions for simplicity.
References
 Simon Arridge, Peter Maass, Ozan Öktem, and CarolaBibiane Schönlieb, Solving inverse problems using datadriven models, Acta Numerica 28 (2019), 1–174.
 Tamir Bendory, Robert Beinert, and Yonina C. Eldar, Fourier phase retrieval: Uniqueness and algorithms, Compressed Sensing and its Applications, Springer International Publishing, 2017, pp. 55–91.
 Radu Balan, Pete Casazza, and Dan Edidin, On signal reconstruction without phase, Applied and Computational Harmonic Analysis 20 (2006), no. 3, 345–356.
 Yoshua Bengio, Practical recommendations for gradientbased training of deep architectures, Neural networks: Tricks of the trade, Springer, 2012, pp. 437–478.
 David Colton and Rainer Kress, Inverse acoustic and electromagnetic scattering theory, Springer New York, 2013.
 Emmanuel J. Candes, Xiaodong Li, and Mahdi Soltanolkotabi, Phase retrieval via wirtinger flow: Theory and algorithms, IEEE Transactions on Information Theory 61 (2015), no. 4, 1985–2007.
 Pierre Comon, Handbook of blind source separation: Independent component analysis and applications, Academic Press, 2010.
 Emmanuel J. Candès, Thomas Strohmer, and Vladislav Voroninski, PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming, Communications on Pure and Applied Mathematics 66 (2012), no. 8, 1241–1274.
 Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang, Learning a deep convolutional network for image super-resolution, European Conference on Computer Vision, Springer, 2014, pp. 184–199.
 Dara Entekhabi, Hajime Nakamura, and Eni G. Njoku, Solving the inverse problem for soil moisture and temperature profiles by sequential assimilation of multifrequency remotely sensed observations, IEEE Transactions on Geoscience and Remote Sensing 32 (1994), no. 2, 438–448.
 J. R. Fienup, Phase retrieval algorithms: a comparison, Applied Optics 21 (1982), no. 15, 2758.
 Rong Ge, Provable algorithms for machine learning problems, Ph.D. thesis, Princeton University, 2013.
 Rafael C. Gonzalez and Richard E. Woods, Digital image processing (4th edition), Pearson, 2017.
 Gabor T. Herman, Fundamentals of computerized tomography, Springer London, 2009.
 Ryoichi Horisaki, Ryosuke Takagi, and Jun Tanida, Learning-based imaging through scattering media, Optics Express 24 (2016), no. 13, 13738.
 Richard Hartley and Andrew Zisserman, Multiple view geometry in computer vision, Cambridge University Press, 2003.
 Jingwei Huang, Yichao Zhou, Thomas Funkhouser, and Leonidas J. Guibas, FrameNet: Learning local canonical frames of 3D surfaces from a single RGB image, Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 8638–8647.
 Robert A. Jacobs, Increased rates of convergence through learning rate adaptation, Neural Networks 1 (1988), no. 4, 295–307.
 Diederik P Kingma and Jimmy Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980 (2014).
 John L Kelley, General topology, Courier Dover Publications, 2017.
 Andreas Kirsch, An introduction to the mathematical theory of inverse problems, Springer New York, 2011.
 Chen Kong and Simon Lucey, Deep non-rigid structure from motion, Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 1558–1567.
 Edmund Y. Lam and Joseph W. Goodman, Iterative statistical approach to blind image deconvolution, Journal of the Optical Society of America A 17 (2000), no. 7, 1177.
 Alice Lucas, Michael Iliadis, Rafael Molina, and Aggelos K. Katsaggelos, Using deep neural networks for inverse problems in imaging: Beyond analytical methods, IEEE Signal Processing Magazine 35 (2018), no. 1, 20–36.
 Yunzhe Li, Yujia Xue, and Lei Tian, Deep speckle correlation: a deep learning approach toward scalable imaging through scattering media, Optica 5 (2018), no. 10, 1181.
 Ali Mousavi and Richard G Baraniuk, Learning to invert: Signal recovery via deep convolutional networks, 2017 IEEE international conference on acoustics, speech and signal processing (ICASSP), IEEE, 2017, pp. 2272–2276.
 Michael T. McCann, Kyong Hwan Jin, and Michael Unser, Convolutional neural networks for inverse problems in imaging: A review, IEEE Signal Processing Magazine 34 (2017), no. 6, 85–95.
 Tomaso Poggio, Hrushikesh Mhaskar, Lorenzo Rosasco, Brando Miranda, and Qianli Liao, Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review, International Journal of Automation and Computing 14 (2017), no. 5, 503–519.
 Amelia Perry, Alexander S. Wein, Afonso S. Bandeira, and Ankur Moitra, Message-passing algorithms for synchronization problems over compact groups, Communications on Pure and Applied Mathematics 71 (2018), no. 11, 2275–2322.
 Walter Rudin, Real and complex analysis, Tata McGraw-Hill Education, 2006.
 Yoav Shechtman, Yonina C. Eldar, Oren Cohen, Henry Nicholas Chapman, Jianwei Miao, and Mordechai Segev, Phase retrieval with application to optical imaging: a contemporary overview, IEEE Signal Processing Magazine 32 (2015), no. 3, 87–109.
 Ayan Sinha, Justin Lee, Shuai Li, and George Barbastathis, Lensless computational imaging through deep learning, Optica 4 (2017), no. 9, 1117.
 Ju Sun, Qing Qu, and John Wright, A geometric analysis of phase retrieval, Foundations of Computational Mathematics 18 (2017), no. 5, 1131–1198.
 T. L. Tonellot and M. K. Broadhead, Sparse seismic deconvolution by method of orthogonal matching pursuit, 72nd EAGE Conference and Exhibition incorporating SPE EUROPEC 2010, EAGE Publications BV, June 2010.
 Xin Tao, Hongyun Gao, Xiaoyong Shen, Jue Wang, and Jiaya Jia, Scale-recurrent network for deep image deblurring, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, June 2018.
 Chaoyang Wang, Chen-Hsuan Lin, and Simon Lucey, Deep NRSfM++: Towards 3D reconstruction in the wild, arXiv:2001.10090 (2020).
 Li Xu, Jimmy SJ Ren, Ce Liu, and Jiaya Jia, Deep convolutional neural network for image deconvolution, Advances in neural information processing systems, 2014, pp. 1790–1798.
 Junyuan Xie, Linli Xu, and Enhong Chen, Image denoising and inpainting with deep neural networks, Advances in neural information processing systems, 2012, pp. 341–349.