On denoising modulo 1 samples of a function
Abstract
Consider an unknown smooth function f : [0, 1] → ℝ, and say we are given noisy mod 1 samples of f, i.e., y_i = (f(x_i) + η_i) mod 1 for i = 1, …, n, where η_i denotes noise. Given the samples (y_i)_{i=1}^n, our goal is to recover smooth, robust estimates of the clean samples f(x_i) mod 1. We formulate a natural approach for solving this problem, which works with angular embeddings of the noisy mod 1 samples over the unit complex circle, inspired by the angular synchronization framework. Our approach amounts to solving a quadratically constrained quadratic program (QCQP) which is NP-hard in its basic form; we therefore consider its relaxation, which is a trust region subproblem and hence solvable efficiently. We demonstrate its robustness to noise via extensive numerical simulations on several synthetic examples, along with a detailed theoretical analysis. To the best of our knowledge, we provide the first algorithm for denoising mod 1 samples of a smooth function that comes with robustness guarantees.
1 Introduction
The problem of recovering a function from noisy mod 1 samples of its values has received recent interest both in the literature and in the media (MIT News [15]). This recent surge of interest was motivated by a new family of analog-to-digital converters (ADCs). Traditional ADCs have voltage limits in place that cut off the signal at the maximum allowed voltage, whenever it exceeds that limit. In very recent work, the authors of [2] introduced a technique, denoted as unlimited sampling, that is able to accurately digitize signals whose voltage peaks are much larger than the voltage limits of an ADC. Their work was inspired by a new type of experimental ADC with a modulo architecture (the so-called self-reset ADC, which has already been prototyped) that captures not the voltage of a signal but its modulo, by having the voltage reset itself whenever it crosses a pre-specified threshold. In other words, the ADC captures the remainder obtained when the voltage of an analog signal is divided by the maximum voltage of the ADC.
The multidimensional version of this problem has a long history in the geosciences literature, often dubbed the phase unwrapping problem. Phase unwrapping refers to the process of recovering unambiguous phase values from phase data that are measured modulo 2π rad (wrapped data). Instances of this problem arise in many applications, with an initial spike of interest in the early 1990s spurred by the synthetic aperture radar interferometry (InSAR) technology for determining the surface topography and deformation of the Earth, which motivated the development of two-dimensional phase unwrapping algorithms. Most of the commonly used phase unwrapping algorithms relate the phase values by first differentiating the phase field and subsequently reintegrating, adding back the missing integral cycles with the end goal of obtaining a more continuous result [22]. Other approaches explored in the literature include combinations of least-squares techniques [17], methods exploiting measures of data integrity to guide the unwrapping process [4], and several techniques employing neural networks or genetic algorithms [6]. The three-dimensional version of the problem [11] has received relatively little attention, a recent line of work in this direction being [16].
As a word of caution to the reader, we note that this problem is different from the celebrated phase retrieval problem, a classical problem in optics that has attracted a surge of interest in recent years [5, 12], which attempts to recover an unknown signal from the magnitude (intensity) of its Fourier transform. Just like phase retrieval, the recovery of a function from mod 1 measurements is, by its very nature, an ill-posed problem, and one needs to incorporate prior structure on the signal, which in our case is smoothness of f (in an analogous way to how enforcing sparsity renders the phase retrieval problem well-posed). While there have been a variety of approaches to phase retrieval, recent progress in compressed sensing and convex optimization-based signal processing has inspired new potential research directions. The approach we pursue in this paper is inspired by developments in the trust region subproblem [1] and group synchronization [20, 7] literatures.
At a high level, one would like to recover denoised samples (i.e., smooth, robust estimates) of f from its noisy mod 1 versions. A natural mode of attack for this problem is the following two-stage approach. In the first stage, one recovers denoised mod 1 samples of f, and then in the second (unwrapping) stage, one uses these samples to recover the original real-valued samples of f. In this paper, we mainly focus on the first stage, which is a challenging problem in itself. To the best of our knowledge, we provide the first algorithm for denoising mod 1 samples of a function that comes with robustness guarantees. In particular, we make the following contributions.

We formulate a general framework for denoising the mod 1 samples of f; it involves mapping the noisy mod 1 values (lying in [0, 1)) to the angular domain (i.e., to points on the unit circle in ℂ), and leads to a QCQP formulation which is NP-hard. We show a relaxation of this QCQP which is a trust region subproblem, and hence solvable efficiently.

We test the above method on several synthetic examples, which demonstrate that it performs well for reasonably high noise levels. To complete the picture, we also implement the second stage with a simple recovery method for recovering the (real-valued) samples of f, and show that it performs surprisingly well via extensive simulations.
Outline of paper.
Section 2 formulates the problem formally and introduces notation. Section 3 sets up the mod 1 denoising problem as an (NP-hard) smoothness-regularized least-squares problem in the angular domain. Section 4 describes its relaxation to a trust-region subproblem, and some possible approaches for recovering the samples of f, along with our complete two-stage algorithm. Section 5 contains approximation guarantees for our algorithm for recovering the denoised mod 1 samples of f. Section 6 contains numerical experiments on different synthetic examples. Section 7 summarizes our results and contains a discussion of possible future research directions. Finally, the appendix contains supplementary material related to the proofs and additional numerical experiments.
2 Problem setup
Consider a smooth, unknown function f : [0, 1] → ℝ, and a uniform grid on [0, 1],
(2.1) x_i = i/n; i = 1, …, n.
We assume that we are given mod 1 samples of f on the above grid. Note that writing each sample as
(2.2) f(x_i) = q_i + r_i; i = 1, …, n,
with q_i ∈ ℤ and r_i ∈ [0, 1), we have f(x_i) mod 1 = r_i. The modulus is fixed to 1 without loss of generality, since f mod s = s((f/s) mod 1) for any s > 0. This is easily seen by writing f = qs + r, with q ∈ ℤ and r ∈ [0, s), and observing that (f/s) mod 1 = r/s. In particular, we assume that the mod 1 samples are noisy, and consider the following noise models.

Arbitrary bounded noise
(2.3) y_i = (f(x_i) + δ_i) mod 1; i = 1, …, n, where the δ_i's denote arbitrary bounded perturbations.
Gaussian noise
(2.4) y_i = (f(x_i) + η_i) mod 1; i = 1, …, n, where η_i ~ N(0, σ²) i.i.d.
We will denote f_i := f(x_i) for convenience. Our aim is to recover smooth, robust estimates (up to a global shift) of the original samples (f_i)_{i=1}^n from the measurements (y_i)_{i=1}^n. We will assume f to be Hölder continuous, meaning that for constants M > 0 and α ∈ (0, 1],
(2.5) |f(x) − f(y)| ≤ M |x − y|^α; ∀ x, y ∈ [0, 1].
The above assumption is quite general, and reduces to Lipschitz continuity when α = 1.
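Returning to the choice of modulus, the rescaling identity f mod s = s((f/s) mod 1) used above to fix the modulus to 1 is easy to check numerically. The snippet below is a minimal sketch; the sample values and the threshold s are arbitrary illustrative choices, not quantities from the paper.

```python
import numpy as np

# Sanity check: sampling modulo an arbitrary threshold s reduces to mod 1,
# since  f mod s = s * ((f/s) mod 1).  Hence fixing the modulus to 1 loses
# no generality.
rng = np.random.default_rng(0)
f_vals = rng.normal(scale=10.0, size=1000)   # arbitrary real "voltages"
s = 2.7                                      # an arbitrary ADC threshold
lhs = np.mod(f_vals, s)
rhs = s * np.mod(f_vals / s, 1.0)
assert np.allclose(lhs, rhs)
```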
Notation.
Scalars and matrices are denoted by lowercase and uppercase symbols respectively, while vectors are denoted by lowercase boldface symbols. Sets are denoted by calligraphic symbols (e.g., E), with the exception of [n] := {1, …, n} for n ≥ 1. The imaginary unit is denoted by ι := √−1. The notation introduced throughout Sections 3 and 4 is summarized in Table 1.
3 Smoothness regularized least squares in the angular domain
Our algorithm essentially works in two stages.

Denoising stage. Our goal here is to denoise the mod 1 samples, which is also the main focus of this paper. In a nutshell, we map the given noisy mod 1 samples to points on the unit complex circle, and solve a smoothness-regularized, constrained least-squares problem. The solution to this problem, followed by a simple post-processing step, gives us denoised mod 1 samples of f.

Unwrapping stage. The second stage takes as input the above denoised mod 1 samples, and recovers an estimate of the original real-valued samples of f (up to a global shift).
We start the denoising stage by mapping the mod 1 samples to the angular domain, with
(3.1) h_i := exp(ι2π(f(x_i) mod 1)) = exp(ι2πf(x_i)), z_i := exp(ι2πy_i); i = 1, …, n,
denoting the respective representations of the clean mod 1 and noisy mod 1 samples on the unit circle in ℂ, where the first equality is due to the fact that exp(ι2πq_i) = 1, with q_i ∈ ℤ as in (2.2).
The choice of representing the mod 1 samples as in (3.1) is very natural, for the following reason. For points x_i, x_j sufficiently close, the samples f(x_i), f(x_j) will also be close (by the Hölder continuity of f). While the corresponding wrapped samples f(x_i) mod 1, f(x_j) mod 1 can still be far apart, the complex numbers h_i and h_j will necessarily be close to each other (indeed, |h_i − h_j| = 2|sin(π(f(x_i) − f(x_j)))| ≤ 2π|f(x_i) − f(x_j)|, since |sin(x)| ≤ |x|). This is illustrated in the toy example in Figure 1.
Figure 2 is the analogue of Figure 1, but for a noisy instance of the problem, making the point that the angular representation facilitates the denoising process. For points x_i, x_j sufficiently close, the corresponding samples f(x_i), f(x_j) will also be close in the real domain, by the Hölder continuity of f. When the measurements get perturbed by noise, the distance in the real domain between the noisy mod 1 samples can greatly increase and become close to 1 (in this example, the point B gets perturbed by noise, hits the floor, and "resets" itself). However, in the angular embedding space, the two points still remain close to each other, as depicted in Figure 1(c).
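The effect described above can be reproduced in a few lines. The sketch below is illustrative (the two sample values 0.99 and 0.01 are arbitrary choices mimicking a wrap-around event): two wrapped samples that are almost 1 apart in [0, 1) remain close once embedded on the unit circle, and the chord length is controlled by the wrap-around distance.

```python
import numpy as np

# Two wrapped samples that look far apart in [0, 1) but are close on the
# circle: the embedding exp(i*2*pi*y) identifies 0 with 1, so a sample that
# "resets" across the threshold stays close to its neighbour.
a, b = 0.99, 0.01
flat_dist = abs(a - b)                          # 0.98: looks far in [0, 1)
wrap_dist = min(abs(a - b), 1 - abs(a - b))     # 0.02: actually close
ha, hb = np.exp(2j * np.pi * a), np.exp(2j * np.pi * b)
circle_dist = abs(ha - hb)                      # chord = 2*|sin(pi*(a - b))|
assert np.isclose(circle_dist, 2 * abs(np.sin(np.pi * (a - b))))
assert circle_dist <= 2 * np.pi * wrap_dist     # |sin(x)| <= |x|
```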
Figure 10 is the analogue of Figure 3, but for the Gaussian noise model. The plots in the top row provide intuition for the interplay between the change in the observed noisy f mod 1 values and the change in the noisy quotient. The bottom three rows show the clean, noisy, and denoised (via QCQP) mod 1 samples, for increasing levels of noise.
Consider the graph G = (V, E) with V = [n], where index i corresponds to the point x_i on our grid, and E = {(i, j) : 1 ≤ i < j ≤ n, |i − j| ≤ k} denotes the set of edges, for a suitable parameter k ≥ 1. A natural approach for recovering smooth estimates of the h_i's would be to solve the following optimization problem
(3.2) min_{g_1, …, g_n ∈ ℂ; |g_i| = 1, i ∈ [n]}  Σ_{i=1}^n |g_i − z_i|² + λ Σ_{(i,j)∈E} |g_i − g_j|².
Here, λ ≥ 0 is a regularization parameter which, along with k, controls the smoothness of the solution. Let us denote by L ∈ ℝ^{n×n} the Laplacian matrix associated with G, defined as
(3.3) L_{ij} = deg(i), if i = j;  −1, if (i, j) ∈ E;  0, otherwise,
where deg(i) denotes the degree of vertex i.
Denoting g = (g_1, …, g_n)^T ∈ ℂⁿ, the second term in (3.2) can be simplified to
(3.4) λ Σ_{(i,j)∈E} |g_i − g_j|² = λ g* L g.
Next, denoting z = (z_1, …, z_n)^T ∈ ℂⁿ, we can further simplify the first term in (3.2) as follows.
(3.5) Σ_{i=1}^n |g_i − z_i|² = Σ_{i=1}^n (|g_i|² + |z_i|² − 2 Re(z̄_i g_i))
(3.6) = 2n − 2 Re(z* g), since |g_i| = |z_i| = 1 for each i ∈ [n].
This gives us the following equivalent form of (3.2):
(3.7) min_{g ∈ ℂⁿ; |g_i| = 1, i ∈ [n]}  λ g* L g − 2 Re(z* g).
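The algebraic reduction leading to (3.7) can be verified numerically. The sketch below makes some illustrative assumptions (a k-hop grid graph and arbitrary unit-modulus vectors standing in for the actual z and g) and checks that, on the torus |g_i| = 1, the objective of (3.2) equals 2n − 2 Re(z*g) + λ g*Lg up to machine precision.

```python
import numpy as np

# Verify: sum_i |g_i - z_i|^2 + lam * sum_{(i,j) in E} |g_i - g_j|^2
#       = 2n - 2*Re(z^* g) + lam * g^* L g,   whenever |g_i| = |z_i| = 1.
rng = np.random.default_rng(1)
n, k, lam = 50, 2, 0.5
edges = [(i, j) for i in range(n) for j in range(i + 1, min(i + k, n - 1) + 1)]
L = np.zeros((n, n))                      # graph Laplacian of the k-hop graph
for i, j in edges:
    L[i, i] += 1.0; L[j, j] += 1.0
    L[i, j] -= 1.0; L[j, i] -= 1.0
z = np.exp(2j * np.pi * rng.uniform(size=n))   # placeholder noisy embedding
g = np.exp(2j * np.pi * rng.uniform(size=n))   # any unit-modulus candidate
direct = (np.abs(g - z) ** 2).sum() \
    + lam * sum(np.abs(g[i] - g[j]) ** 2 for i, j in edges)
quad = 2 * n - 2 * np.real(np.vdot(z, g)) + lam * np.real(np.vdot(g, L @ g))
assert np.isclose(direct, quad)
```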
4 A trust region based relaxation for denoising modulo 1 samples
Unfortunately, the nature of the constraints renders (3.7) NP-hard [3, 13]. Hence, we relax these constraints to the single constraint that the points collectively lie on a sphere of radius √n, resulting in the following optimization problem
(4.1) min_{g ∈ ℂⁿ; Σ_{i=1}^n |g_i|² = n}  λ g* L g − 2 Re(z* g).
It is straightforward to reformulate (4.1) in terms of real variables. We do so by introducing the following notation for the real-valued versions of the variables h (clean signal), z (noisy signal), and g (free variable),
(4.2) h̄ = [Re(h)^T Im(h)^T]^T, z̄ = [Re(z)^T Im(z)^T]^T, ḡ = [Re(g)^T Im(g)^T]^T ∈ ℝ^{2n},
and the corresponding block-diagonal Laplacian
(4.3) H = [L 0; 0 L] ∈ ℝ^{2n×2n}.
In light of this, the optimization problem (4.1) can be equivalently formulated as
(4.4) min_{ḡ ∈ ℝ^{2n}; ‖ḡ‖₂² = n}  λ ḡ^T H ḡ − 2 z̄^T ḡ,
which is formally shown in the appendix for completeness. Let us note that the Laplacian matrix L is positive semidefinite (p.s.d.), with its smallest eigenvalue equal to 0 with multiplicity 1 (since G is connected). Therefore, H is also p.s.d., with smallest eigenvalue 0 with multiplicity 2.
(4.4) is actually an instance of the so-called trust region subproblem (TRS) with an equality constraint (which we denote by (P2) from now on), where one minimizes a general quadratic function (not necessarily convex) subject to a sphere constraint. For completeness, we also mention the closely related trust region subproblem with an inequality constraint (denoted by (P1)), where we have a ball constraint. There exist several algorithms that efficiently solve (P1) (cf. [21, 14, 19, 18, 9, 1]), and also some which explicitly solve (P2) (cf. [10, 1]). In particular, we note the recent work in [1], which showed that trust region subproblems can be solved to high accuracy via a single generalized eigenvalue problem. In our experiments, we employ their algorithm for solving (4.4).
Rather surprisingly, one can fully characterize the solutions to both (P1) and (P2) (discussed in detail in the appendix for completeness). The following Lemma 1 characterizes the solutions of (4.4); it follows directly from [21, Lemmas 2.4, 2.8] (see also [10, Lemma 1]).
Lemma 1.
ḡ* ∈ ℝ^{2n} is a solution to (4.4) iff ‖ḡ*‖₂² = n and there exists μ* ∈ ℝ such that (a) (λH + μ*I)ḡ* = z̄ and (b) λH + μ*I ⪰ 0. Moreover, if λH + μ*I ≻ 0, then the solution is unique.
Let ν_1 ≤ ν_2 ≤ ⋯ ≤ ν_{2n}, with q_1, q_2, …, q_{2n}, denote the eigenvalues, respectively eigenvectors, of λH. Note that ν_1 = ν_2 = 0 and ν_3 > 0 (for λ > 0), since G is connected. Let us denote the null space of λH by N(λH), so N(λH) = span{q_1, q_2}. We can now analyze the solution to (4.4) with the help of Lemma 1, by considering the following two cases.
Case 1: z̄ is not orthogonal to N(λH). The solution is given by
(4.5) ḡ* = (λH + μ*I)^{−1} z̄,
for a unique μ* ∈ (0, ∞) satisfying ‖ḡ*‖₂² = n. Indeed, denoting φ(μ) := ‖(λH + μI)^{−1} z̄‖₂², we can see that φ has a pole at μ = 0 and decreases monotonically to 0 as μ → ∞. Hence, there exists a unique μ* ∈ (0, ∞) such that φ(μ*) = n. The solution will be unique by Lemma 1, since λH + μ*I ≻ 0 holds.
Case 2: z̄ ⊥ N(λH). This second scenario requires additional attention. To begin with, note that
(4.6) φ(0) = ‖(λH)† z̄‖₂²
is now well defined, i.e., μ = 0 is not a pole of φ anymore. If φ(0) > n, then, as before, we can again find a unique μ* ∈ (0, ∞) satisfying φ(μ*) = n. The solution is given by ḡ* = (λH + μ*I)^{−1} z̄ and is unique since λH + μ*I ≻ 0 (by Lemma 1).
In case φ(0) ≤ n, we set μ* = 0 and define our solution to be of the form
(4.7) ḡ* = (λH)† z̄ + α q_1 + β q_2,
where † denotes the pseudo-inverse and α, β ∈ ℝ. In particular, for any given α, β, we obtain ḡ* as a solution to (4.4), with α, β being the solutions to the equation
(4.8) α² + β² = n − ‖(λH)† z̄‖₂².
Hence the solution is not unique if φ(0) < n.
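For intuition, the case analysis above suggests a simple (if not the most efficient) solver for the sphere-constrained problem; the method of [1] is what we actually use in our experiments. The sketch below handles the generic situation of Case 1, with A playing the role of λH and b of z̄: it solves min x^T A x − 2 b^T x subject to ‖x‖₂ = r by bisection on the monotone secular function φ(μ) = ‖(A + μI)^{−1}b‖₂², assuming b is not orthogonal to the eigenspace of the smallest eigenvalue of A.

```python
import numpy as np

# Sphere-constrained TRS via the secular equation (generic case).  The
# optimality conditions give x(mu) = (A + mu*I)^{-1} b for the unique
# mu > -lambda_min(A) with ||x(mu)||_2 = r; ||x(mu)|| decreases monotonically
# on that interval, so a bisection suffices.
def trs_sphere(A, b, r, tol=1e-12):
    w, V = np.linalg.eigh(A)                  # A = V diag(w) V^T
    c = V.T @ b                               # b in the eigenbasis of A
    norm_x = lambda mu: np.linalg.norm(c / (w + mu))
    lo = -w[0] + 1e-10                        # just right of the pole at -w[0]
    hi = -w[0] + 1.0
    while norm_x(hi) > r:                     # grow hi until r is bracketed
        hi = -w[0] + 2.0 * (hi + w[0])
    while hi - lo > tol * max(1.0, abs(hi)):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if norm_x(mid) > r else (lo, mid)
    mu = 0.5 * (lo + hi)
    return V @ (c / (w + mu))

rng = np.random.default_rng(2)
A = rng.normal(size=(6, 6)); A = 0.5 * (A + A.T)   # symmetric, maybe indefinite
b = rng.normal(size=6)
r = 1.0
x_star = trs_sphere(A, b, r)
assert np.isclose(np.linalg.norm(x_star), r, atol=1e-6)
```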
4.1 Recovering the denoised mod 1 samples
The solution to (4.4) is a vector ḡ* ∈ ℝ^{2n}. Let g* ∈ ℂⁿ be the complex representation of ḡ* as per (4.2), so that g*_i = ḡ*_i + ι ḡ*_{n+i}. Denoting g*_i to be the i-th component of g*, note that |g*_i| is not necessarily equal to one. On the other hand, recall that |h_i| = 1 for the ground truth h. We obtain our final estimate ĝ of h by projecting g* onto the unit complex circle,
(4.9) ĝ_i = g*_i / |g*_i|; i = 1, …, n.
We denote by ŷ_i ∈ [0, 1) the denoised mod 1 samples corresponding to ĝ, i.e., ĝ_i = exp(ι2πŷ_i). In order to measure the distance between ŷ_i and r_i = f(x_i) mod 1, we will use the so-called wrap-around distance on [0, 1), denoted by d_w, where
(4.10) d_w(s, t) := min(|s − t|, 1 − |s − t|),
for s, t ∈ [0, 1). We will now show that if g*_i is sufficiently close to h_i for each i, then each d_w(ŷ_i, r_i) will be correspondingly small. This is stated precisely in the following lemma, its proof being deferred to the appendix.
Lemma 2.
For ε ≥ 0, let |g*_i − h_i| ≤ ε hold for each i ∈ [n]. Then, for each i ∈ [n],
(4.11) 
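The projection step (4.9) and the wrap-around distance (4.10) are straightforward to implement; the sketch below (with illustrative sample values) recovers mod 1 values from non-unit-modulus complex estimates and evaluates d_w.

```python
import numpy as np

# Wrap-around distance on [0, 1), cf. (4.10).
def wrap_dist(s, t):
    d = np.abs(s - t) % 1.0
    return np.minimum(d, 1.0 - d)

# Projection step, cf. (4.9): normalize to the unit circle, then read off the
# denoised mod 1 value in [0, 1) from the angle.
def mod1_from_complex(g):
    return np.mod(np.angle(g / np.abs(g)) / (2 * np.pi), 1.0)

y = np.array([0.02, 0.5, 0.98])
g = 1.3 * np.exp(2j * np.pi * y)   # non-unit modulus, like a raw TRS output
assert np.allclose(mod1_from_complex(g), y)
assert np.isclose(wrap_dist(0.98, 0.02), 0.04)
```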
4.2 Unwrapping stage and main algorithm
Having recovered the denoised mod 1 samples ŷ_i, for i ∈ [n], we now move on to the next stage of our method, where the goal is to recover the samples f(x_i), for which we discuss two possible approaches.
1. Quotient tracker (QT) method. The first approach for unwrapping the mod 1 samples is perhaps the most natural one; we outline it below for the setting where G is a line graph, i.e., E = {(i, i+1) : i ∈ [n−1]}. It is based on the idea that, provided the denoised mod 1 samples are very close estimates of the original clean mod 1 samples, we can sequentially find the quotient terms q̂_i by checking whether |ŷ_{i+1} − ŷ_i| > τ, for a suitable threshold parameter τ ∈ (0, 1). More formally, by initializing q̂_1 = 0, consider the rule
(4.12) q̂_{i+1} = q̂_i + 1, if ŷ_{i+1} − ŷ_i < −τ; q̂_{i+1} = q̂_i − 1, if ŷ_{i+1} − ŷ_i > τ; q̂_{i+1} = q̂_i, otherwise.
Clearly, if ŷ_i = f(x_i) mod 1 for each i ∈ [n], then for n sufficiently large, the procedure (4.12) will result in correct recovery of the quotients. However, it is also obviously sensitive to noise, and hence would not be a viable option when the noise level is high.
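A minimal sketch of the QT rule (4.12) on a line graph is given below. The test function f(x) = 4x² and the threshold value are illustrative assumptions, and the input consists of clean mod 1 samples, i.e., the noise-free regime in which QT succeeds.

```python
import numpy as np

# Quotient tracker on a line graph: scan the mod 1 samples left to right and
# increment/decrement the running integer quotient whenever consecutive
# residues jump by more than the threshold.
def quotient_tracker(y_mod, thresh=0.5):
    q = np.zeros(len(y_mod), dtype=int)
    for i in range(1, len(y_mod)):
        d = y_mod[i] - y_mod[i - 1]
        q[i] = q[i - 1] + (d < -thresh) - (d > thresh)
    return y_mod + q          # unwrapped samples, up to the unknown q[0]

x = np.linspace(0.0, 1.0, 200)
f = 4.0 * x ** 2              # smooth test function (illustrative)
y_mod = np.mod(f, 1.0)        # clean mod 1 samples
f_hat = quotient_tracker(y_mod)
assert np.allclose(f_hat, f)  # exact recovery in the noise-free case
```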
2. Ordinary least squares (OLS) based method. A robust alternative to the aforementioned approach is based on directly recovering the samples of f via a simple least-squares problem. Recall that in the noise-free case, f(x_i) = q_i + r_i, with q_i ∈ ℤ and r_i ∈ [0, 1), and consider, for a pair of nearby points x_i, x_j, the difference f(x_i) − f(x_j) = (q_i − q_j) + (r_i − r_j). The OLS formulation we solve stems from the observation that, if the remainders change slowly along the grid, then the quotient difference q_i − q_j can be inferred from r_i − r_j by thresholding. This intuition can be easily gained from the top plots of Figure 3, especially the first panel, which pertains to the noisy case (but in the low-noise regime), and plots q_{i+1} − q_i versus y_{i+1} − y_i, where q_i denotes the noisy quotient of sample i, and y_i the noisy remainder. For small enough |y_{i+1} − y_i|, we observe that q_{i+1} = q_i. Whenever y_{i+1} − y_i ≈ 1, we see that q_{i+1} = q_i − 1, while y_{i+1} − y_i ≈ −1 indicates that q_{i+1} = q_i + 1. Throughout all our experiments we keep the threshold parameter fixed. In Figure 4 we also plot the true quotient q_i, which can be observed to be piecewise constant, in agreement with our above intuition.
For a graph G = (V, E) with E = {(i, j) : 1 ≤ i < j ≤ n, |i − j| ≤ k}, and for a suitable threshold parameter τ ∈ (0, 1), this intuition leads us to estimate the function values f(x_i) as the least-squares solution to an overdetermined system of linear equations built from (2.2), without explicitly involving the quotients q_i. To this end, we consider the linear system of equations for the function differences
(4.13) f̂_i − f̂_j = ŷ_i − ŷ_j + ν_{ij}; (i, j) ∈ E, where ν_{ij} = 1 if ŷ_i − ŷ_j < −τ, ν_{ij} = −1 if ŷ_i − ŷ_j > τ, and ν_{ij} = 0 otherwise,
and solve it in the least-squares sense. (4.13) is analogous to (4.12), except that we now recover f̂_1, …, f̂_n collectively as the least-squares solution of (4.13). Denoting by T the least-squares matrix associated with the overdetermined linear system (4.13), and letting w collect the right-hand sides, the system of equations can be written as T f̂ = w. Note that the matrix T is sparse, with only two nonzero entries per row, and that the all-ones vector 1 lies in the null space of T, i.e., T1 = 0. Therefore, we will find the minimum-norm least-squares solution to (4.13), and recover f̂ only up to a global shift.
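The OLS stage can be sketched as follows. The choices below are illustrative assumptions: a k-hop edge set, a smooth test function, and the use of rounding for the wrap correction (which coincides with the threshold rule at τ = 1/2); clean mod 1 samples are fed in so the recovery is exact up to the global shift.

```python
import numpy as np

# OLS unwrapping: for each edge (i, j), estimate f(x_i) - f(x_j) by the mod 1
# difference wrapped to (-1/2, 1/2], then solve T f = w in the least-squares
# sense.  Since the all-ones vector is in the null space of T, the min-norm
# solution recovers f only up to a global shift.
def ols_unwrap(y_mod, k=2):
    n = len(y_mod)
    edges = [(i, j) for i in range(n) for j in range(i + 1, min(i + k, n - 1) + 1)]
    T = np.zeros((len(edges), n))
    w = np.zeros(len(edges))
    for row, (i, j) in enumerate(edges):
        T[row, i], T[row, j] = 1.0, -1.0
        d = y_mod[i] - y_mod[j]
        w[row] = d - np.round(d)          # wrap correction (tau = 1/2)
    f_hat, *_ = np.linalg.lstsq(T, w, rcond=None)   # min-norm LS solution
    return f_hat

x = np.linspace(0.0, 1.0, 100)
f = 3.0 * np.cos(2 * np.pi * x)           # smooth test function (illustrative)
f_hat = ols_unwrap(np.mod(f, 1.0))
shift = np.mean(f - f_hat)                # mod out the global shift
assert np.allclose(f_hat + shift, f, atol=1e-8)
```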
Algorithm 1 summarizes our two-stage method for recovering the samples of f (up to a global shift). Figure 3 shows additional noisy instances of the Uniform noise model. The scatter plots on the first row show that, as the noise level increases, the rule (4.12) will produce more and more errors in the right-hand side of (4.13). The remaining plots show the corresponding f mod 1 signal (clean, noisy, and denoised via QCQP) for three levels of noise.
5 Analysis for the arbitrary bounded noise model
We now provide some approximation guarantees for the solution ḡ* to (4.4) under the arbitrary bounded noise model (2.3). In particular, we consider a slightly modified version of this model, assuming that
(5.1) (1/√n) ‖z̄ − h̄‖₂ ≤ ε
holds true for some ε ∈ [0, 2]. This is reasonable, since (1/√n)‖z̄ − h̄‖₂ ≤ 2 holds in general by the triangle inequality (as |z_i| = |h_i| = 1). Also, note for (2.3) that |z_i − h_i| = 2|sin(πδ_i)| ≤ 2π|δ_i|, and thus (1/√n)‖z̄ − h̄‖₂ ≤ 2π max_i |δ_i|. Hence, while a small enough uniform bound on the δ_i's would of course imply (5.1), clearly (5.1) can also hold even if some of the δ_i's are large.
Theorem 1.
Under the above notation and assumptions, consider the arbitrary bounded noise model in (2.3), with the noise satisfying (5.1) for some ε ∈ [0, 2]. Let ḡ* denote a solution to (4.4), and let N(λH) denote the null space of λH.
The following useful corollary of Theorem 1 is a direct consequence of the fact that ḡ^T H ḡ ≥ 0 for all ḡ ∈ ℝ^{2n}, since H is positive semidefinite.
Corollary 1.
Before presenting the proof of Theorem 1, some remarks are in order.
1. Theorem 1 gives us a lower bound on the correlation between ḡ* and the ground truth h̄; clearly, this correlation is at most n, since ‖ḡ*‖₂ = ‖h̄‖₂ = √n. Note that the correlation improves as the noise term ε decreases, as one would expect. The smoothness term effectively arises on account of the Hölder continuity of f, and upper bounds a Laplacian quadratic form of the clean signal (made clear in Lemma 4). Hence, as the number of samples n increases, this term goes to zero (for fixed k). Also note that the lower bound on the correlation readily implies the norm bound ‖ḡ* − h̄‖₂² = 2n − 2⟨ḡ*, h̄⟩.
2. The term z̄^T(λH)z̄ represents the smoothness of the observed noisy samples. While an increasing amount of noise would usually render the observations more and more non-smooth, and thus typically increase this term, note that this would be met by a corresponding increase in ε, and hence the lower bound on the correlation would not necessarily improve.
3. It is easy to verify that (5.1) implies the correlation bound ⟨z̄, h̄⟩ ≥ n − nε²/2. Thus, for small ε, the raw noisy embedding z̄ itself satisfies a correlation bound which is better than the bound in Corollary 1 by an additive term. However, the solution ḡ* to (4.4) is a smooth estimate of h̄ (and hence more interpretable), while z̄ is typically highly non-smooth.
Proof of Theorem 1.
The proof of Theorem 1 relies heavily on Lemma 3, outlined below, whose proof is deferred to the appendix.
Lemma 3.
We now upper bound the smoothness term in (5.5) using the Hölder continuity of f. This is formally shown below in the form of Lemma 4; its proof is deferred to the appendix.
Lemma 4.
Lastly, we lower bound the remaining term in (5.5) using knowledge of the structure of the solution ḡ*, as derived in Section 4. This is outlined below as Lemma 5; its proof is deferred to the appendix. This completes the proof of Theorem 1.
Lemma 5.
Denoting N(λH) to be the null space of λH, the following holds for the solution ḡ* to (4.4).

If z̄ is not orthogonal to N(λH), then ḡ* is unique and
(5.7)
If z̄ ⊥ N(λH) and ‖(λH)† z̄‖₂² > n, then ḡ* is unique and
(5.8)
6 Numerical experiments
This section contains numerical experiments for the two noise models discussed in Section 2: the Uniform model (with noise samples generated uniformly at random in a bounded interval, whose size we refer to as the noise level γ) and the Gaussian model. For each experiment (averaged over 20 trials), we show the RMSE, on a log scale, for the denoised mod 1 samples and for the recovered samples of f. For the latter, we compute the RMSE after we mod out the best global shift (any algorithm that takes as input the mod 1 samples will be able to recover f only up to a global shift). We compare the performance of three algorithms.
OLS denotes the algorithm based on the least-squares formulation (4.13) used to recover the samples of f, working directly with the noisy mod 1 samples; the estimated f mod 1 values are then obtained as the corresponding mod 1 versions. QCQP denotes Algorithm 1, where the unwrapping stage is performed via OLS (4.13). iQCQP denotes an iterated version of QCQP, wherein we repeatedly denoise the noisy estimates (via Stage 1 of Algorithm 1) for a number of iterations, and finally perform the unwrapping stage via OLS (4.13), to recover the sample estimates of f.
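For concreteness, the shift-invariant error used in these comparisons can be computed as below; this is a minimal sketch, where the best global shift is taken to be the L2-optimal one, i.e., the mean of the residual.

```python
import numpy as np

# RMSE after modding out the best global shift: any method working from mod 1
# samples recovers f only up to an additive constant, so the estimate is
# compared against the truth after the L2-optimal shift (the residual mean).
def rmse_up_to_shift(f_hat, f_true):
    shift = np.mean(f_true - f_hat)
    return float(np.sqrt(np.mean((f_hat + shift - f_true) ** 2)))

f_true = np.array([0.0, 1.0, 2.0, 3.0])
assert np.isclose(rmse_up_to_shift(f_true + 5.0, f_true), 0.0)
```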
Figure 4 shows several denoising instances as we increase the noise level γ in the Uniform noise model. Notice that OLS starts failing once the noise level is moderately large, while QCQP still estimates the samples of f well. Interestingly, iQCQP performs quite well even at noise levels where QCQP starts failing, and produces highly smooth and accurate estimates. It would be interesting to investigate the properties of iQCQP in future work. Analogous results for the Gaussian model are shown in the appendix.
Figures 5 and 6 plot the RMSE (on a log scale) for the denoised mod 1 samples and the recovered samples of f versus the noise level, for the Uniform noise model. They illustrate the importance of the choice of the regularization parameter λ. If λ is too small, then QCQP brings negligible improvement in performance, and sometimes even has worse RMSE than the raw noisy samples. However, for a larger λ, QCQP has a strictly smaller error than OLS and the raw noisy samples. Interestingly, iQCQP typically performs very well, even for small λ.
Figure 9 plots the RMSE (on a log scale) for both the denoised mod 1 samples and the recovered samples of f, versus the regularization parameter λ (for the Uniform noise model). Observe that for large enough λ, QCQP shows strictly smaller RMSE than both the initial noisy input data and OLS. Furthermore, we remark that iQCQP typically has superior performance to QCQP, except for small values of λ.
7 Concluding remarks
We considered the problem of denoising noisy mod 1 samples of a function. We proposed a method centered around solving a trust region subproblem with a sphere constraint, and provided extensive empirical evidence, as well as theoretical analysis, that highlight its robustness to noise. There are several possible directions for future work. One would be to better understand the unwrapping stage of our approach, and potentially to explore a patch-based divide-and-conquer method that solves the problem locally and then integrates the local solutions into a globally consistent framework, in the spirit of existing methods from the group synchronization literature [20, 7]. Another natural question to consider would be "single-stage methods" that directly output denoised estimates of the original real-valued samples. Yet another direction, which we address in ongoing work, is to demonstrate the robustness of the proposed method to other noise models such as Gaussian and Bernoulli-uniform noise, and provide theoretical guarantees. Finally, an interesting question would be to consider the regression problem, where one attempts to learn a smooth, robust estimate of the underlying f mod 1 function itself (not just its samples) and/or of the original f, from noisy mod 1 samples of f.
8 Acknowledgements
We thank Joel Tropp for bringing this problem to our attention. We are grateful to Afonso Bandeira and Joel Tropp for insightful discussions on the topic and pointing out the relevant literature. We are also grateful to Yuji Nakatsukasa for many helpful discussions on the trust region subproblem, and for kindly providing us the code for the algorithm in his paper [1]. This work was supported by EPSRC grant EP/N510129/1 at the Alan Turing Institute.
References
 [1] S. Adachi, S. Iwata, Y. Nakatsukasa, and A. Takeda. Solving the trustregion subproblem by a generalized eigenvalue problem. SIAM Journal on Optimization, 27(1):269–291, 2017.
 [2] A. Bhandari, F. Krahmer, and R. Raskar. On unlimited sampling. arXiv e-prints, arXiv:1707.06340v1, 2017.
 [3] I.M. Bomze, M. Budinich, P.M. Pardalos, and M. Pelillo. The maximum clique problem. In Handbook of Combinatorial Optimization, pages 1–74. Kluwer Academic Publishers, 1999.
 [4] D.J. Bone. Fourier fringe analysis: the twodimensional phase unwrapping problem. Appl. Opt., 30(25):3627–3632, 1991.
 [5] E.J. Candès, Y.C. Eldar, T. Strohmer, and V. Voroninski. Phase retrieval via matrix completion. SIAM Journal on Imaging Sciences, 6(1):199–225, 2013.
 [6] A. Collaro, G. Franceschetti, F. Palmieri, and M.S. Ferreiro. Phase unwrapping by means of genetic algorithms. J. Opt. Soc. Am. A, 15(2):407–418, 1998.
 [7] M. Cucuringu. SyncRank: Robust Ranking, Constrained Ranking and Rank Aggregation via Eigenvector and Semidefinite Programming Synchronization. IEEE Transactions on Network Science and Engineering, 3(1):58–79, 2016.
 [8] M. Fiedler. Algebraic connectivity of graphs. Czechoslovak Mathematical Journal, 23(2):298–305, 1973.
 [9] N.I.M. Gould, S. Lucidi, M. Roma, and P.L. Toint. Solving the trust-region subproblem using the Lanczos method. SIAM Journal on Optimization, 9(2):504–525, 1999.
 [10] W.W. Hager. Minimizing a quadratic over a sphere. SIAM J. on Optimization, 12(1):188–208, 2001.
 [11] A. Hooper and H.A. Zebker. Phase unwrapping in three dimensions with application to InSAR time series. J. Opt. Soc. Am. A, 24(9):2737–2747, 2007.
 [12] K. Jaganathan, Y.C. Eldar, and B. Hassibi. Phase retrieval: An overview of recent developments. CoRR, abs/1510.07713, 2015.
 [13] R.M. Karp. Reducibility among combinatorial problems. Complexity of Computer Computations, pages 85–103, 1972.
 [14] J.J. Moré and D.C. Sorensen. Computing a trust region step. SIAM Journal on Scientific and Statistical Computing, 4(3):553–572, 1983.
 [15] MIT News Office. Ultra-high-contrast digital sensing, 2017. http://news.mit.edu/2017/ultrahighcontrastdigitalsensingcameras0714.
 [16] B. Osmanoglu, T. H. Dixon, and S. Wdowinski. ThreeDimensional Phase Unwrapping for Satellite Radar Interferometry, I: DEM Generation. IEEE Transactions on Geoscience and Remote Sensing, 52(2):1059–1075, 2014.
 [17] M.D. Pritt. Phase Unwrapping by Means of Multigrid Techniques for Interferometric SAR. Geoscience and Remote Sensing, IEEE Transactions on, 34(3):728–738, 1996.
 [18] F. Rendl and H. Wolkowicz. A semidefinite framework for trust region subproblems with applications to large scale minimization. Mathematical Programming, 77(1):273–299, 1997.
 [19] M. Rojas, A.S. Santos, and D.C. Sorensen. A new matrixfree algorithm for the largescale trustregion subproblem. SIAM Journal on Optimization, 11(3):611–646, 2001.
 [20] A. Singer. Angular synchronization by eigenvectors and semidefinite programming. Appl. Comput. Harmon. Anal., 30(1):20–36, 2011.
 [21] D.C. Sorensen. Newton's method with a model trust region modification. SIAM Journal on Numerical Analysis, 19(2):409–426, 1982.
 [22] H. A. Zebker and Y. Lu. Phase unwrapping algorithms for radar interferometry: residuecut, leastsquares, and synthesis algorithms. Journal of the Optical Society of America A, 15:586–598, 1998.
Supplementary Material: On denoising modulo 1 samples of a function.
Appendix A Rewriting QCQP in real domain
Appendix B Trust region subproblem with ball/sphere constraint
Consider the following two optimization problems
(B.1) (P1): min_{x ∈ ℝ^d; ‖x‖₂ ≤ r} x^T A x − 2 b^T x, (P2): min_{x ∈ ℝ^d; ‖x‖₂ = r} x^T A x − 2 b^T x,
with A ∈ ℝ^{d×d} being a symmetric matrix, b ∈ ℝ^d, and r > 0. (P1) is known as the trust region subproblem in the optimization literature, and has been studied extensively, with a rich body of work; (P2) is closely related to (P1), with a non-convex equality constraint. There exist algorithms that efficiently find the global solution of (P1) and (P2), to arbitrary accuracy. In this section, we provide a discussion of the characterization of the solutions of these two problems.
To begin with, it is useful to note for (P1) that

If the solution lies in the interior of the feasible domain, then A ⪰ 0 must hold. This follows from the second-order necessary condition for a local minimizer.

In the other direction, if A is not positive semidefinite, then the solution will always lie on the boundary.
Surprisingly, one can fully characterize the solutions of (P1), as shown in the following lemma.
Lemma 6 ([21]).
x* ∈ ℝ^d is a solution to (P1) iff ‖x*‖₂ ≤ r and there exists μ* ≥ 0 such that (a) (A + μ*I)x* = b, (b) A + μ*I ⪰ 0, and (c) μ*(‖x*‖₂ − r) = 0. Moreover, if A + μ*I ≻ 0, then the solution is unique.
Note that if the solution lies in the interior, and if A is p.s.d. and singular, then there will also be a pair of solutions on the boundary of the ball (obtained by adding to the interior solution a suitably scaled null vector of A). This is easily verified. The solution to (P2) is characterized by the following lemma.
Lemma 7 ([10, 21]).
x* ∈ ℝ^d is a solution to (P2) iff ‖x*‖₂ = r and there exists μ* ∈ ℝ such that (a) (A + μ*I)x* = b and (b) A + μ*I ⪰ 0. Moreover, if A + μ*I ≻ 0, then the solution is unique.
The solutions to (P1) and (P2) are closely linked to solving the nonlinear equation φ(μ) = r², where
(B.2) φ(μ) := ‖(A + μI)^{−1} b‖₂²,
defined for all μ such that A + μI is nonsingular.