Fast Signal Recovery from Saturated Measurements by Linear Loss and Nonconvex Penalties

Abstract

Sign information is the key to overcoming the inevitable saturation error in compressive sensing systems, which causes information loss and results in bias. For sparse signal recovery from saturation, we propose to use a linear loss to improve the effectiveness over existing methods that utilize hard constraints/hinge loss for sign consistency. Due to the use of the linear loss, an analytical solution in the update progress is obtained, and some nonconvex penalties become applicable, e.g., the minimax concave penalty, the $\ell_0$ norm, and the sorted $\ell_1$ norm. Theoretical analysis reveals that the estimation error can still be bounded. In general, with the linear loss and nonconvex penalties, the recovery performance is significantly improved and the computational time is largely reduced, as verified by the numerical experiments.

compressive sensing, saturation, linear loss, nonconvex penalty, ADMM.

I Introduction

Saturation is unavoidable in many sensing systems due to the limited range of detectors or analog-to-digital converters (ADCs) [1]. When there are saturated measurements, the observation is nonlinear, and the performance of algorithms using linear observations degrades. We model a measurement from a linear system as $y_i = \boldsymbol{\phi}_i^\top \bar{\mathbf{x}} + \varepsilon_i$, where $\bar{\mathbf{x}} \in \mathbb{R}^N$ is the true signal, $\boldsymbol{\phi}_i \in \mathbb{R}^N$ is a sensing vector, and $\varepsilon_i$ is the noise. The observation with a bounded-range detector or ADC then becomes

$$u_i = \begin{cases} U, & y_i \geq U, \\ y_i, & L < y_i < U, \\ L, & y_i \leq L, \end{cases}$$

where $U$ and $L$ are the upper and lower bounds, respectively. We partition the observations and the sensing matrix into the unsaturated and saturated parts:

where and are the upper and lower bounds, respectively. We partition the observations and the sensing matrix into the unsaturated and saturated parts:

  1. The unsaturated part: observations $\mathbf{u}^{\mathrm{us}} \in \mathbb{R}^{M_{\mathrm{us}}}$ and the corresponding sensing matrix $\Phi^{\mathrm{us}}$. Clearly, $L < u_i^{\mathrm{us}} < U$ for $i = 1, \dots, M_{\mathrm{us}}$;

  2. The saturated part: observations $\mathbf{u}^{\mathrm{s}} \in \mathbb{R}^{M_{\mathrm{s}}}$ and the corresponding sensing matrix $\Phi^{\mathrm{s}}$. All the observations in this part are out of the range and are recorded as $U$ or $L$.

In addition, an indicator vector $\mathbf{s} \in \{-1, +1\}^{M_{\mathrm{s}}}$ is defined for the saturated part. Similarly, we partition the noise $\boldsymbol{\varepsilon}$ into two parts $\boldsymbol{\varepsilon}^{\mathrm{us}}$ and $\boldsymbol{\varepsilon}^{\mathrm{s}}$. Then $s_i = +1$ when $u_i^{\mathrm{s}} = U$ and $s_i = -1$ otherwise.
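To make the measurement model concrete, the following Python sketch simulates clipped measurements and performs the partition above; it is a minimal illustration, and all names (saturated_measurements, noise_std, etc.) are our own choices rather than notation from any released implementation.

```python
import numpy as np

def saturated_measurements(Phi, x_bar, L, U, noise_std=0.01, seed=0):
    """Simulate bounded-range (clipped) compressive measurements and
    split them into the unsaturated and saturated parts."""
    rng = np.random.default_rng(seed)
    y = Phi @ x_bar + noise_std * rng.standard_normal(Phi.shape[0])
    u = np.clip(y, L, U)                    # bounded-range detector/ADC

    sat = (u >= U) | (u <= L)               # indices of saturated observations
    u_us, Phi_us = u[~sat], Phi[~sat]       # unsaturated observations and rows
    Phi_s = Phi[sat]                        # rows of the saturated part
    s = np.where(u[sat] >= U, 1.0, -1.0)    # indicator: +1 at U, -1 at L
    return u_us, Phi_us, Phi_s, s
```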

In this letter, we consider signal recovery from saturated measurements, which is hard due to the loss of information. Compressive sensing (CS, [2]) is a promising technique for signal recovery from a relatively small number of observations. It has been extensively studied and successfully applied in the last decade [3, 4, 5, 6, 7]. However, traditional CS is not able to deal with saturation. Two approaches to saturation in CS are saturation rejection [8, 9] and saturation consistency [10, 11, 12, 13, 14]. Saturation rejection drops the saturated part and thus may lead to insufficient measurements and poor results. To increase the accuracy, we need to make good use of the saturated part. Although the exact values of the saturated observations are unknown, we can encode this information in inequality constraints or loss functions. This motivates many algorithms, especially in the extreme one-bit case [15, 16, 17, 18].

The linear inequality constraint $s_i(\boldsymbol{\phi}_i^{\mathrm{s}\top}\mathbf{x} - u_i^{\mathrm{s}}) \geq 0$ obviously holds in noise-free cases and thus is used in [10, 11] to enforce the saturation consistency. Specifically, robust dequantized compressive sensing (RDCS) is proposed in [11] and takes the following form:

$$\min_{\mathbf{x}} \ \frac{1}{2}\|\Phi^{\mathrm{us}}\mathbf{x} - \mathbf{u}^{\mathrm{us}}\|_2^2 + \mu \|\mathbf{x}\|_1 \quad \text{s.t.} \quad s_i(\boldsymbol{\phi}_i^{\mathrm{s}\top}\mathbf{x} - u_i^{\mathrm{s}}) \geq 0, \ i = 1, \dots, M_{\mathrm{s}},$$

where $\mu > 0$ is a regularization parameter. (RDCS handles both quantization error and saturation error; the former is out of the scope of this letter, and hence the corresponding terms are ignored here.) When there are changes of the binary observations, namely sign flips, during the measurement and transmission, the constraints are not satisfied by the true signal. Therefore, RDCS is not robust to sign flips. Instead of constraints, the hinge loss function is used in mixed one-bit compressive sensing (M1bit-CS) [12] to encourage the saturation consistency, and the method is robust to noise. However, the algorithms for both RDCS and M1bit-CS are slow and not applicable to large-scale problems, due to the use of hard constraints or the hinge loss. Moreover, it is difficult to incorporate the nonconvex penalties that improve the recovery accuracy of sparse signals.

We propose to use the linear loss for sign consistency and develop fast algorithms with nonconvex penalty functions for sparse signal recovery. We formulate the problem into a form such that the alternating direction method of multipliers (ADMM, [19]) can be applied. Then, based on the results in [20, 21], the subproblems with nonconvex penalties such as the $\ell_0$ norm, the sorted $\ell_1$ norm [22, 23], and the minimax concave penalty (MCP, [24]) have analytical solutions or can be solved easily.

II Sparse Signal Recovery from Saturated Measurements

To deal with saturation, we introduce the linear loss in place of the hard constraints and the hinge loss used in existing works. The problem with the linear loss is:

$$\min_{\mathbf{x}} \ \frac{1}{2}\|\Phi^{\mathrm{us}}\mathbf{x} - \mathbf{u}^{\mathrm{us}}\|_2^2 - \beta \sum_{i=1}^{M_{\mathrm{s}}} s_i \boldsymbol{\phi}_i^{\mathrm{s}\top}\mathbf{x} + r(\mathbf{x}) \quad \text{s.t.} \quad \|\mathbf{x}\|_2 \leq R, \qquad (1)$$

where $r(\mathbf{x})$ is a regularization term for sparsity, $\beta$ is a trade-off parameter between the unsaturated and saturated parts, and $R$ is a given upper bound for $\|\mathbf{x}\|_2$. Note that the constraint is crucial when there are many saturated observations, because one-bit information has no capability to distinguish amplitudes. Here we assume that the $\ell_2$ norm of the true signal is given as $R$; there are algorithms for estimating it when it is not given [25].

The key difference between this model and the model in [12] is the use of the linear loss. Thus we call (1) mixed one-bit CS with linear loss (M1bit-CS-L). The motivation comes from a good property of the linear loss: it brings no extra computational burden in optimization. Specifically, a subproblem of M1bit-CS-L has a closed-form solution, while those of RDCS and M1bit-CS do not; it follows that M1bit-CS-L can be solved as efficiently as standard CS.
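To make the contrast concrete, the two sign-consistency losses can be written as below; this is a sketch with an illustrative margin value, and the function names are ours. The hinge loss is kinked and flat on one side, while the linear loss is linear in $\mathbf{x}$, which is what keeps the data-fit subproblem quadratic.

```python
import numpy as np

def hinge_loss(t, margin=1.0):
    """Hinge loss on t = s_i * <phi_i, x> (used by M1bit-CS):
    nonsmooth, so its subproblem has no closed-form solution."""
    return np.maximum(0.0, margin - t)

def linear_loss(t):
    """Linear loss on t = s_i * <phi_i, x> (used by M1bit-CS-L):
    linear in x, so the data-fit subproblem stays quadratic."""
    return -t
```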

Before introducing the algorithms, we show some theoretical results of M1bit-CS-L in terms of the error bound $\|\hat{\mathbf{x}} - \bar{\mathbf{x}}\|_2$, where $\hat{\mathbf{x}}$ is the optimal solution and $\bar{\mathbf{x}}$ is the true signal. Assume $R = 1$ and $\|\bar{\mathbf{x}}\|_2 = 1$ without loss of generality. Then the constraint in (1) is $\|\mathbf{x}\|_2 \leq 1$.

Assumption 1.

The true signal is sparse, i.e., $\|\bar{\mathbf{x}}\|_0 \leq s$ with $s \ll N$. Moreover, the expectation of the noise of the unsaturated part is zero, i.e., $\mathbb{E}[\varepsilon_i^{\mathrm{us}}] = 0$ for $i = 1, \dots, M_{\mathrm{us}}$.

Assumption 2.

Each row of $\Phi$ is an independent realization from a normal distribution, and each element of the noise $\boldsymbol{\varepsilon}$ is independently drawn at random.

Following [26], we define a function to model the noise in the saturated part as

Lemma 1.

Let

then

Let

Then based on Lemma 1, we prove:

Lemma 2.

Suppose Assumptions 1 and 2 hold and is given. With probability at least , the following holds:

The proofs of Lemma 1 and Lemma 2 are presented in the supplemental document. Based on these lemmas, we bound the error for M1bit-CS-L by the following theorem.

Theorem 1.

Suppose Assumptions 1 and 2 hold, $\hat{\mathbf{x}}$ is the optimal solution of (1), and $\bar{\mathbf{x}}$ is the underlying true signal.

If , with probability at least ,

If , with probability at least ,

Proof.

Since $\bar{\mathbf{x}}$ is a feasible solution of (1) and $\hat{\mathbf{x}}$ is the optimal solution of (1), we obtain:

The convexity of gives:

where the last inequality follows from Lemma 2.

In the following, we define $T$ as the support set of $\bar{\mathbf{x}}$ and $T^c$ as its complement, i.e., $T^c = \{1, \dots, N\} \setminus T$.

First, let , and we have

(2)

Thus, using and , we obtain

Therefore,

which implies .

Next, let with the same in Lemma 2. We have a similar inequality to (2):

From the definition of and , we have and

Thus, we obtain

which implies that . Then the theorem is proved. ∎

III Fast Algorithms for Convex and Nonconvex Penalties

In this section, we design fast algorithms for both convex and nonconvex penalties in the framework of the alternating direction method of multipliers (ADMM) to solve (1). Introducing an auxiliary vector $\mathbf{z}$ and an additional constraint $\mathbf{z} = \mathbf{x}$, we have the following equivalent problem of (1):

$$\min_{\mathbf{x}, \mathbf{z}} \ \frac{1}{2}\|\Phi^{\mathrm{us}}\mathbf{x} - \mathbf{u}^{\mathrm{us}}\|_2^2 - \beta \sum_{i=1}^{M_{\mathrm{s}}} s_i \boldsymbol{\phi}_i^{\mathrm{s}\top}\mathbf{x} + r(\mathbf{z}) + \iota_R(\mathbf{z}) \quad \text{s.t.} \quad \mathbf{z} = \mathbf{x}.$$

The corresponding augmented Lagrangian is

$$\mathcal{L}_\rho(\mathbf{x}, \mathbf{z}, \boldsymbol{\lambda}) = \frac{1}{2}\|\Phi^{\mathrm{us}}\mathbf{x} - \mathbf{u}^{\mathrm{us}}\|_2^2 - \beta\, \mathbf{s}^\top \Phi^{\mathrm{s}}\mathbf{x} + r(\mathbf{z}) + \iota_R(\mathbf{z}) + \boldsymbol{\lambda}^\top(\mathbf{x} - \mathbf{z}) + \frac{\rho}{2}\|\mathbf{x} - \mathbf{z}\|_2^2,$$

where $\iota_R$ is the indicator function returning 0 if $\|\mathbf{z}\|_2 \leq R$ and $+\infty$ otherwise. Then we establish the following two subproblems to update $\mathbf{x}$ and $\mathbf{z}$, respectively.

  1. $\mathbf{x}$-subproblem:

     $$\mathbf{x}^{k+1} = \arg\min_{\mathbf{x}} \ \mathcal{L}_\rho(\mathbf{x}, \mathbf{z}^k, \boldsymbol{\lambda}^k).$$

     It is a quadratic problem, and its solution is

     $$\mathbf{x}^{k+1} = \big(\Phi^{\mathrm{us}\top}\Phi^{\mathrm{us}} + \rho I\big)^{-1}\big(\Phi^{\mathrm{us}\top}\mathbf{u}^{\mathrm{us}} + \beta\, \Phi^{\mathrm{s}\top}\mathbf{s} + \rho\mathbf{z}^k - \boldsymbol{\lambda}^k\big).$$

  2. $\mathbf{z}$-subproblem:

     $$\mathbf{z}^{k+1} = \arg\min_{\mathbf{z}} \ \mathcal{L}_\rho(\mathbf{x}^{k+1}, \mathbf{z}, \boldsymbol{\lambda}^k),$$

     which can be reformulated as:

     $$\mathbf{z}^{k+1} = \arg\min_{\|\mathbf{z}\|_2 \leq R} \ r(\mathbf{z}) + \frac{\rho}{2}\big\|\mathbf{z} - (\mathbf{x}^{k+1} + \boldsymbol{\lambda}^k/\rho)\big\|_2^2.$$

According to [20], analytical solutions exist for many convex and nonconvex penalty functions. Some examples follow; a code sketch of the resulting update is given after this list.

  • $\ell_1$ norm (L1): $r(\mathbf{x}) = \lambda\|\mathbf{x}\|_1$.

  • $\ell_0$ penalty (L0): $r(\mathbf{x}) = \lambda\|\mathbf{x}\|_0$.

  • minimax concave penalty (MCP): $r(\mathbf{x}) = \sum_{i=1}^{N} P_{\lambda, b}(x_i)$, where $P_{\lambda, b}$ is defined as:

    $$P_{\lambda, b}(t) = \begin{cases} \lambda|t| - \dfrac{t^2}{2b}, & |t| \leq b\lambda, \\ \dfrac{b\lambda^2}{2}, & |t| > b\lambda. \end{cases}$$

  • nonconvex sorted $\ell_1$ norm (sL1): $r(\mathbf{x}) = \sum_{i=1}^{N} \lambda_i |x_{\pi(i)}|$, where $0 \leq \lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_N$, and $\pi$ is a permutation of $\{1, 2, \dots, N\}$ such that $|x_{\pi(1)}| \geq |x_{\pi(2)}| \geq \cdots \geq |x_{\pi(N)}|$.

More examples can be found in [20].
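As an illustration of these analytical updates, the sketch below implements the $\mathbf{z}$-subproblem for the L1 penalty: soft-thresholding followed by projection onto the $\ell_2$ ball, which is exact in this case. The nonconvex penalties (L0, MCP, sL1) admit analogous case-by-case solutions; see [20] for the precise formulas.

```python
import numpy as np

def prox_l1_ball(v, thresh, R):
    """Solve min_z lam*||z||_1 + (rho/2)*||z - v||^2 s.t. ||z||_2 <= R,
    with thresh = lam / rho: soft-threshold, then project onto the ball."""
    w = np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)  # soft-thresholding
    nw = np.linalg.norm(w)
    return w if nw <= R else (R / nw) * w                 # ball projection
```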

The ADMM algorithm is described in Alg. 1.

Input: $\mathbf{u}^{\mathrm{us}}$, $\Phi^{\mathrm{us}}$, $\mathbf{s}$, $\Phi^{\mathrm{s}}$, and parameters $\beta$, $\rho$, $R$
Output: recovered signal $\mathbf{x}$
1:  initialize $\mathbf{z}^0$ and $\boldsymbol{\lambda}^0$
2:  repeat
3:     update $\mathbf{x}^{k+1}$ by the analytical solution of the $\mathbf{x}$-subproblem
4:     solve the $\mathbf{z}$-subproblem using [20]
5:     $\boldsymbol{\lambda}^{k+1} = \boldsymbol{\lambda}^k + \rho(\mathbf{x}^{k+1} - \mathbf{z}^{k+1})$
6:  until the stopping criterion is satisfied
Algorithm 1 ADMM for M1bit-CS-L
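For reference, a compact end-to-end sketch of Alg. 1 with the L1 penalty is given below, reusing prox_l1_ball from the previous sketch; the default parameters and stopping rule are illustrative choices, not the settings used in the experiments.

```python
import numpy as np

def m1bit_cs_l(u_us, Phi_us, Phi_s, s, beta=1.0, lam=0.1, rho=1.0, R=1.0,
               max_iter=500, tol=1e-6):
    """ADMM for M1bit-CS-L (Alg. 1) with the L1 penalty."""
    N = Phi_us.shape[1]
    z = np.zeros(N)
    lmbd = np.zeros(N)
    A = Phi_us.T @ Phi_us + rho * np.eye(N)       # x-update system matrix
    b0 = Phi_us.T @ u_us + beta * (Phi_s.T @ s)   # constant part of the RHS
    for _ in range(max_iter):
        x = np.linalg.solve(A, b0 + rho * z - lmbd)          # x-subproblem
        z_new = prox_l1_ball(x + lmbd / rho, lam / rho, R)   # z-subproblem [20]
        lmbd = lmbd + rho * (x - z_new)                      # dual update
        if np.linalg.norm(z_new - z) <= tol * max(1.0, np.linalg.norm(z)):
            return z_new
        z = z_new
    return z
```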

ADMM is also applied in [11] and [12] for RDCS and M1bit-CS, respectively. However, since the corresponding subproblems with hard constraints or the hinge loss do not have analytical updates, the algorithm for M1bit-CS-L, even with nonconvex penalties, is much faster than those for RDCS and M1bit-CS.

IV Numerical Experiments

We generate the data in the following steps: i) generate a $K$-sparse signal in $\mathbb{R}^N$ with nonzero components following the normal distribution; ii) normalize the signal such that $\|\bar{\mathbf{x}}\|_2 = 1$; iii) generate a sensing matrix $\Phi \in \mathbb{R}^{M \times N}$ with elements independently drawn from a normal distribution; iv) add independent Gaussian noise to the measurements, where the noise level is the ratio of the variance of the noise to that of the noise-free measurements; v) set the saturation thresholds such that a prescribed ratio of the measurements are saturated.
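The five steps can be reproduced by the following sketch; the symmetric thresholds and the standard normal sensing ensemble are illustrative assumptions where the text leaves the exact choices unspecified.

```python
import numpy as np

def generate_data(N=1000, M=500, K=20, noise_level=0.1, sat_ratio=0.2, seed=0):
    """Synthetic data for the saturation experiments (steps i-v)."""
    rng = np.random.default_rng(seed)
    x_bar = np.zeros(N)                              # i) K-sparse signal
    x_bar[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    x_bar /= np.linalg.norm(x_bar)                   # ii) unit l2 norm
    Phi = rng.standard_normal((M, N))                # iii) normal sensing matrix
    y_clean = Phi @ x_bar
    sigma = np.sqrt(noise_level * np.var(y_clean))   # iv) prescribed noise ratio
    y = y_clean + sigma * rng.standard_normal(M)
    U = np.quantile(np.abs(y), 1.0 - sat_ratio)      # v) saturate sat_ratio of y
    return x_bar, Phi, y, -U, U
```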

The task is to recover the sparse signal from unsaturated and saturated measurements. The proposed Alg. 1 with the four sparse penalties discussed previously is compared with RDCS, M1bit-CSC, and LASSO [27]. Each algorithm has a parameter to tune the sparsity. For RDCS, M1bit-CSC, LASSO, and Alg.1-L1, we use the same value, which is chosen by cross-validation based on LASSO. For the other methods, we choose parameters such that the number of nonzero components is no more than that of $\ell_1$-norm minimization. The signal-to-noise ratio (SNR) and the angular error (AE) are used as error metrics, and they are defined as

$$\mathrm{SNR} = 10\log_{10}\frac{\|\bar{\mathbf{x}}\|_2^2}{\|\hat{\mathbf{x}} - \bar{\mathbf{x}}\|_2^2}, \qquad \mathrm{AE} = \frac{1}{\pi}\arccos\frac{\langle \bar{\mathbf{x}}, \hat{\mathbf{x}} \rangle}{\|\bar{\mathbf{x}}\|_2 \|\hat{\mathbf{x}}\|_2},$$

where $\bar{\mathbf{x}}$ is the true signal and $\hat{\mathbf{x}}$ is the recovered one. All the experiments are conducted in Matlab R2016b on Windows 7 with a 3.20 GHz Core i5 CPU and 4.0 GB RAM.
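In code, the two metrics are a direct transcription of the definitions above:

```python
import numpy as np

def snr_db(x_true, x_hat):
    """Reconstruction signal-to-noise ratio, in dB."""
    return 10.0 * np.log10(np.sum(x_true**2) / np.sum((x_hat - x_true)**2))

def angular_error(x_true, x_hat):
    """Normalized angle between the true and recovered signals."""
    c = x_true @ x_hat / (np.linalg.norm(x_true) * np.linalg.norm(x_hat))
    return np.arccos(np.clip(c, -1.0, 1.0)) / np.pi
```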

First, we fix the problem size and vary the saturation ratio. The average SNR and AE over 100 trials are shown in Fig. 1. The disadvantage of LASSO becomes obvious as the saturation ratio increases: once the number of unsaturated measurements is insufficient, its accuracy drops dramatically. In contrast, the accuracy of the other methods declines relatively slowly due to the knowledge obtained from the saturated measurements. The signal recovery accuracy of M1bit-CSC and Alg.1-L1 is similar, which coincides with our analysis in Section II that using the linear loss has little negative impact on performance. Moreover, as shown in Table I, Alg. 1 is generally 10 times faster than RDCS and M1bit-CSC because of the analytical update.

Fig. 1: (a) AE and (b) SNR averaged over 100 trials for different saturation ratios.
Methods    | M=500, N=1000 | M=1000, N=1000 | M=500, N=2000 | M=1000, N=2000 | M=1500, N=2000
LASSO      | 0.0121 s      | 0.0436 s       | 0.0296 s      | 0.1333 s       | 0.1563 s
RDCS       | 0.9355 s      | 0.5129 s       | 8.5300 s      | 7.9630 s       | 5.5730 s
M1bit-CSC  | 0.9627 s      | 1.0500 s       | 8.5410 s      | 9.2600 s       | 8.7560 s
Alg.1–sL1  | 0.0929 s      | 0.1340 s       | 0.3713 s      | 0.5177 s       | 0.7089 s
Alg.1–MCP  | 0.1306 s      | 0.1663 s       | 0.5907 s      | 0.7127 s       | 0.7265 s
Alg.1–L0   | 0.1073 s      | 0.1430 s       | 0.5604 s      | 0.6758 s       | 0.6958 s
Alg.1–L1   | 0.1004 s      | 0.1375 s       | 0.5548 s      | 0.6671 s       | 0.6854 s
TABLE I: Average computational time for different M and N at a fixed saturation ratio.

With the use of nonconvex penalties, the reconstruction performance is significantly improved by enhancing the sparsity, as shown in Fig. 2 for a varying number of measurements. Alg.1-sL1 and Alg.1-MCP outperform the other methods on both AE and SNR, except when the number of measurements is too small for any method to recover a reasonable signal. The $\ell_0$ norm is the true sparsity measure; however, its optimization is easily trapped in a bad local optimum, so the average performance of Alg.1-L0 is not very good. In Fig. 2, one can also observe the effectiveness of using the saturated information. For example, when the number of total measurements is 800 and the saturation ratio is 15%, LASSO effectively uses only 680 unsaturated measurements; yet with 680 measurements, including both saturated and unsaturated ones, the other models give comparable results.

Fig. 2: (a) AE and (b) SNR averaged over 100 trials for different numbers of measurements.

Last, we evaluate the performance of these methods as the sparsity level changes; the results are shown in Fig. 3. The performance again confirms the effectiveness of the proposed algorithms: i) the use of the linear loss for one-bit information does not decrease the reconstruction performance; ii) the use of suitable nonconvex penalties does improve the reconstruction quality.

Fig. 3: (a) AE and (b) SNR averaged over 100 trials for different numbers of nonzero components.

V Conclusion

To recover sparse signals from sensing systems with saturation, the information contained in the saturated part is very important. We propose minimizing a linear loss for saturation consistency. The linear loss can be efficiently minimized by the proposed algorithm, and it allows the use of nonconvex penalties to further enhance the sparsity. The error bound given in this letter also theoretically guarantees the good performance of the linear loss. Numerical experiments confirm the good performance of the proposed method in both accuracy and efficiency.

References

  1. J. Haboba, M. Mangia, F. Pareschi, R. Rovatti, and G. Setti, “A pragmatic look at some compressive sensing architectures with saturation and quantization,” IEEE J. Emerg. Sel. Topics Circuits Syst., vol. 2, no. 3, pp. 443–459, 2012.
  2. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
  3. E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489–509, 2006.
  4. E. J. Candès and T. Tao, “Near-optimal signal recovery from random projections: Universal encoding strategies?” IEEE Trans. Inf. Theory, vol. 52, no. 12, pp. 5406–5425, 2006.
  5. E. J. Candès and M. B. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag., vol. 25, no. 2, pp. 21–30, 2008.
  6. M. Lustig, D. Donoho, and J. M. Pauly, “Sparse MRI: The application of compressed sensing for rapid MR imaging,” Magnetic Resonance in Medicine, vol. 58, no. 6, pp. 1182–1195, 2007.
  7. J. Provost and F. Lesage, “The application of compressed sensing for photo-acoustic tomography,” IEEE Trans. Med. Imag., vol. 28, no. 4, pp. 585–594, 2009.
  8. L. Jacques, D. K. Hammond, and J. M. Fadili, “Dequantizing compressed sensing: When oversampling and non-gaussian constraints combine,” IEEE Trans. Inf. Theory, vol. 57, no. 1, pp. 559–571, 2011.
  9. J. N. Laska, P. Boufounos, and R. G. Baraniuk, “Finite range scalar quantization for compressive sensing,” Rice University, Houston, TX, Tech. Rep., 2009.
  10. J. N. Laska, P. T. Boufounos, M. A. Davenport, and R. G. Baraniuk, “Democracy in action: Quantization, saturation, and compressive sensing,” Applied and Computational Harmonic Analysis, vol. 31, no. 3, pp. 429–443, 2011.
  11. J. Liu and S. J. Wright, “Robust dequantized compressive sensing,” Applied and Computational Harmonic Analysis, vol. 37, no. 2, pp. 325–346, 2014.
  12. X. Huang, Y. Xia, L. Shi, Y. Huang, M. Yan, J. Hornegger, and A. Maier, “Mixed one-bit compressive sensing with applications to overexposure correction for ct reconstruction,” arXiv preprint arXiv:1701.00694, 2017.
  13. S. Foucart and T. Needham, “Sparse recovery from saturated measurements,” Information and Inference: A Journal of the IMA, p. iaw020, 2017.
  14. S. Foucart and J. Li, “Sparse recovery from inaccurate saturated measurements,” Acta Applicandae Mathematicae, no. 6, pp. 1–18, 2018.
  15. P. T. Boufounos and R. G. Baraniuk, “1-bit compressive sensing,” in Proc. 42nd Annual Conference on Information Sciences and Systems (CISS), 2008, pp. 16–21.
  16. A. Gupta, R. Nowak, and B. Recht, “Sample complexity for 1-bit compressed sensing and sparse classification,” in Proc. IEEE International Symposium on Information Theory (ISIT), 2010, pp. 1553–1557.
  17. M. Yan, Y. Yang, and S. Osher, “Robust 1-bit compressive sensing using adaptive outlier pursuit,” IEEE Trans. Signal Process., vol. 60, no. 7, pp. 3868–3875, 2012.
  18. L. Jacques, J. N. Laska, P. T. Boufounos, and R. G. Baraniuk, “Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors,” IEEE Trans. Inf. Theory, vol. 59, no. 4, pp. 2082–2102, 2013.
  19. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011.
  20. X. Huang and M. Yan, “Nonconvex penalties with analytical solutions for one-bit compressive sensing,” Signal Processing, vol. 144, pp. 341–351, 2018.
  21. R. Zhu and Q. Gu, “Towards a lower sample complexity for robust one-bit compressed sensing,” in International Conference on Machine Learning, 2015, pp. 739–747.
  22. X. Huang, L. Shi, and M. Yan, “Nonconvex sorted $\ell_1$ minimization for sparse approximation,” Journal of the Operations Research Society of China, vol. 3, no. 2, pp. 207–229, 2015.
  23. M. Bogdan, E. van den Berg, C. Sabatti, W. Su, and E. J. Candès, “SLOPE - adaptive variable selection via convex optimization,” The Annals of Applied Statistics, vol. 9, no. 3, p. 1103, 2015.
  24. C.-H. Zhang, “Nearly unbiased variable selection under minimax concave penalty,” The Annals of Statistics, vol. 38, no. 2, pp. 894–942, 2010.
  25. K. Knudson, R. Saab, and R. Ward, “One-bit compressive sensing with norm estimation,” IEEE Trans. Inf. Theory, vol. 62, no. 5, pp. 2748–2758, 2016.
  26. Y. Plan and R. Vershynin, “Robust 1-bit compressed sensing and sparse logistic regression: A convex programming approach,” IEEE Trans. Inf. Theory, vol. 59, no. 1, pp. 482–494, 2013.
  27. R. Tibshirani, “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society: Series B, vol. 58, no. 1, pp. 267–288, 1996.