Noise Invalidation Denoising
Abstract
A denoising technique based on noise invalidation is proposed. The adaptive approach derives a noise signature from the noise order statistics and uses the signature to denoise the data. The novelty of this approach is that the denoising is general purpose: it requires no particular assumption on the structure of the noise-free signal, such as smoothness of the data or sparsity of the coefficients. A further advantage of the method is that it can denoise the corrupted data in any complete basis transformation (orthogonal or non-orthogonal). Experimental results show that the proposed method, called Noise Invalidation Denoising (NIDe), outperforms existing denoising approaches in terms of Mean Square Error (MSE).
Keywords: thresholding, order statistics, confidence region.
1 Introduction
Data denoising approaches are well-known methods with presence in diverse applications and have been studied and developed in research areas ranging from communications to biomedical signal analysis. In denoising techniques, multi-resolution representations of the data are generally used. For example, wavelet shrinkage is based on rejecting those wavelet coefficients that are smaller than a certain value and keeping the remaining coefficients. Thus, the problem of removing noise from a set of observed data is transformed into finding a proper threshold for the data coefficients. The pioneering shrinkage methods, such as VisuShrink and SureShrink, propose thresholds that are functions of the noise variance and the data length [1, 2]. Over the past fifteen years, several thresholding approaches such as [3, 4, 5, 6, 7] have been developed. These methods provide optimum thresholds by focusing on certain properties of the noise-free signal, and they are proposed for particular applications, mostly for the purpose of image denoising. Unlike these approaches, the method presented in this paper focuses only on the properties of the additive noise. By relying on the noise statistics, the method defines a probabilistic region of confidence for the noise coefficients. Consequently, it validates those observed coefficients that are outside the noise confidence region and contain dominant noiseless parts. In the presence of additive colored noise, the required data preprocessing and/or whitening filters may damage the desired noiseless data. Dealing with colored noise without whitening is especially critical if the noise correlation is due to the non-orthogonality of the basis. The proposed method denoises such data by using only the colored noise statistics in the invalidation step.
The principle behind the proposed approach is simple yet powerful: it takes advantage of the properties of the additive noise and demonstrates efficiency in general-purpose denoising applications.
The paper is organized as follows. Section 2 formulates the considered problem and revisits the philosophy of thresholding. Section 3 derives a noise signature from the noise statistics. Section 4 presents the proposed noise invalidation approach. Section 5 provides simulation results, and conclusions are drawn in Section 6.
Notations: We use capital letters (e.g., $V$, $D$) for random variables. Samples of these variables are shown with the corresponding lowercase letters (e.g., $v$, $d$).
2 Problem Formulation and Motivation
Noise-free data vector $\bar{x}$ of length $N$ is corrupted by an additive white Gaussian random process $w$ with zero mean and variance $\sigma^2$. The observed data is

$$y = \bar{x} + w \qquad (1)$$

and can be expressed in terms of a desired orthonormal basis $S = \{s_1, \ldots, s_N\}$ such that

$$\bar{x} = \sum_{i=1}^{N} \theta_i s_i, \qquad (2)$$

where $s_i$ is an element of the orthonormal basis and the desired, unavailable coefficient is

$$\theta_i = \langle \bar{x}, s_i \rangle. \qquad (3)$$

Therefore, the following holds

$$y = \sum_{i=1}^{N} d_i s_i, \qquad (4)$$

and, thanks to the orthonormality of the selected basis $S$, the noise coefficients

$$v_i = \langle w, s_i \rangle \qquad (5)$$

in the available coefficients

$$d_i = \langle y, s_i \rangle = \theta_i + v_i \qquad (6)$$

are also independent identically distributed random variables with the same mean and variance as the noise:

$$v_i \sim \mathcal{N}(0, \sigma^2). \qquad (7)$$

The observed coefficients are soft thresholded by a threshold $T$:

$$\eta_T(d_i) = \operatorname{sign}(d_i)\,\max(|d_i| - T,\ 0). \qquad (8)$$

The estimate of the noise-free data with this threshold is

$$\hat{x}(T) = \sum_{i=1}^{N} \eta_T(d_i)\, s_i. \qquad (9)$$
We provide the optimum threshold by a noise invalidation process.
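As a concrete illustration, the soft-thresholding rule in (8)-(9) can be sketched in a few lines of NumPy. The sketch below uses the identity basis, so the observed coefficients are the noisy samples themselves; the threshold value here is arbitrary and stands in for the one the invalidation procedure will select.

```python
import numpy as np

def soft_threshold(d, T):
    """Soft-threshold coefficients d at level T (Eq. 8): zero out anything
    with |d| <= T and shrink the rest toward zero by T."""
    return np.sign(d) * np.maximum(np.abs(d) - T, 0.0)

# Toy example with the identity basis: one strong coefficient survives
# thresholding while the noise-only coefficients are set to zero.
rng = np.random.default_rng(0)
theta = np.zeros(8)
theta[2] = 5.0                        # sparse noise-free coefficients
d = theta + rng.normal(0.0, 1.0, 8)   # observed coefficients (Eq. 6)
x_hat = soft_threshold(d, T=3.0)      # denoised estimate (Eq. 9)
```

In practice the coefficients would come from a wavelet or other complete-basis transform, and the threshold from the invalidation procedure of Section 4.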
2.1 Main Idea of Thresholding
Traditionally, the main idea in thresholding is to choose a value below which all coefficients, in absolute value, are thrown away. The rejected coefficients are those whose contribution to the noise-free data cannot be fully determined, and therefore we decide to dismiss them. Nevertheless, there is always a chance that some samples of the pure noise are above the threshold and that some noise-free coefficients get buried among the discarded coefficients due to the additive noise. The existing thresholding methods usually focus on a class of signals, such as images, to provide a proper threshold, and evaluate the quality of the thresholds based on some performance criterion such as the mean square error (MSE). Among the thresholding methods, the ones with more general assumptions on the noise-free data are VisuShrink and SureShrink, which rely on some form of piecewise smoothness of the wavelet coefficients. The rest of the thresholding methods are application oriented; examples are BayesShrink and other image denoising methods that assume properties such as a generalized Gaussian distribution (GGD) for the noise-free image itself. Another example is the adaptive denoising approach in [4] for neural network applications.
Here, we present a general-purpose thresholding approach that does not need to exploit any particular property of the noise-free data itself. Consequently, the thresholding approach should be invariant with respect to the order of the data: if the data is reordered and put in ascending order based on its absolute value, the optimum threshold should remain the same. In this case, choosing the $k$th coefficient of the ordered data as the threshold is equivalent to throwing away the first $k$ coefficients.
For a general-purpose threshold, instead of concentrating on the properties of the remaining coefficients after thresholding, it is logical to focus on the dismissed coefficients. These coefficients are discarded because they are attributed to the noise or to very noisy coefficients. It is rational to equivalently state that these coefficients are discarded since they behave similarly to a set of coefficients that can be generated by an associated Gaussian distribution of the additive noise. In the following section we present one of the signatures of a set that is generated by this Gaussian distribution.
Additive Noise Variance: Thresholding methods, such as SureShrink and VisuShrink, rely heavily on the value of the additive noise variance $\sigma^2$. In most practical applications this value is not known. The estimate of the standard deviation is usually provided by the MAD approach, where $\hat{\sigma} = \mathrm{MAD}/0.6745$ and MAD is the median of the absolute values of the normalized fine-scale wavelet coefficients [2]. The method provided in this paper also requires an estimate of the noise variance, and we use the MAD method for this estimation.
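The MAD estimator can be sketched as follows; the constant 0.6745 is the 0.75 quantile of the standard Gaussian, which makes the estimator consistent for Gaussian noise. This is a minimal sketch assuming NumPy; the synthetic draw stands in for real finest-scale wavelet coefficients.

```python
import numpy as np

def mad_sigma(fine_scale_coeffs):
    """MAD noise estimate: sigma_hat = median(|c|) / 0.6745, where 0.6745
    is the 0.75 quantile of the standard Gaussian distribution."""
    return float(np.median(np.abs(fine_scale_coeffs)) / 0.6745)

# Sanity check on synthetic Gaussian "finest-scale coefficients" with sigma = 2:
# the estimate lands close to 2 for a reasonably long record.
rng = np.random.default_rng(1)
c = rng.normal(0.0, 2.0, size=4096)
sigma_hat = mad_sigma(c)
```

Because the median is insensitive to a few large outliers, the estimate stays close to the noise level even when some coefficients carry strong signal components.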
3 Additive Noise Signature
Consider the additive noise random variable $V$ in (7) with zero mean and finite variance. Define the signature function for any value $v$ and $t \in \mathbb{R}$ as $g(v, t)$, such that the mean and variance of this function over $V$ are finite values:

$$E_V[g(V, t)] < \infty, \qquad (10)$$
$$\operatorname{var}_V[g(V, t)] < \infty. \qquad (11)$$

The signature for samples of a random process of length $N$, $v = [v_1, \ldots, v_N]$, with IID members that have the same distribution as $V$, is defined as

$$G_N(t) = \frac{1}{N} \sum_{i=1}^{N} g(v_i, t). \qquad (12)$$

It follows that the expected value and variance of the signature are

$$E[G_N(t)] = E_V[g(V, t)], \qquad (13)$$
$$\operatorname{var}(G_N(t)) = \frac{1}{N} \operatorname{var}_V(g(V, t)). \qquad (14)$$
Details are shown in Appendix 7. For a large data length, while the mean is a finite fixed value, the variance becomes small. The use of such signatures in invalidation of the additive noise is explored in the following example.
3.1 Signature Example: Absolute Noise Sorting (ANS)
Consider a noise signature of the following form:

$$g(v, t) = \begin{cases} 1 & |v| \le t, \\ 0 & \text{otherwise.} \end{cases} \qquad (15)$$

In this case, for the signature of the IID random process in (12) we have [10, 11, 12]

$$E[G_N(t)] = F_{|V|}(t), \qquad \operatorname{var}(G_N(t)) = \frac{1}{N}\, F_{|V|}(t)\bigl(1 - F_{|V|}(t)\bigr),$$

where $F_{|V|}(t)$ is the cdf of the absolute value of the additive noise,

$$F_{|V|}(t) = P(|V| \le t) = \operatorname{erf}\!\left(\frac{t}{\sigma\sqrt{2}}\right). \qquad (16)$$
Details are provided in Appendix 8.
For each $t$, the value of $G_N(t)$ is $k/N$, where $k$ is the number of samples of $v$ with absolute values smaller than $t$. Equivalently, when the samples are sorted by absolute value, the $k$th sorted value is the largest one that is smaller than $t$. Figure 1 illustrates the effect of sorting and the role of the small variance in providing a noise signature. The figure shows the behavior of 100 samples of Gaussian noise with unit variance and length 2048. As the top figure shows, with a very high probability, the values of this data are bounded within a few standard deviations of zero.

However, if we sort the same data based on its absolute value, as in the middle figure, the values collapse into a much denser area. Such behavior can be explained by the ANS signature as follows. The bottom figure shows the result of swapping the horizontal and vertical axes of the middle figure. Here the horizontal axis is $t$ and the vertical axis shows 100 samples of $G_N(t)$. As expected, these values are around the mean $F_{|V|}(t)$ with variance $F_{|V|}(t)(1 - F_{|V|}(t))/N$. This allows us to define proper confidence regions, with a high probability $p$, around the noise signature. Due to the noise signature structure, these regions are considerably smaller than the corresponding confidence regions of the Gaussian distribution of the additive noise itself.
Therefore, for each $t$ and for a high confidence probability $p$, we can find boundaries $B_{\mathrm{down}}(t)$ and $B_{\mathrm{up}}(t)$ around the mean value of $G_N(t)$ such that

$$P\bigl(B_{\mathrm{down}}(t) \le G_N(t) \le B_{\mathrm{up}}(t)\bigr) \ge p. \qquad (17)$$

For example, Figure 2 shows the bounds on $G_N(t)$ for confidence probability p = 0.999997.
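The concentration that Figures 1 and 2 illustrate is easy to reproduce numerically. The sketch below (assuming NumPy; the sample sizes mirror the figures but are otherwise arbitrary) draws 100 realizations of sorted absolute Gaussian noise and measures how little each sorted value varies across realizations:

```python
import numpy as np

# 100 independent realizations of N = 2048 unit-variance Gaussian samples,
# each realization sorted by absolute value.
rng = np.random.default_rng(3)
runs, N = 100, 2048
sorted_abs = np.sort(np.abs(rng.normal(0.0, 1.0, size=(runs, N))), axis=1)

# Across realizations, the k-th sorted value barely moves: its standard
# deviation is far below the unit standard deviation of the raw samples,
# which is what makes the sorted curve usable as a noise signature.
spread = sorted_abs.std(axis=0)
```

Only the extreme tail (the few largest order statistics) retains noticeable spread; the bulk of the sorted curve is nearly deterministic, exactly as the narrow band in Figure 2 suggests.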
3.2 Confidence Region and Gaussian Estimate
While it is straightforward to make a table of values of the boundaries shown in Figure 2, it is possible to use a Gaussian estimate for the distribution of $G_N(t)$ for large enough values of $N$, since $G_N(t)$ in (12) is an average of $N$ independent variables. Using the Central Limit Theorem for this distribution, we have

$$G_N(t) \sim \mathcal{N}\!\left(F_{|V|}(t),\ \frac{1}{N} F_{|V|}(t)\bigl(1 - F_{|V|}(t)\bigr)\right). \qquad (19)$$

This estimates the boundaries in (17) to be

$$B_{\mathrm{up}}(t) = F_{|V|}(t) + \lambda \sqrt{\frac{1}{N} F_{|V|}(t)\bigl(1 - F_{|V|}(t)\bigr)}, \qquad (20)$$
$$B_{\mathrm{down}}(t) = F_{|V|}(t) - \lambda \sqrt{\frac{1}{N} F_{|V|}(t)\bigl(1 - F_{|V|}(t)\bigr)}. \qquad (21)$$
The choice of $\lambda$ should be such that the probability $p$ is close to one while, at the same time, the boundary is not very loose. In statistics, the three-sigma rule (or empirical rule) states that for a normal distribution almost all values lie within three standard deviations of the mean.
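Under the Gaussian estimate, the band in (20)-(21) reduces to a one-line formula. The sketch below assumes $F_{|V|}(t) = \operatorname{erf}(t/(\sigma\sqrt{2}))$ as in (16), with `lam` playing the role of $\lambda$ (the three-sigma choice by default):

```python
import numpy as np
from math import erf, sqrt

def noise_bounds(t, sigma, N, lam=3.0):
    """Confidence band for the sorted-noise signature G_N(t) (Eqs. 20-21):
    F(t) +/- lam * sqrt(F(t)(1 - F(t)) / N), where F is the cdf of |V|."""
    F = erf(t / (sigma * sqrt(2.0)))          # P(|V| <= t) for V ~ N(0, sigma^2)
    half = lam * sqrt(F * (1.0 - F) / N)
    return F - half, F + half
```

Note how the band width shrinks like $1/\sqrt{N}$ and vanishes at both ends, where $F_{|V|}(t)$ approaches 0 or 1.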
4 Noise Invalidation with Absolute Coefficient Sorting (ACS)
The coefficients of our observed data are of the form $d_i = \theta_i + v_i$, which has the same structure as the noise except for its mean, which is the noiseless coefficient $\theta_i$. In this case

$$E[g(d_i, t)] = F_{|d_i|}(t), \qquad (23)$$
$$\operatorname{var}(g(d_i, t)) = F_{|d_i|}(t)\bigl(1 - F_{|d_i|}(t)\bigr), \qquad (24)$$

where

$$F_{|d_i|}(t) = P(|d_i| \le t) = \Phi\!\left(\frac{t - \theta_i}{\sigma}\right) - \Phi\!\left(\frac{-t - \theta_i}{\sigma}\right), \qquad (25)$$

with $\Phi$ the standard Gaussian cdf.
Details are provided in Appendix 9.
Sorting the coefficients in this case is analogous to calculation of

$$G_N^d(t) = \frac{1}{N} \sum_{i=1}^{N} g(d_i, t), \qquad (26)$$

which, according to (23) and (24), has the following mean and variance:

$$E[G_N^d(t)] = \frac{1}{N} \sum_{i=1}^{N} F_{|d_i|}(t), \qquad (27)$$
$$\operatorname{var}(G_N^d(t)) = \frac{1}{N^2} \sum_{i=1}^{N} F_{|d_i|}(t)\bigl(1 - F_{|d_i|}(t)\bigr). \qquad (28)$$
Since the value of $F_{|d_i|}(t)$ in (25) is bounded between zero and one, the variance of $G_N^d(t)$ is much less than its mean for large values of $N$. Therefore, a dense area covers the sorted data with a high probability. This area becomes distinguishable from the area covered by the sorted noise-only signal as $t$ grows and as the nonzero coefficients become effective. This behavior is illustrated in Figure 5, which shows the area covered by the sorted noisy data for the Blocks signal when SNR = 5. The figure also shows the behavior of the sorted noise-only data. As can be seen, with probability $p$ there is no overlap between the sorted noise and the sorted noisy data after a certain value of $t$.
4.1 Noise Invalidation in Application
Using the noise sorting signature, it is possible to invalidate the noisy coefficients with a high confidence. Figure 6 shows the application of the method. The confidence region for the noise-only data is available upon knowing or estimating the noise variance. As the sorted absolute noisy data leaves the noise confidence region, it assures that the coefficients are becoming more effective than the noisy part of the data. The largest value for which the departure occurs is the optimum threshold $T^{*}$ for the noise invalidation problem:

$$T^{*} = \max\bigl\{\, t \;:\; B_{\mathrm{down}}(t) \le G_N^d(t) \le B_{\mathrm{up}}(t) \,\bigr\}. \qquad (29)$$
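A minimal sketch of this invalidation rule: sort the absolute coefficients, compare the empirical signature $k/N$ at each sorted value against the noise-only band, and take the threshold at the point where the sorted data departs from the band. The function below is an illustrative reading of the rule, not the authors' exact implementation; the choice of `lam` and the one-sided departure test are assumptions.

```python
import numpy as np
from math import erf, sqrt

def nide_threshold(d, sigma, lam=3.0):
    """Illustrative NIDe-style threshold (cf. Eq. 29): the sorted absolute
    value at which the data departs from the noise-only confidence band."""
    N = len(d)
    a = np.sort(np.abs(d))                        # sorted absolute coefficients
    k = np.arange(1, N + 1) / N                   # empirical signature at each sorted value
    F = np.array([erf(t / (sigma * sqrt(2.0))) for t in a])
    lower = F - lam * np.sqrt(F * (1.0 - F) / N)  # lower edge of the noise band
    outside = k < lower      # fewer small coefficients than noise alone would give
    if not outside.any():
        return float(a[-1])  # everything behaves like noise: threshold out all coefficients
    return float(a[np.argmax(outside)])           # value at the departure point

# Example: 2048 coefficients, 20 of which carry a strong signal component.
rng = np.random.default_rng(7)
theta = np.zeros(2048)
theta[:20] = 10.0
d = theta + rng.normal(0.0, 1.0, 2048)
T = nide_threshold(d, sigma=1.0)
```

Signal components inflate the magnitudes of their coefficients, so the empirical fraction of coefficients below $t$ drops under the noise-only band; the departure point separates the noise bulk from the signal-dominant coefficients.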
4.2 Colored Noise and Thresholding
Corollary: If the additive noise is colored, the expected values of $G_N(t)$ and $G_N^d(t)$ remain the same as the expected values for the white noise in Section 3.1 and (27). For the variance of the sorted noise we have

$$\operatorname{var}(G_N(t)) = \frac{1}{N^2} \sum_{i=1}^{N} F_{|V|}(t)\bigl(1 - F_{|V|}(t)\bigr) \qquad (30)$$
$$\qquad\qquad + \frac{1}{N^2} \sum_{i \ne j} \operatorname{cov}\bigl(g(v_i, t),\, g(v_j, t)\bigr), \qquad (31)$$

where

$$\operatorname{cov}\bigl(g(v_i, t), g(v_j, t)\bigr) = P(|v_i| \le t,\, |v_j| \le t) - F_{|V|}(t)^2, \qquad (32)$$

with $\rho_{ij} = r(|i - j|)/\sigma^2$ the normalized autocorrelation of the noise.
Proof: In Appendix 10.
The variance for the sorted noisy data is also provided in Appendix 10. As the variance indicates, the wider the autocorrelation of the noise process, the wider the signature region of the noise and the noisy data, and therefore, as expected, it may become more difficult to distinguish the data from the noise.
5 Simulation Results
We perform our denoising method on noisy versions of six standard signals, Blocks, Bumps, HeavySin, Doppler, QuadChirp, and MishMash, which are the test signals introduced in [1]. A five-level decomposition with the Haar wavelet is chosen for this experiment. A high confidence probability, as discussed in Section 3.2, is used for the noise invalidation region. Figure 7 shows the six signals and their coefficients. As this figure confirms, the test signals represent a wide range of possible coefficient structures. For example, Figure 8 shows the coefficient distribution of some of these signals. While signals such as Blocks have very few nonzero coefficients and many coefficients close to zero, signals such as MishMash have more uniformly distributed coefficients. Blocks and MishMash represent two extreme structures, while QuadChirp has a combined structure of both.
We compare the proposed method with VisuShrink and SureShrink, which are the more general-purpose thresholding approaches. SURE-LET and BayesShrink, on the other hand, are image denoising methods that also perform well on one-dimensional signals, so we consider these methods for comparison as well. We compare the performance of the methods based on their normalized reconstruction mean square error (MSE), which is
$$\mathrm{MSE} = \frac{\lVert \hat{x} - \bar{x} \rVert^{2}}{\lVert \bar{x} \rVert^{2}}, \qquad (33)$$

where $\hat{x}$ in (9) is the resulting denoised data and $\bar{x}$ is the noise-free data. Table 1 provides the MSE of the compared methods. As the table shows, NIDe performs better than the other approaches in most of the cases.
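Assuming (33) is the energy-normalized squared error, it can be computed as:

```python
import numpy as np

def normalized_mse(x_hat, x_bar):
    """Energy-normalized reconstruction error (cf. Eq. 33):
    ||x_hat - x_bar||^2 / ||x_bar||^2."""
    x_hat = np.asarray(x_hat, dtype=float)
    x_bar = np.asarray(x_bar, dtype=float)
    return float(np.sum((x_hat - x_bar) ** 2) / np.sum(x_bar ** 2))
```

Normalizing by the signal energy makes the scores comparable across test signals of different amplitudes.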
The methods are compared for a range of SNRs for both white and colored noise. The autocorrelation of the considered colored noise is shown in Figure 9. The results, averaged over 100 runs, are provided in Table 1. It is important to note that the MSEs for all the methods have small standard deviations, much smaller than the MSE itself. As the table shows, the three methods NIDe, BayesShrink, and SURE-LET are comparable in the presence of additive white noise. It is worth mentioning that NIDe outperforms the other two methods for the sparser signals, even over a wider range of SNRs than the range shown in the table. For the non-sparse ones, such as QuadChirp and MishMash, however, SURE-LET performs slightly better than the other two methods with additive white noise. For additive colored noise, NIDe consistently outperforms the other methods.
Figure 10 shows the denoised versions of Blocks with these methods for white and colored additive noise. As the figure shows and Table 1 confirms, the denoised data with NIDe and BayesShrink in the presence of white noise are comparable, while the other methods have larger MSE. In the presence of colored noise, however, NIDe outperforms the other methods.
Additive white noise:
| VisuShrink | SureShrink | BayesShrink | SURE-LET | NIDe |
Blocks | |||||
SNR=1 | 0.168 | 0.657 | 0.111 | 0.124 | 0.066 |
SNR=4 | 0.124 | 0.259 | 0.058 | 0.065 | 0.042 |
SNR=8 | 0.086 | 0.032 | 0.026 | 0.027 | 0.020 |
SNR=10 | 0.072 | 0.015 | 0.015 | 0.017 | 0.013 |
SNR=14 | 0.048 | 0.007 | 0.007 | 0.007 | 0.005 |
Bumps | |||||
SNR=1 | 0.155 | 0.427 | 0.098 | 0.122 | 0.091 |
SNR=4 | 0.127 | 0.065 | 0.073 | 0.063 | 0.070 |
SNR=8 | 0.096 | 0.026 | 0.023 | 0.027 | 0.025 |
SNR=10 | 0.083 | 0.023 | 0.019 | 0.018 | 0.017 |
SNR=14 | 0.062 | 0.017 | 0.012 | 0.009 | 0.009 |
HeavySin | |||||
SNR=1 | 0.137 | 0.670 | 0.029 | 0.115 | 0.028 |
SNR=4 | 0.096 | 0.268 | 0.017 | 0.057 | 0.017 |
SNR=8 | 0.063 | 0.032 | 0.010 | 0.023 | 0.009 |
SNR=10 | 0.050 | 0.007 | 0.016 | 0.015 | 0.007 |
SNR=14 | 0.035 | 0.004 | 0.003 | 0.006 | 0.003 |
Doppler | |||||
SNR=1 | 0.798 | 0.098 | 0.069 | 0.126 | 0.078 |
SNR=4 | 0.659 | 0.085 | 0.085 | 0.067 | 0.076 |
SNR=8 | 0.493 | 0.078 | 0.036 | 0.030 | 0.032 |
SNR=10 | 0.424 | 0.076 | 0.029 | 0.020 | 0.024 |
SNR=14 | 0.308 | 0.071 | 0.012 | 0.010 | 0.009 |
QuadChirp | |||||
SNR=1 | 0.931 | 0.782 | 0.466 | 0.447 | 0.637 |
SNR=4 | 0.903 | 0.771 | 0.284 | 0.276 | 0.357 |
SNR=8 | 0.864 | 0.757 | 0.129 | 0.131 | 0.151 |
SNR=10 | 0.841 | 0.753 | 0.086 | 0.086 | 0.096 |
SNR=14 | 0.780 | 0.750 | 0.038 | 0.037 | 0.035 |
MishMash | |||||
SNR=1 | 0.926 | 0.523 | 0.517 | 0.462 | 0.693 |
SNR=4 | 0.906 | 0.430 | 0.318 | 0.286 | 0.374 |
SNR=8 | 0.865 | 0.432 | 0.146 | 0.136 | 0.156 |
SNR=10 | 0.836 | 0.623 | 0.095 | 0.089 | 0.099 |
SNR=14 | 0.752 | 0.641 | 0.040 | 0.039 | 0.039 |
Additive colored noise:
| VisuShrink | SureShrink | BayesShrink | SURE-LET | NIDe |
Blocks | |||||
SNR=1 | 0.406 | 0.695 | 0.562 | 6.212 | 0.385 |
SNR=4 | 0.212 | 0.334 | 0.286 | 3.107 | 0.210 |
SNR=8 | 0.098 | 0.124 | 0.121 | 1.214 | 0.092 |
SNR=10 | 0.063 | 0.074 | 0.079 | 0.753 | 0.060 |
SNR=14 | 0.030 | 0.035 | 0.037 | 0.295 | 0.024 |
Bumps | |||||
SNR=1 | 0.455 | 0.648 | 0.559 | 6.443 | 0.427 |
SNR=4 | 0.279 | 0.305 | 0.293 | 3.220 | 0.245 |
SNR=8 | 0.105 | 0.115 | 0.125 | 1.273 | 0.098 |
SNR=10 | 0.072 | 0.075 | 0.081 | 0.773 | 0.072 |
SNR=14 | 0.035 | 0.038 | 0.035 | 0.299 | 0.030 |
HeavySin | |||||
SNR=1 | 0.402 | 0.712 | 0.531 | 6.527 | 0.334 |
SNR=4 | 0.209 | 0.328 | 0.270 | 3.256 | 0.173 |
SNR=8 | 0.085 | 0.117 | 0.105 | 1.280 | 0.071 |
SNR=10 | 0.057 | 0.069 | 0.068 | 0.803 | 0.047 |
SNR=14 | 0.027 | 0.023 | 0.028 | 0.317 | 0.020 |
Doppler | |||||
SNR=1 | 0.446 | 0.399 | 0.569 | 6.431 | 0.401 |
SNR=4 | 0.330 | 0.236 | 0.293 | 3.201 | 0.235 |
SNR=8 | 0.231 | 0.138 | 0.127 | 1.207 | 0.110 |
SNR=10 | 0.187 | 0.113 | 0.082 | 0.76 | 0.075 |
SNR=14 | 0.133 | 0.089 | 0.037 | 0.038 | 0.030 |
QuadChirp | |||||
SNR=1 | 0.964 | 1.183 | 1.139 | 1.298 | 0.594 |
SNR=4 | 0.978 | 0.965 | 0.489 | 0.536 | 0.326 |
SNR=8 | 0.986 | 0.833 | 0.164 | 0.186 | 0.137 |
SNR=10 | 0.990 | 0.801 | 0.100 | 0.117 | 0.088 |
SNR=14 | 0.980 | 0.771 | 0.039 | 0.051 | 0.037 |
MishMash | |||||
SNR=1 | 0.974 | 1.186 | 1.205 | 1.246 | 0.607 |
SNR=4 | 0.956 | 0.889 | 0.523 | 0.523 | 0.336 |
SNR=8 | 0.961 | 0.727 | 0.174 | 0.180 | 0.142 |
SNR=10 | 0.963 | 0.699 | 0.105 | 0.109 | 0.091 |
SNR=14 | 0.963 | 0.675 | 0.042 | 0.041 | 0.031 |
6 Conclusion
A denoising approach based on direct invalidation of the coefficients is proposed. The invalidation process uses a signature of the additive noise in the form of a probabilistic confidence region. The signature is defined based on the statistical properties of the additive noise and is such that its standard deviation is much smaller than its mean. In this work we provided one example of such a signature, which manifests itself in simply sorting the coefficients. It was shown that such a signature represents the noise in a dense area. The density of the area depends on the noise variance and the data length: the smaller the noise variance and/or the longer the data length, the denser the signature area. This enables us to determine whether each coefficient is noise dominant or data dominant. The theory of the method shows its strength for any type of noise-free signal. The variance of the signature decreases as the data length grows, providing a denser, more distinguishable area. The method denoises in the presence of not only additive white noise but also additive colored noise. This also enables the user to apply the proposed thresholding technique with any non-orthogonal basis. Simulation results confirmed the advantages of the proposed noise invalidation approach in terms of the reconstruction MSE and illustrate its robust performance. While the proposed NIDe approach is a pioneering method for general-purpose thresholding, there seems to be great potential in further analysis, study, and expansion of such invalidation methods for the purpose of denoising. For example, it is worth investigating further functions of different noise distributions that can serve as the noise signature.
In addition, for the cases that the noise-free signal belongs to a particular class of signals, a potential extension would be to combine the statistical properties of the noise-free signal with the statistical structure of the additive noise in order to define noise signatures for the purpose of probabilistic invalidation.
7 Mean and Variance of the Signature for IID noise
8 Mean and Variance of the Sorting Signature for IID Noise
9 Mean and Variance of the Data with Sorting Signature
10 Mean and Variance of the Signature for Colored Noise
In this case the autocorrelation between the zero-mean Gaussian $v_i$s is denoted by $r(|i - j|)$.
$$G_N(t) = \frac{1}{N} \sum_{i=1}^{N} g(v_i, t). \qquad (48)$$

For the expected value of this function we have

$$E[G_N(t)] = F_{|V|}(t), \qquad (49)$$

which is similar to that of the IID additive noise. However, for the variance, since the variance of a sum includes the pairwise covariances, we have

$$\operatorname{var}(G_N(t)) = \frac{1}{N^2} \sum_{i=1}^{N} \operatorname{var}(g(v_i, t)) + \frac{1}{N^2} \sum_{i \ne j} \operatorname{cov}\bigl(g(v_i, t),\, g(v_j, t)\bigr), \qquad (50)$$

where the elements of the second term are

$$\operatorname{cov}\bigl(g(v_i, t), g(v_j, t)\bigr) = P(|v_i| \le t,\, |v_j| \le t) - P(|v_i| \le t)\,P(|v_j| \le t), \qquad (51)$$

in which the second term is simply $F_{|V|}(t)^2$ and the first term is the joint probability $P(|v_i| \le t, |v_j| \le t)$. For the first term, the joint distribution of $v_i$ and $v_j$ is a Gaussian distribution with zero mean and covariance matrix

$$C_{ij} = \sigma^2 \begin{bmatrix} 1 & \rho_{ij} \\ \rho_{ij} & 1 \end{bmatrix}, \qquad (52)$$

where

$$\rho_{ij} = \frac{r(|i - j|)}{\sigma^2}, \qquad (53)$$

with $r(\cdot)$ the noise autocorrelation function. The decomposition of the covariance matrix is as follows:

$$C_{ij} = \sigma^2\, U \begin{bmatrix} 1 + \rho_{ij} & 0 \\ 0 & 1 - \rho_{ij} \end{bmatrix} U^{T}, \qquad U = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}. \qquad (54)$$

Therefore, by the following transformation

$$z_1 = \frac{v_i + v_j}{\sqrt{2}}, \qquad z_2 = \frac{v_i - v_j}{\sqrt{2}}, \qquad (55)$$

the two random variables $z_1$ and $z_2$ are independent, with variances $\sigma^2(1 + \rho_{ij})$ and $\sigma^2(1 - \rho_{ij})$. As a result, the first term of the covariance is the probability of the rotated square region $\{|z_1 + z_2| \le \sqrt{2}\, t,\ |z_1 - z_2| \le \sqrt{2}\, t\}$ under the product of the two independent Gaussian densities of $z_1$ and $z_2$ (Eqs. (56)-(58)).
Figure 11 shows the area considered for the calculation of this probability.
10.1 Noisy Data
By a similar argument, for the signal in the presence of noise we have

$$E[g(d_i, t)] = F_{|d_i|}(t), \qquad (59)$$

where $F_{|d_i|}(t)$ was defined in (25). For the variance of the sorted absolute value of the noisy data, (46) similarly holds. Therefore, the structure of the variance is similar to (50):

$$\operatorname{var}(G_N^d(t)) = \frac{1}{N^2} \sum_{i=1}^{N} \operatorname{var}(g(d_i, t)) + \frac{1}{N^2} \sum_{i \ne j} \operatorname{cov}\bigl(g(d_i, t),\, g(d_j, t)\bigr). \qquad (60)$$

For the covariance in the second term,

$$\operatorname{cov}\bigl(g(d_i, t), g(d_j, t)\bigr) = P(|d_i| \le t,\, |d_j| \le t) - F_{|d_i|}(t)\, F_{|d_j|}(t), \qquad (61)$$

we use (59) to calculate the product term, and the joint probability follows by applying the transformation in (55) to the zero-mean parts $d_i - \theta_i$ and $d_j - \theta_j$ (Eqs. (62)-(64)).
Footnotes
- The inner product of two real vectors $a$ and $b$ is denoted by $\langle a, b \rangle$.
- A probabilistic approach for finding a significant noise-free component in a coefficient is discussed in the data denoising approach of [8]. However, in that approach a Laplacian prior for the noise-free data is assumed. Preliminary work related to the proposed method is presented in [9].
- The error function is $\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-u^2}\, du$. (18)
- The following is known for a normal random variable $X$ with mean $\mu$ and standard deviation $\sigma$: $P(|X - \mu| \le 3\sigma) \approx 0.9973$. (22)
References
- D. L. Donoho and I. M. Johnstone, “Ideal spatial adaption via wavelet shrinkage,” Biometrika, vol. 81, pp. 425-455, 1994.
- D. L. Donoho and I. M. Johnstone, “Adapting to unknown smoothness via wavelet shrinkage,” Journal of the American Statistical Assoc., vol. 90, pp. 1200-1224, 1995.
- T. Blu and F. Luisier, “The SURE-LET Approach to Image Denoising,” IEEE Transactions on Image Processing, vol. 16, no. 11, pp. 2778-2786, November 2007.
- X-P. Zhang, “Thresholding neural network for adaptive noise reduction,” IEEE Transactions on Neural Networks, vol. 12, no. 3, pp. 567-584, May 2001.
- S.G. Chang, B. Yu , and M. Vetterli, “Adaptive wavelet thresholding for image denoising and compression,” IEEE Transactions on Image Processing, vol. 9, no. 9, pp. 1532-1546, Sept. 2000.
- S. G. Chang, B. Yu, and M. Vetterli, “Spatially adaptive wavelet thresholding with context modeling for image denoising,” IEEE Transactions on Image Processing, vol. 9, no. 9, pp. 1522-1531, Sept. 2000.
- F. Abramovitch, T. Sapatinas, and B.W. Silverman, “Wavelet thresholding via a Bayesian approach,” J. Roy. Statist. Soc., ser. B, vol. 60, no. 4, pp. 725-749, 1998.
- A. Pizurica and W. Philips, “Estimating the probability of the presence of a signal of interest in multiresolution single- and multiband image denoising”, IEEE Transactions on Image Processing, vol. 15, no. 3, pp. 654-665 , March 2006.
- S. Beheshti, N. Nikvand, and X.N. Fernando, “Soft thresholding by noise invalidation,” Proceedings of the 24th Biennial Symposium on Communications, pp. 235-238, 2009.
- H. A. David and H. N. Nagaraja, Order Statistics, 3rd Edition, Wiley, New Jersey, 2003.
- L. Wasserman, All of Nonparametric Statistics, Springer, New York, 1st Edition, 2005.
- Y. Blanco, S. Zazo, and J. C. Principe, “Alternative statistical Gaussianity measure using the cumulative density function”, Proceedings of the International Workshop on ICA and Blind Signal Separation, pp. 537-542, 2000.