Noise Invalidation Denoising

Soosan Beheshti, Masoud Hashemi, and Xiao-Ping Zhang are with the Department of Electrical and Computer Engineering, Ryerson University, Toronto, ON, M5B 2K3, Canada (S. Beheshti, corresponding author, phone: 416 979 5000 ext. 4906). Nima Nikvand is with the Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON, N2L 3G1, Canada.

A denoising technique based on noise invalidation is proposed. The adaptive approach derives a noise signature from the noise order statistics and utilizes the signature to denoise the data. The novelty of this approach is in presenting a general-purpose denoising method, in the sense that it does not need to employ any particular assumption on the structure of the noise-free signal, such as data smoothness or sparsity of the coefficients. An advantage of the method is in denoising the corrupted data in any complete basis transformation (orthogonal or non-orthogonal). Experimental results show that the proposed method, called Noise Invalidation Denoising (NIDe), outperforms existing denoising approaches in terms of mean square error (MSE).


Index Terms: Thresholding, order statistics, confidence region.

I Introduction

Data denoising approaches are well known methods with presence in diverse applications, and have been studied and developed in research areas ranging from communications to biomedical signal analysis. In denoising techniques, multi-resolution representations of the data are generally used. For example, wavelet shrinkage is based on rejecting those wavelet coefficients that are smaller than a certain value and keeping the remaining coefficients. Thus, the problem of removing noise from a set of observed data is transformed into finding a proper threshold for the data coefficients. The pioneering shrinkage methods, such as VisuShrink and SureShrink, propose thresholds that are functions of the noise variance and the data length [1, 2]. Over the past fifteen years, several thresholding approaches such as [3, 4, 5, 6, 7] have been developed. These methods provide optimum thresholds by focusing on certain properties of the noise-free signal, and they are proposed for particular applications, mostly for the purpose of image denoising. Unlike these approaches, the method presented in this paper focuses only on the properties of the additive noise. By relying on the noise statistics, the method defines a probabilistic region of confidence for the noise coefficients. Consequently, it validates those observed coefficients that are outside the noise confidence region and contain noiseless dominant parts. In the presence of additive colored noise, the required data preprocessing and/or whitening filters may damage the desired noiseless data. Dealing with the colored noise without whitening is especially critical if the noise correlation is due to the non-orthogonality of the basis. The proposed method denoises such data by using only the colored noise statistics in the invalidation step.
The principle behind the proposed approach is simple yet powerful, as it takes advantage of the properties of the additive noise and demonstrates efficiency in general-purpose denoising applications.

The paper is organized as follows. Section II formulates the considered problem and revisits the philosophy of thresholding. Section III derives a noise signature from the noise statistics. Section IV presents the proposed noise invalidation approach. Simulation results are provided in Section V, and conclusions are drawn in Section VI.

Notations: We use capital letters for random variables. Samples of these variables are shown with the corresponding small letters.

II Problem Formulation and Motivation

A noise-free data vector x of length N is corrupted by an additive white Gaussian random process w with zero mean and variance σ_w². The observed data is

y = x + w

and can be expressed in terms of a desired orthonormal basis S = {s_1, …, s_N} such that (the inner product of real vectors a and b is denoted by ⟨a, b⟩):

y = Σ_{i=1}^{N} c_i s_i,   c_i = ⟨y, s_i⟩,

where s_i is an element of the orthonormal basis and the desired unavailable coefficient is

θ_i = ⟨x, s_i⟩.
Therefore, the following holds:

c_i = θ_i + v_i,

and, thanks to the orthonormality of the selected basis S, the noise coefficients

v_i = ⟨w, s_i⟩

in the available coefficients c_i are also independent identically distributed (IID) random variables with the same mean and variance as the noise:

v_i ~ N(0, σ_w²).
The observed coefficients c_i are soft thresholded with threshold T:

θ̂_i = sign(c_i) · max(|c_i| − T, 0).

The estimate of the noise-free data with this threshold is

x̂ = Σ_{i=1}^{N} θ̂_i s_i.

We provide the optimum threshold by a noise invalidation process.
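The soft-thresholding estimate above can be sketched in a few lines; `soft_threshold`, `coeffs`, and `T` are illustrative names, not from the paper:

```python
import numpy as np

def soft_threshold(coeffs, T):
    """Soft thresholding: shrink every coefficient toward zero by T;
    coefficients with magnitude below T become exactly zero."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - T, 0.0)

c = np.array([-3.0, -0.5, 0.2, 1.5, 4.0])
print(soft_threshold(c, 1.0))
```

Reconstructing the denoised signal then amounts to transforming the thresholded coefficients back through the chosen basis.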

II-A Main Idea of Thresholding

Traditionally, the main idea in thresholding is to choose a value below which all coefficients, in absolute value, are thrown away. The rejected coefficients are the ones whose contribution to the noise-free data we cannot fully determine, and we therefore decide to dismiss them. Nevertheless, there is always a chance that some samples of the pure noise are above the threshold and that some noise-free coefficients get buried among the discarded coefficients due to the additive noise. Existing thresholding methods usually focus on a class of signals, such as images, to provide a proper threshold, and evaluate the quality of the thresholds based on some performance criterion such as the mean square error (MSE). Among the thresholding methods, the ones with the most general assumptions on the noise-free data are VisuShrink and SureShrink, which rely on some form of piecewise smoothness of the wavelet coefficients. The rest of the thresholding methods are application oriented, for example BayesShrink or other image denoising methods that use properties such as a generalized Gaussian distribution (GGD) for the noise-free image itself. Another example is the adaptive denoising approach in [4] for neural network applications.

Here, we present a general-purpose thresholding approach that does not need to exploit any particular property of the noise-free data itself. Consequently, the thresholding approach should be invariant with respect to the order of the data. So if the data is reordered and put in ascending order based on its absolute value, the optimum threshold should remain the same. In this case, choosing the k-th coefficient of the ordered data as the threshold is equivalent to throwing away the first k coefficients.

For a general-purpose threshold, instead of concentrating on the properties of the remaining coefficients after thresholding, it is logical to focus on the dismissed coefficients. These coefficients are discarded because they are attributed to noise or are very noisy. It is rational to equivalently state that these coefficients are discarded since they behave similarly to a set of coefficients that can be generated by the associated Gaussian distribution of the additive noise. In the following section we present one of the signatures of a set generated by this Gaussian distribution. (A probabilistic approach for finding a significant noise-free component in a coefficient is discussed in the data denoising approach of [8]; however, that approach assumes a Laplacian prior for the noise-free data. Preliminary work related to the proposed method is presented in [9].)

Additive Noise Variance: Thresholding methods such as SureShrink and VisuShrink rely heavily on the value of the additive noise variance σ_w². In most practical applications this value is not known. The estimate of the standard deviation is usually provided by the MAD approach, σ̂_w = MAD/0.6745, where MAD is the median of the absolute values of the finest-scale wavelet coefficients [2]. The method provided in this paper also requires an estimate of the noise variance, and we use the MAD method for this estimation.
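A minimal sketch of the MAD estimate (assuming the standard Gaussian consistency constant 0.6745; `detail_coeffs` stands in for the finest-scale wavelet coefficients):

```python
import numpy as np

def mad_sigma(detail_coeffs):
    """Robust noise std estimate: median(|d_i|) / 0.6745,
    valid for zero-mean Gaussian noise."""
    return np.median(np.abs(detail_coeffs)) / 0.6745

# sanity check on pure Gaussian noise with known sigma = 2.0
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 2.0, size=4096)
print(mad_sigma(noise))  # should be close to 2.0
```

The median makes the estimate robust to the few large signal-bearing coefficients that may be present at the finest scale.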

III Additive Noise Signature

Consider the additive noise random variable V in (7) with zero mean and finite variance. Define a signature function g(V, t), for any value of t, such that the mean and variance of this function over V are finite:

m(t) = E[g(V, t)],   σ²(t) = Var(g(V, t)).
The signature for samples of a random process of length N, {v_1, …, v_N}, with IID members that have the same distribution as V, is defined as

G_N(t) = (1/N) Σ_{i=1}^{N} g(v_i, t).
It follows that the expected value and variance of the signature are

E[G_N(t)] = m(t),   Var(G_N(t)) = σ²(t)/N.

Details are shown in Appendix A. For a large data length, while the mean is a finite fixed value, the variance becomes small. The use of such signatures in invalidation of the additive noise is explored with the following example.

III-A Signature Example: Absolute Noise Sorting (ANS)

Consider a noise signature of the following form:

g(v, t) = 1 if |v| ≤ t, and g(v, t) = 0 otherwise.
In this case, for the signature of the IID random process defined above we have [10, 11, 12]:

E[G_N(t)] = F(t),   Var(G_N(t)) = F(t)(1 − F(t))/N,

where F is the cdf of the absolute value of the additive noise,

F(t) = Pr(|V| ≤ t).
Details are provided in Appendix B.

For each t, the value of N·G_N(t) is the number of samples of {v_i} with absolute values smaller than t. Equivalently, when the samples are sorted by absolute value, the k-th value is the largest one smaller than t when k = N·G_N(t). Figure 1 illustrates the effect of sorting and the role of the small variance in providing a noise signature. The figure shows the behavior of 100 samples of Gaussian noise with unit variance and length 2048. As the top figure shows, with a very high probability, the values of this data are bounded within a few standard deviations of zero.

Fig. 1: Top: 100 runs of a zero-mean Gaussian distribution with unit variance and length 2048. Middle: the same 100 runs sorted by absolute value. Bottom: the middle figure with its vertical and horizontal axes swapped.

However, if we sort the same data based on its absolute value, as in the middle figure, the values collapse into a much denser area. Such behavior can be explained by the ANS signature as follows. The bottom figure shows the result of swapping the horizontal and vertical axes of the middle figure. Here the horizontal axis is t and the vertical axis shows 100 samples of G_N(t). As expected, these values concentrate around the mean F(t) with variance F(t)(1 − F(t))/N. This allows us to define proper confidence regions, with a high probability p, around the noise signature. Due to the noise signature structure, these regions are considerably smaller than the corresponding confidence regions of the Gaussian distribution of the additive noise itself.

Fig. 2: Solid line: mean of the noise signature. Dashed lines: upper and lower bounds with confidence probability 0.999997.

Therefore, for each t and for a high confidence probability p, we can find bounds L_p(t) and U_p(t) around the mean value of G_N(t) such that

Pr( L_p(t) ≤ G_N(t) ≤ U_p(t) ) ≥ p.

For example, Figure 2 shows the bounds on G_N(t) for confidence probability p = 0.999997.
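The concentration of the sorted absolute noise can be verified numerically. The sketch below (illustrative, not the authors' code) checks that for sorted |v|, the empirical index fraction k/N stays close to F evaluated at the k-th sorted value, with deviations on the order of 1/sqrt(N):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
N, sigma = 2048, 1.0
v = rng.normal(0.0, sigma, size=N)
sorted_abs = np.sort(np.abs(v))

# F(t) = Pr(|V| <= t) for zero-mean Gaussian noise with std sigma
F = np.array([erf(t / (sigma * sqrt(2.0))) for t in sorted_abs])

# the k-th sorted value satisfies F(|v|_(k)) ~ k/N, with standard
# deviation about sqrt(F(1 - F)/N) -- small for large N
k_over_N = np.arange(1, N + 1) / N
deviation = np.abs(F - k_over_N)
print(deviation.max())  # a few times 1/sqrt(N), well below 0.1
```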

III-B Confidence Region and Gaussian Estimate

While it is straightforward to tabulate the boundaries shown in Figure 2, it is also possible to use a Gaussian estimate for the distribution of G_N(t) for large enough N, since G_N(t) is an average of N independent variables. Using the Central Limit Theorem for this distribution, we have (erf denotes the error function, erf(x) = (2/√π) ∫₀ˣ e^{−u²} du):

G_N(t) ≈ N( F(t), F(t)(1 − F(t))/N ).
This estimates the boundaries in (17) to be

L_p(t) = F(t) − λ √(F(t)(1 − F(t))/N),   U_p(t) = F(t) + λ √(F(t)(1 − F(t))/N).
The choice of λ should be such that the probability p is close to one while the boundary is not very loose. In statistics, the three-sigma rule (or empirical rule) states that for a normal distribution almost all values lie within three standard deviations of the mean. For a better quality measure, the six-sigma approach increases the number of standard deviations to 4.5 (equivalently, λ = 4.5). Consequently, we suggest choosing λ = 4.5. Interestingly, our experimental observation shows that the threshold associated with λ = 4.5 provides the optimum threshold with respect to MSE in 90% of cases.
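A sketch of the resulting band computation, with λ = 4.5 as suggested above (function and argument names are illustrative):

```python
import numpy as np
from math import erf, sqrt

def noise_confidence_band(t, sigma, N, lam=4.5):
    """Upper/lower bounds on the sorted-noise signature at threshold t,
    using the Gaussian (CLT) estimate of the signature distribution."""
    F = erf(t / (sigma * sqrt(2.0)))          # cdf of |V| at t
    half_width = lam * sqrt(F * (1.0 - F) / N)
    return max(F - half_width, 0.0), min(F + half_width, 1.0)

lo, hi = noise_confidence_band(t=1.0, sigma=1.0, N=2048)
print(lo, hi)  # band centred on F(1.0) = erf(1/sqrt(2)), about 0.68
```

Note how the band width shrinks as 1/sqrt(N), which is what makes the signature region so much tighter than the confidence region of the raw Gaussian noise.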

IV Noise Invalidation with Absolute Coefficient Sorting (ACS)

Each coefficient of the observed data has the form c_i = θ_i + v_i, which has the same structure as a noise coefficient except that its mean is the noiseless coefficient θ_i. In this case

E[g(c_i, t)] = Pr(|c_i| ≤ t) = Pr(|θ_i + v_i| ≤ t),

Var(g(c_i, t)) = Pr(|c_i| ≤ t) (1 − Pr(|c_i| ≤ t)).

Details are provided in Appendix C.

Figures 3 and 4 show typical behaviors of this expected value as the noiseless coefficient and the noise variance change.

Fig. 3: Expected value of the signature for various values of the noiseless coefficient, for a fixed additive noise variance.

Fig. 4: Expected value of the signature for a fixed noiseless coefficient and two different noise standard deviations.

Sorting the coefficients in this case is analogous to evaluating the signature of the observed coefficients,

G_N(t) = (1/N) Σ_{i=1}^{N} g(c_i, t),

which, according to (23) and (24), has the following mean and variance:

E[G_N(t)] = (1/N) Σ_{i=1}^{N} Pr(|c_i| ≤ t),   Var(G_N(t)) = (1/N²) Σ_{i=1}^{N} Pr(|c_i| ≤ t)(1 − Pr(|c_i| ≤ t)).
Since the value of the mean in (25) is bounded between zero and one, the variance of the signature is much smaller than its mean for large values of N. Therefore, a dense area will cover the sorted data with a high probability. This area becomes distinguishable from the area covered by the sorted noise-only signal as t grows and the nonzero coefficients become effective. This behavior is illustrated in Figure 5, which shows the area covered by the sorted noisy data coefficients of the Blocks signal when SNR=5. The figure also shows the behavior of the sorted noise-only data. As can be seen, with probability 0.999997 there is no overlap between the sorted noise and the sorted noisy data beyond a certain point.

Fig. 5: The area between the solid lines is the confidence region of sorted absolute values of the noisy data coefficients of Blocks signal (SNR=5) with probability 0.999997. The area between the dashed lines is the noise confidence region with probability 0.999997.

IV-A Noise Invalidation in Application

Using the noise sorting signature, it is possible to invalidate the noisy coefficients with high confidence. Figure 6 shows the application of the method. The confidence region for the noise-only data is available once the noise variance is known or estimated. As the sorted absolute noisy data leaves the noise confidence region, this indicates that the coefficients are becoming more effective than the noisy part of the data. The largest value for which the departure occurs is the optimum threshold (T_NIDe) for the noise invalidation problem: T_NIDe is the value of t at which the sorted absolute data curve crosses the upper bound U_p(t) of the noise confidence region.
Fig. 6: Solid line: the sorted absolute values of the observed data coefficients crossing the upper bound of the noise confidence region at T_NIDe, when the observed data is the noisy Blocks signal (SNR=5). The area between the dashed lines is the noise confidence region with probability 0.999997.
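The departure test sketched below puts the pieces together on synthetic coefficients (a sparse noise-free vector plus white Gaussian noise). It is a simplified reading of the rule, with all names illustrative and the true noise standard deviation assumed known rather than estimated by MAD:

```python
import numpy as np
from math import erf, sqrt

def nide_threshold(coeffs, sigma, lam=4.5):
    """Return the magnitude at which the sorted |coefficients| first leave
    the sorted-noise confidence region (simplified departure rule)."""
    N = len(coeffs)
    sorted_abs = np.sort(np.abs(coeffs))
    k_over_N = np.arange(1, N + 1) / N
    # noise cdf evaluated at each sorted magnitude
    F = np.array([erf(t / (sigma * sqrt(2.0))) for t in sorted_abs])
    band = lam * np.sqrt(k_over_N * (1.0 - k_over_N) / N)
    outside = F > k_over_N + band      # magnitude larger than noise explains
    if not outside.any():
        return sorted_abs[-1]          # nothing invalidated: all noise-like
    return sorted_abs[np.argmax(outside)]

rng = np.random.default_rng(2)
N, sigma = 2048, 1.0
theta = np.zeros(N)
theta[:64] = 10.0                              # sparse noise-free coefficients
c = theta + rng.normal(0.0, sigma, size=N)     # observed coefficients
T = nide_threshold(c, sigma)
denoised = np.sign(c) * np.maximum(np.abs(c) - T, 0.0)
print(T)
```

On this example the threshold lands near the top of the noise range, well below the large signal coefficients, so soft thresholding removes most noise-only coefficients while keeping every signal-bearing one.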

IV-B Colored Noise and Thresholding

Corollary: If the additive noise is colored, the expected values of the noise signature and of the sorted noise remain the same as the corresponding expected values for white noise in (III-A) and (27). For the variance of the sorted noise we have

Var(G_N(t)) = (1/N²) Σ_i Σ_j [ Pr(|v_i| ≤ t, |v_j| ≤ t) − F(t)² ],

with r(i, j) = E[v_i v_j] denoting the noise autocorrelation.

Proof: In Appendix D.

The variance for the sorted noisy data is also provided in Appendix D. As the variance indicates, the wider the autocorrelation of the noise process, the wider the signature region of the noise and noisy data; therefore, as expected, it may become more difficult to distinguish the data from the noise.
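This effect can be checked empirically. The sketch below (an assumption for illustration: AR(1) coloring with coefficient 0.9 and unit marginal variance, not a model used in the paper) compares the Monte Carlo spread of the sorted-|noise| curves for white versus correlated noise:

```python
import numpy as np

rng = np.random.default_rng(3)
N, runs = 1024, 200

def mean_spread(correlated):
    """Average (over indices) Monte Carlo std of the sorted |noise| curve,
    for white vs AR(1)-correlated Gaussian noise of equal variance."""
    curves = np.empty((runs, N))
    for r in range(runs):
        w = rng.normal(0.0, 1.0, size=N)
        if correlated:
            a = 0.9                      # AR(1) coefficient (illustrative)
            x = np.empty(N)
            x[0] = w[0]
            for i in range(1, N):
                x[i] = a * x[i - 1] + np.sqrt(1.0 - a * a) * w[i]
            w = x
        curves[r] = np.sort(np.abs(w))
    return curves.std(axis=0).mean()

spread_white = mean_spread(False)
spread_colored = mean_spread(True)
print(spread_white, spread_colored)  # colored spread is noticeably larger
```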

V Simulation Results

We apply our denoising method to noisy versions of six standard signals, Blocks, Bumps, HeavySine, Doppler, QuadChirp, and MishMash, which are the test signals introduced in [1]. A five-level decomposition with the Haar wavelet is chosen for this experiment. The confidence probability for the noise invalidation region is 0.999997. Figure 7 shows the six signals and their coefficients. As this figure confirms, the test signals represent a wide range of possible coefficient structures. For example, Figure 8 shows the coefficient distributions of some of these signals. While signals such as Blocks have very few large coefficients and many coefficients close to zero, signals such as MishMash have more uniformly distributed coefficients. Blocks and MishMash represent two extreme structures, while QuadChirp has a combined structure of both.

Fig. 7: From top to bottom: Blocks, Bumps, HeavySine, Doppler, QuadChirp, and MishMash. Left: the signals; right: their corresponding wavelet coefficients.

Fig. 8: Coefficient distributions for Blocks, Doppler, QuadChirp, and MishMash.

We compare the proposed method with VisuShrink and SureShrink, which are the more general-purpose thresholding approaches. Sure-LET and BayesShrink, on the other hand, are image denoising methods that also perform well on one-dimensional signals, so we include them in the comparison as well. We compare the performance of the methods based on their normalized reconstruction mean square error (MSE), which is


MSE = ||x̂ − x||² / ||x||²,

where x̂ in (9) is the resulting denoised data and x is the noise-free data. Table I provides the MSE of the compared methods. As the table shows, NIDe performs better than the other approaches in most of the cases.
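As a sketch (assuming the usual normalization by the noise-free signal energy), the criterion can be computed as:

```python
import numpy as np

def normalized_mse(x_hat, x):
    """Normalized reconstruction MSE: ||x_hat - x||^2 / ||x||^2."""
    return np.sum((x_hat - x) ** 2) / np.sum(x ** 2)

x = np.array([1.0, 2.0, 3.0])
print(normalized_mse(x, x))  # 0.0 for a perfect reconstruction
```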

The methods are compared over a range of SNRs for both white and colored noise. The autocorrelation of the considered colored noise is shown in Figure 9. The results, averaged over 100 runs, are provided in Table I. It is important to note that the MSEs for all the methods have small standard deviations, much smaller than the MSE itself. As the table shows, the three methods NIDe, BayesShrink, and Sure-LET are comparable in the presence of additive white noise. It is worth mentioning that NIDe outperforms the other two methods for the sparser signals, even over a wider range of SNR than the range shown in the table. For the non-sparse signals such as QuadChirp and MishMash, however, Sure-LET performs slightly better than the other two methods with additive white noise. For additive colored noise, NIDe consistently outperforms the other methods.

Fig. 9: Autocorrelation of the colored additive noise.

Figure 10 shows the denoised versions of Blocks with these methods for white and colored additive noise. As the figures show and Table I confirms, the denoised data obtained with NIDe and BayesShrink in the presence of white noise are comparable, while the other methods have larger MSE. In the presence of colored noise, however, NIDe outperforms the other methods.

White additive noise:

            VisuShrink  SureShrink  BayesShrink  Sure-LET   NIDe
Blocks
SNR=1         0.168       0.657       0.111       0.124     0.066
SNR=4         0.124       0.259       0.058       0.065     0.042
SNR=8         0.086       0.032       0.026       0.027     0.020
SNR=10        0.072       0.015       0.015       0.017     0.013
SNR=14        0.048       0.007       0.007       0.007     0.005
Bumps
SNR=1         0.155       0.427       0.098       0.122     0.091
SNR=4         0.127       0.065       0.073       0.063     0.070
SNR=8         0.096       0.026       0.023       0.027     0.025
SNR=10        0.083       0.023       0.019       0.018     0.017
SNR=14        0.062       0.017       0.012       0.009     0.009
HeavySine
SNR=1         0.137       0.670       0.029       0.115     0.028
SNR=4         0.096       0.268       0.017       0.057     0.017
SNR=8         0.063       0.032       0.010       0.023     0.009
SNR=10        0.050       0.007       0.016       0.015     0.007
SNR=14        0.035       0.004       0.003       0.006     0.003
Doppler
SNR=1         0.798       0.098       0.069       0.126     0.078
SNR=4         0.659       0.085       0.085       0.067     0.076
SNR=8         0.493       0.078       0.036       0.030     0.032
SNR=10        0.424       0.076       0.029       0.020     0.024
SNR=14        0.308       0.071       0.012       0.010     0.009
QuadChirp
SNR=1         0.931       0.782       0.466       0.447     0.637
SNR=4         0.903       0.771       0.284       0.276     0.357
SNR=8         0.864       0.757       0.129       0.131     0.151
SNR=10        0.841       0.753       0.086       0.086     0.096
SNR=14        0.780       0.750       0.038       0.037     0.035
MishMash
SNR=1         0.926       0.523       0.517       0.462     0.693
SNR=4         0.906       0.430       0.318       0.286     0.374
SNR=8         0.865       0.432       0.146       0.136     0.156
SNR=10        0.836       0.623       0.095       0.089     0.099
SNR=14        0.752       0.641       0.040       0.039     0.039

Colored additive noise:

            VisuShrink  SureShrink  BayesShrink  Sure-LET   NIDe
Blocks
SNR=1         0.406       0.695       0.562       6.212     0.385
SNR=4         0.212       0.334       0.286       3.107     0.210
SNR=8         0.098       0.124       0.121       1.214     0.092
SNR=10        0.063       0.074       0.079       0.753     0.060
SNR=14        0.030       0.035       0.037       0.295     0.024
Bumps
SNR=1         0.455       0.648       0.559       6.443     0.427
SNR=4         0.279       0.305       0.293       3.220     0.245
SNR=8         0.105       0.115       0.125       1.273     0.098
SNR=10        0.072       0.075       0.081       0.773     0.072
SNR=14        0.035       0.038       0.035       0.299     0.030
HeavySine
SNR=1         0.402       0.712       0.531       6.527     0.334
SNR=4         0.209       0.328       0.270       3.256     0.173
SNR=8         0.085       0.117       0.105       1.280     0.071
SNR=10        0.057       0.069       0.068       0.803     0.047
SNR=14        0.027       0.023       0.028       0.317     0.020
Doppler
SNR=1         0.446       0.399       0.569       6.431     0.401
SNR=4         0.330       0.236       0.293       3.201     0.235
SNR=8         0.231       0.138       0.127       1.207     0.110
SNR=10        0.187       0.113       0.082       0.76      0.075
SNR=14        0.133       0.089       0.037       0.038     0.030
QuadChirp
SNR=1         0.964       1.183       1.139       1.298     0.594
SNR=4         0.978       0.965       0.489       0.536     0.326
SNR=8         0.986       0.833       0.164       0.186     0.137
SNR=10        0.990       0.801       0.100       0.117     0.088
SNR=14        0.980       0.771       0.039       0.051     0.037
MishMash
SNR=1         0.974       1.186       1.205       1.246     0.607
SNR=4         0.956       0.889       0.523       0.523     0.336
SNR=8         0.961       0.727       0.174       0.180     0.142
SNR=10        0.963       0.699       0.105       0.109     0.091
SNR=14        0.963       0.675       0.042       0.041     0.031

TABLE I: Normalized reconstruction MSE for the thresholding methods, averaged over 100 runs. The first table is for white additive noise and the second for colored additive noise with the autocorrelation in Figure 9.
Fig. 10: Left: additive white noise with SNR=4; right: additive colored noise with SNR=14. (a) Noisy Blocks, (b) VISU, (c) SURE, (d) BayesShrink, (e) Sure-LET, (f) NIDe.

VI Conclusion

A denoising approach based on direct invalidation of the coefficients is proposed. The invalidation process uses a signature of the additive noise in the form of a probabilistic confidence region. The signature is defined from the statistical properties of the additive noise and is such that its standard deviation is much smaller than its mean. In this work we provided one example of such a signature, which amounts to simply sorting the coefficients by absolute value. It was shown that such a signature confines the noise to a dense area. The density of the area depends on the noise variance and the data length: the smaller the noise variance and/or the longer the data, the denser the signature area. This enables us to determine whether each coefficient is noise dominant or data dominant. The theory of the method shows its strength for any type of noise-free signal, and the variance of the signature decreases as the data length grows, providing a denser and more distinguishable area. The method denoises in the presence of not only additive white noise but also additive colored noise, which also enables the user to apply the proposed thresholding technique with any non-orthogonal basis. Simulation results confirmed the advantages of the proposed noise invalidation approach in terms of the reconstruction MSE and illustrated its robust performance. While the proposed NIDe approach is a pioneering method for general-purpose thresholding, there seems to be great potential in further analysis, study, and expansion of such invalidation methods for the purpose of denoising. For example, it is worth investigating other functions of different noise distributions that can serve as the noise signature.
In addition, in cases where the noise-free signal belongs to a particular class of signals, a potential extension would be to combine the statistical properties of the noise-free signal with the statistical structure of the additive noise in order to define noise signatures for the purpose of probabilistic invalidation.

Appendix A Mean and Variance of the Signature for IID noise

For the expected value of this signature from (10) we have

E[G_N(t)] = (1/N) Σ_{i=1}^{N} E[g(v_i, t)] = m(t).

For the variance of the signature, since v_i and v_j are independent for i ≠ j, we have

E[g(v_i, t) g(v_j, t)] = E[g(v_i, t)] E[g(v_j, t)].

Therefore, the cross terms in the desired variance are zero, and from (11) we have

Var(G_N(t)) = (1/N²) Σ_{i=1}^{N} Var(g(v_i, t)) = σ²(t)/N.
Appendix B Mean and Variance of the Sorting Signature for IID Noise

For the mean we have

E[G_N(t)] = (1/N) Σ_{i=1}^{N} E[g(v_i, t)] = (1/N) Σ_{i=1}^{N} Pr(|v_i| ≤ t) = F(t).

For the variance,

Var(G_N(t)) = E[G_N(t)²] − (E[G_N(t)])².

From (37) and (39),

E[G_N(t)²] = (1/N²) Σ_i Σ_j E[g(v_i, t) g(v_j, t)].

On the other hand, since g is an indicator function,

g(v_i, t)² = g(v_i, t),

and, by independence, for i ≠ j we have

E[g(v_i, t) g(v_j, t)] = E[g(v_i, t)] E[g(v_j, t)] = F(t)²,

which sets the cross terms in the variance of G_N(t) to zero. From (40) and (42) we have

Var(G_N(t)) = (1/N²)( N F(t) + N(N − 1) F(t)² ) − F(t)² = F(t)(1 − F(t))/N.
Appendix C Mean and Variance of the Data with Sorting Signature


The expected value of the sorting signature of an observed coefficient is E[g(c_i, t)] = Pr(|c_i| ≤ t); since c_i = θ_i + v_i, we have

E[G_N(t)] = (1/N) Σ_{i=1}^{N} Pr(|θ_i + v_i| ≤ t),

which results in the mean defined in (25). For the variance, due to the indicator structure of g, similar to what we had for the noise, we have

g(c_i, t)² = g(c_i, t),

E[G_N(t)²] = (1/N²) Σ_i Σ_j E[g(c_i, t) g(c_j, t)],

which results in the variance in (28).

Appendix D Mean and Variance of the Signature for Colored Noise

In this case the autocorrelation between the zero-mean Gaussian noise samples is denoted by r(i, j) = E[v_i v_j].

For the expected value of this function we have

E[G_N(t)] = (1/N) Σ_{i=1}^{N} Pr(|v_i| ≤ t) = F(t),
which is similar to that of the IID additive noise. However, for the variance, since the noise samples are no longer independent, the cross terms do not vanish and we have

Var(G_N(t)) = (1/N²) Σ_i Var(g(v_i, t)) + (1/N²) Σ_{i ≠ j} Cov(g(v_i, t), g(v_j, t)),

where elements of the second term are

Cov(g(v_i, t), g(v_j, t)) = Pr(|v_i| ≤ t, |v_j| ≤ t) − Pr(|v_i| ≤ t) Pr(|v_j| ≤ t),

where the second term is simply F(t)² and the first term is the joint probability. For the first term, the joint distribution of v_i and v_j is a Gaussian distribution with zero mean and covariance matrix

C = [ σ_w²  r(i, j) ; r(i, j)  σ_w² ],

with r(i, j) = E[v_i v_j]. The decomposition of the covariance matrix follows from its eigenvalues σ_w² + r(i, j) and σ_w² − r(i, j), whose eigenvectors form a 45-degree rotation. Therefore, by the following transformation,

z_1 = (v_i + v_j)/√2,   z_2 = (v_i − v_j)/√2,

the two random variables z_1 and z_2 are independent, with variances σ_w² + r(i, j) and σ_w² − r(i, j). As a result, the first term of the covariance, Pr(|v_i| ≤ t, |v_j| ≤ t), is the probability that (z_1, z_2) falls in the square region |v_i| ≤ t, |v_j| ≤ t, rotated by 45 degrees.
Figure 11 shows the area considered for the calculation of this probability.

Fig. 11: The desired area for calculation of the probabilities in (56) and (58).

D-A Noisy Data

By a similar argument, for the noisy data we have

E[G_N(t)] = (1/N) Σ_{i=1}^{N} Pr(|θ_i + v_i| ≤ t),

which is the mean defined in (25). For the variance of the sorted absolute values of the noisy data, (46) holds similarly. Therefore, the structure of the variance is similar to (50):

Var(G_N(t)) = (1/N²) Σ_i Σ_j Cov(g(c_i, t), g(c_j, t)).
For the covariance in the second term,

Cov(g(c_i, t), g(c_j, t)) = Pr(|c_i| ≤ t, |c_j| ≤ t) − Pr(|c_i| ≤ t) Pr(|c_j| ≤ t),

we use (59) to calculate the joint probability, with the Gaussian pair (v_i, v_j) now shifted by the means (θ_i, θ_j).

  • [1] D. L. Donoho and I. M. Johnstone, “Ideal spatial adaptation by wavelet shrinkage,” Biometrika, vol. 81, pp. 425-455, 1994.
  • [2] D. L. Donoho and I. M. Johnstone, “Adapting to unknown smoothness via wavelet shrinkage,” Journal of the American Statistical Assoc., vol. 90, pp. 1200-1224, 1995.
  • [3] T. Blu and F. Luisier, “The SURE-LET Approach to Image Denoising,” IEEE Transactions on Image Processing, vol. 16, no. 11, pp. 2778-2786, November 2007.
  • [4] X-P. Zhang, “Thresholding neural network for adaptive noise reduction,” IEEE Transactions on Neural Networks, vol. 12, no. 3, pp. 567-584, May 2001.
  • [5] S.G. Chang, B. Yu , and M. Vetterli, “Adaptive wavelet thresholding for image denoising and compression,” IEEE Transactions on Image Processing, vol. 9, no. 9, pp. 1532-1546, Sept. 2000.
  • [6] S. G. Chang, B. Yu, and M. Vetterli, “Spatially adaptive wavelet thresholding with context modeling for image denoising,” IEEE Transactions on Image Processing, vol. 9, no. 9, pp. 1522-1531, Sept. 2000.
  • [7] F. Abramovitch, T. Sapatinas, and B.W. Silverman, “Wavelet thresholding via a Bayesian approach,” J. Roy. Statist. Soc., ser. B, vol. 60, no. 4, pp. 725-749, 1998.
  • [8] A. Pizurica and W. Philips, “Estimating the probability of the presence of a signal of interest in multiresolution single- and multiband image denoising”, IEEE Transactions on Image Processing, vol. 15, no. 3, pp. 654-665 , March 2006.
  • [9] S. Beheshti, N. Nikvand, and X.N. Fernando, “Soft thresholding by noise invalidation,” Proceedings of the 24th Biennial Symposium on Communications, pp. 235-238, 2009.
  • [10] H. A. David and H. N. Nagaraja, Order Statistics, 3rd ed., Wiley, New Jersey, 2003.
  • [11] L. Wasserman, All of Nonparametric Statistics, Springer, New York, 1st Edition, 2005.
  • [12] Y. Blanco, S. Zazo, and J. C. Principe, “Alternative statistical Gaussianity measure using the cumulative density function”, Proceedings of the International Workshop on ICA and Blind Signal Separation, pp. 537-542, 2000.