Data driven sampling of oscillating signals


Abstract

The reduction of the number of samples is a key issue in signal processing for mobile applications. We investigate the link between the smoothness properties of a signal and the number of samples that can be obtained through a level crossing sampling procedure. The algorithm is analyzed and an upper bound on the number of samples is obtained in the worst case. The theoretical results are illustrated with applications to fractional Brownian motions and the Weierstrass function.
Key words and phrases: Level crossing sampling, oscillations, monoHölderian functions.
2000 AMS Mathematics Subject Classification — 26A16, 60G15, 60G18, 94A12.

1 Introduction

Autonomy, size and weight are very important issues in the design of mobile systems. One possibility to reduce the power consumption of signal processing systems is the reduction of the number of samples. Non uniform sampling is a way to have few samples for a large class of signals, especially sporadic signals, while still describing correctly the active parts of the signal. This leads to a smaller number of samples compared to Nyquist sampling [10, 15, 16, 19]. Specific system architectures, such as event-driven architectures, allow the implementation of this specific sampling. These architectures take samples each time some specific event occurs, e.g. specific voltage levels are crossed. Simple, low power, analog circuits can be designed to acquire information, possibly at high speed.

In this paper we want to relate the signal regularity to the number of non uniform samples obtained via a level crossing technique. Indeed, intuitively, the more a signal oscillates, the more often it is sampled. This is of course a local property: the number of samples in the neighborhood of some point may then be related to the local smoothness of the signal, or more precisely to its Hölder regularity. This relationship will be tested on signals whose smoothness properties are perfectly known at each point. This can be useful to predict the processing complexity of e.g. biological signals such as EEG signals or fMRI data, which are well-known to be both highly irregular and non stationary.

We introduce here an algorithm which is slightly different from the usual level crossing technique. In [1] the amplitudes are selected thanks to an $M$-bit asynchronous analog-to-digital converter (AADC), which corresponds to $2^M$ predefined levels in the voltage range. Level crossing sampling consists in taking a sample each time one of the predefined levels is crossed. Each amplitude has to be associated with a time. More precisely, we store the delay elapsed since the last sample was taken; the local clock that measures this delay is then reset to zero, ready to measure the next one. In Figure 1 we display the case where the captured time is that of the next clock tick. The samples are displayed with disks, and the values of the signal at the capture times with circles (Figure 1, left). This leads to a small number of samples, especially for sporadic signals. This procedure is refined by decimating the samples: only the last one is kept when a level has been crossed several times in a row (Figure 1, right).

Figure 1: Non uniform sampling with an AADC [1]. Left: the disks correspond to the samples and the circles to the values at the clock ticks. Right: the disks are the only samples that are kept after decimation.

Our goal here is not to study the approximation of the signal but the number of non uniform samples, given the regularity of the signal, the clock precision and the level quantum. We introduce another sampling algorithm which is slightly different from the AADC, but easier to analyze mathematically, and which yields essentially the same number of samples. In Section 2, we describe this sampling algorithm and rephrase it in mathematical terms. In Section 3 we define the functions that we will use in the numerical experiments of Section 4. These functions are chosen because we are able to control exactly their Hölder regularity.

2 Algorithm and mathematical interpretation

2.1 Step 1: Generation of an oversampled signal

Even in event-driven systems, where the signal is not sampled at each clock tick, there are clocks that measure time, and specifically the time elapsed since the last event. These clocks have a certain precision, and all measured times are multiples of some basis time $\delta t$. Up to some re-scaling of time we suppose that $\delta t = 2^{-n}$, for some $n\in\mathbb{N}$. We never have a complete knowledge of the original signal $f$, but only its samples $f(k\,2^{-n})$, for all $k$ (Figure 2).

Figure 2: Regular sampling of the input signal (Step 1).

Let $V_n$ be the space of continuous functions which are linear on the intervals $[k2^{-n},(k+1)2^{-n}]$, $k\in\mathbb{Z}$.

The Faber–Schauder hierarchical basis, defined in [7], yields a natural basis of this space. Let $\Lambda(t)=\max(0,1-|t|)$ be the hat function. The functions $\Lambda(2^{n}\cdot-k)$, for all $k\in\mathbb{Z}$, form the Faber–Schauder basis of $V_n$. We can uniquely define the linear interpolation $f_n\in V_n$ of $f$ at scale $2^{-n}$ by imposing, for all $k\in\mathbb{Z}$,

$$f_n(k2^{-n})=f(k2^{-n}). \qquad (1)$$

In the sequel we suppose that $f$ is compactly supported in $[0,1]$ and therefore $0\le k\le 2^n$ in Equation (1).
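Equivalently, $f_n$ can be written as a combination of translated hat functions, $f_n(t)=\sum_{k} f(k2^{-n})\,\Lambda(2^{n}t-k)$. As an illustration only (this is our own sketch, not part of the SPASS toolbox), this interpolation can be evaluated in Python as follows.

```python
import numpy as np

def hat(t):
    """Faber-Schauder hat function Lambda(t) = max(0, 1 - |t|)."""
    return np.maximum(0.0, 1.0 - np.abs(t))

def interpolate(samples, t, n):
    """Evaluate f_n(t) = sum_k f(k 2^-n) Lambda(2^n t - k) from the samples f(k 2^-n), k = 0..2^n."""
    t = np.asarray(t, dtype=float)
    return sum(s * hat(2.0**n * t - k) for k, s in enumerate(samples))
```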

2.2 Step 2: Level crossing

We consider that the levels are uniformly spaced by some quantum $q$. In applications where the range of the signal is $[0,1]$, choosing $q=2^{-M}$ means that each sample can then be stored with an $M$-bit register. The second step consists in approximating the samples by the nearest level below (Figure 3).

Figure 3: Reduction to predefined levels (Step 2). Samples from Step 1 are disks and the new samples are the circles.

We denote by $\lfloor x\rfloor$ the integer part of $x$, namely the largest integer not exceeding $x$. The function $\tilde f_n\in V_n$ which coincides with the new samples is uniquely defined by, for all $k$,
$$\tilde f_n(k2^{-n})=q\,\Big\lfloor \frac{f(k2^{-n})}{q}\Big\rfloor.$$
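In Python, Step 2 then amounts to a floor operation on the oversampled values; the following minimal sketch assumes, as above, that the signal range is $[0,1]$ and that $q=2^{-M}$ (the function name is ours).

```python
import numpy as np

def quantize_below(samples, M):
    """Step 2: replace each sample by the nearest predefined level below it (quantum q = 2^-M)."""
    q = 2.0 ** (-M)
    return q * np.floor(np.asarray(samples, dtype=float) / q)
```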

2.3 Step 3: Decimation

Next, we decimate the samples so as to keep only one sample when consecutive samples have the same amplitude. We choose to keep the last sample to be compatible with the causality principle (Figure 4).

Figure 4: Non uniform samples after decimation (Step 3).

We are interested in the number of samples remaining after the three steps. Comparing Figure 1 (right) and Figure 4, we notice that the numbers of final samples (the disks in both figures) are comparable. This is a generic situation. In fact the differences are mainly due to the extreme upper and lower levels.

From the mathematical point of view, decimation consists in keeping a subsequence $(k_j 2^{-n})_{j\ge 0}$ of the sampling times, defined by induction: $k_0$ is the last index of the first run of samples sharing the same amplitude and, given $k_j$, the index $k_{j+1}$ is the largest $k>k_j$ such that $\tilde f_n(k'2^{-n})=\tilde f_n((k_j+1)2^{-n})$ for all $k_j<k'\le k$.

We only store the couples $(\delta_j,a_j)$, where $\delta_j=(k_j-k_{j-1})\,2^{-n}$ is the delay since the last sample, and $a_j=\tilde f_n(k_j2^{-n})$ is the amplitude of the sample. Step 3 leads to a reduction of the number of samples, but does not introduce any approximation.
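A possible Python transcription of Step 3, producing the stored (delay, amplitude) couples, is sketched below; it keeps the last sample of each run of equal quantized amplitudes, the first delay being measured from $t=0$ (this is an illustrative version, not the SPASS implementation).

```python
def decimate(quantized, n):
    """Step 3: keep the last sample of each run of equal amplitudes and return
    the (delay, amplitude) couples, the delays being multiples of 2^-n."""
    dt = 2.0 ** (-n)
    couples, last_kept = [], 0
    for k in range(len(quantized)):
        last_of_run = (k == len(quantized) - 1) or (quantized[k + 1] != quantized[k])
        if last_of_run:
            couples.append(((k - last_kept) * dt, quantized[k]))
            last_kept = k
    return couples
```

For instance, `decimate(quantize_below(f_samples, M=4), n=10)` returns the non uniform samples kept after the three steps; the sum of the stored delays recovers the time of the last kept sample.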

3 Application to monoHölderian functions

Our goal is to relate the number of non uniform samples to the regularity of the signal. In particular, we address the Hölderian regularity.

3.1 MonoHölderian functions

Before introducing the Hölderian regularity, we first recall a few definitions based on local oscillations; they allow a weaker definition of pointwise smoothness. The final goal is to define strongly monoHölderian functions, a notion that formalizes the idea of a function whose regularity is, as uniformly as possible, equal to a given exponent $\alpha$.

Let $f$ be a locally bounded function and $x_0\in\mathbb{R}$; for all $r>0$ we define
$$\mathrm{Osc}_f(x_0,r)=\sup_{x\in B(x_0,r)}f(x)-\inf_{x\in B(x_0,r)}f(x),$$
where $B(x_0,r)$ is the ball of center $x_0$ and radius $r$; $\mathrm{Osc}_f(x_0,r)$ denotes, as usual, the oscillation of the function $f$ at $x_0$ on the ball $B(x_0,r)$.

Definition 1

Let $f$ be a locally bounded function, let $x_0\in\mathbb{R}$ and $\alpha>0$. The function $f$ is Hölderian of exponent $\alpha$ at $x_0$ ($f\in C^{\alpha}(x_0)$) if there exist $C>0$ and $r_0>0$ such that, for all $0<r\le r_0$,

$$\mathrm{Osc}_f(x_0,r)\le C\,r^{\alpha}. \qquad (2)$$

A function is uniformly Hölderian of exponent $\alpha$ ($f\in C^{\alpha}(\mathbb{R})$) if $C$ and $r_0$ in Equation (2) can be chosen uniformly in $x_0$.

The irregularity of a function can be studied through the notion of anti-Hölderianity.

Definition 2

Let $f$ be a locally bounded function, let $x_0\in\mathbb{R}$ and $\alpha>0$. The function $f$ is anti-Hölderian of exponent $\alpha$ at $x_0$ if there exist $c>0$ and $r_0>0$ such that, for all $0<r\le r_0$,

$$\mathrm{Osc}_f(x_0,r)\ge c\,r^{\alpha}. \qquad (3)$$

Let us notice that the statement (3) is stronger than just negating the Hölderian regularity (2). Indeed such a negation only yields the existence, for any $C>0$, of a sequence of radii $r_j\to 0$ (depending on $C$) for which
$$\mathrm{Osc}_f(x_0,r_j)\ge C\,r_j^{\alpha}.$$

Strongly monoHölderian functions naturally arise in the study of the regularity of mappings such as Weierstrass-type functions or random processes (see e.g. [9, 12]). Indeed, many results only hold for such mappings.

Definition 3

Let $0<\alpha<1$. A function $f$ is strongly monoHölderian of exponent $\alpha$ if it is both uniformly Hölderian and uniformly anti-Hölderian of exponent $\alpha$, i.e. if there exist $0<c\le C$ and $r_0>0$ such that, for any $x_0$ and any $0<r\le r_0$,

$$c\,r^{\alpha}\le\mathrm{Osc}_f(x_0,r)\le C\,r^{\alpha}. \qquad (4)$$

3.2 Approximation properties

As already mentioned, only Steps 1 and 2 lead to approximations. To state our approximation results, we need some preliminary definitions. In view of our application, we now restrict ourselves to functions defined on $[0,1]$. For any continuous function $f$ on $[0,1]$, we define its uniform regularity modulus by
$$\omega_f(r)=\sup_{\substack{x,y\in[0,1]\\ |x-y|\le r}}\big|f(x)-f(y)\big|.$$

The function $\omega_f$ is a modulus of continuity in the sense that $\omega_f(r)\to 0$ as $r\to 0$ and that there exists some constant $C$ such that $\omega_f(2r)\le C\,\omega_f(r)$ (see [13]).

In what follows we need the notion of strong modulus of continuity introduced in [3, 13]. A modulus of continuity $\theta$ is said to be strong if there exists $C>0$ such that for any positive integer $J$ one has
$$\sum_{j\ge J}\theta(2^{-j})\le C\,\theta(2^{-J}).$$

It is well-known [13] that if there exists some strong modulus of continuity $\theta$ such that
$$\omega_f(r)\le\theta(r)\quad\text{for all }r>0,$$
then
$$\|f-f_n\|_{\infty}\le C\,\theta(2^{-n}),$$
where $f_n$ is defined by Equation (1). In particular, if there exists some constant $C_0$ such that for any $r>0$,
$$\omega_f(r)\le C_0\,r^{\alpha}$$
with $0<\alpha<1$, then there exists a constant $C_1$ (which depends on $f$ but not on the scale $n$) such that
$$\|f-f_n\|_{\infty}\le C_1\,2^{-n\alpha}.$$

Assume now that in addition there exists some $c_0>0$ such that, for any $x_0\in[0,1]$ and any small enough $r$,
$$\mathrm{Osc}_f(x_0,r)\ge c_0\,\theta(r);$$
then, following [3, 5], there exist a constant $c_1>0$ and a scale $n_0$ (which depend on $f$ but not on the scale $n$) such that, for $n\ge n_0$,
$$\|f-f_n\|_{\infty}\ge c_1\,\theta(2^{-n}).$$

In particular, applying these results with the strong modulus of continuity $\theta(r)=r^{\alpha}$, we deduce that if the function $f$ is assumed to be strongly monoHölderian with exponent $\alpha$, there exist some constants $0<c\le C$ and a scale $n_0$ such that, for $n\ge n_0$,
$$c\,2^{-n\alpha}\le\|f-f_n\|_{\infty}\le C\,2^{-n\alpha}.$$

This yields estimates on the error due to Step 1. The approximation made at Step 2 clearly does not depend on the regularity of the function $f$, and we have
$$\|f_n-\tilde f_n\|_{\infty}\le q.$$

3.3 Theoretical number of samples in the case of a monotonous function

If $f$ is a monoHölderian function with exponent $\alpha$, by definition there exist constants $0<c\le C$ such that, for any small enough scale $2^{-n}$ and any $x_0\in[0,1]$,
$$c\,2^{-n\alpha}\le\mathrm{Osc}_f\big(x_0,2^{-n}\big)\le C\,2^{-n\alpha}.$$

If the function is additionally supposed to be monotonous, we further have exactly
$$\mathrm{Osc}_f\big(x_0,2^{-n}\big)=\big|f(x_0+2^{-n})-f(x_0-2^{-n})\big|,$$
so that the local oscillations sum up, over the dyadic intervals of scale $2^{-n}$, to the total variation of $f$.

Hence, for a non-decreasing $f$, the total number of level crossings over $[0,1]$ is at most
$$\frac{1}{q}\sum_{k=0}^{2^{n}-1}\Big(f\big((k+1)2^{-n}\big)-f\big(k2^{-n}\big)\Big)+1\le\frac{C\,2^{n(1-\alpha)}}{q}+1,$$
and each crossing produces at most one sample after decimation.

A monoHölderian signal thus crosses equi-spaced levels with quantum $q$ at most (up to a constant) $q^{-1}\,2^{n(1-\alpha)}$ times. The worst case is that of monotonous signals.
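As a rough numerical illustration of this bound (taking $C=1$ and $q=2^{-M}$ as above, with values used later in Section 4): for $n=10$ and $M=4$, a monotonous signal of exponent $\alpha=0.7$ crosses at most
$$q^{-1}\,2^{n(1-\alpha)}=2^{4}\cdot 2^{3}=128$$
levels, to be compared with the $2^{10}=1024$ clock ticks of Step 1; for $\alpha=0.3$ the same expression equals $2^{11}$, and the number of clock ticks is then the binding constraint.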

Besides, the initial sampling (Step 1) takes exactly $2^{n}+1$ samples. This is hence the first natural upper bound for the number of samples. Together with monoHölderianity we know that the number of samples is less than the minimum of these two bounds, i.e. than $\min\big(2^{n},\,C\,q^{-1}\,2^{n(1-\alpha)}\big)$ up to additive constants. For small values of $\alpha$ (or small values of $q$), we indeed keep almost all of the original samples. Otherwise we can expect some reduction of the number of samples. For $C=1$ and $q=2^{-M}$, the threshold between the two regimes is $\alpha=M/n$. Observe that the proof is based on the fact that, in the monotonous case, we can estimate in a very simple way the oscillations

of the function. Of course in the general case, the situation can be much more complicated. Nevertheless, generic results in the sense of prevalence as stated in [4] are expected to hold. In what follows, we illustrate through numerical simulations what happens in two cases.

4 Numerical simulations

4.1 Fractional Brownian motion and the Weierstrass function

We test level crossing on two toy models: sample paths of fractional Brownian motion and the Weierstrass function, both indexed by the Hurst index $H\in(0,1)$. The choice of these two cases is guided by the fact that their smoothness properties are directly related to the Hurst index.

The fractional Brownian motion (fBm) $B_H$ is the unique Gaussian $H$-self-similar process with stationary increments. It is defined from its covariance function
$$\mathbb{E}\big[B_H(s)\,B_H(t)\big]=\frac{1}{2}\Big(|s|^{2H}+|t|^{2H}-|t-s|^{2H}\Big)$$
for all $s,t\in\mathbb{R}$. The classical Brownian motion corresponds to $H=1/2$. The sample paths of fBm are well-known to be almost surely continuous. Further, its Hurst index is directly related to the roughness of its sample paths. More precisely, the classical law of the iterated logarithm ensures that, for any fixed $t_0$,
$$\limsup_{r\to 0^{+}}\frac{\big|B_H(t_0+r)-B_H(t_0)\big|}{r^{H}\sqrt{2\log\log(1/r)}}=1\qquad\text{a.s.}$$
Roughly speaking, a.s. for all $t_0$ and all small $r$,
$$\mathrm{Osc}_{B_H}(t_0,r)\approx r^{H}$$
up to logarithmic corrections, so that the sample paths behave like monoHölderian functions of exponent $H$.
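The numerical experiments of Section 4 rely on the genFBMJFC.m routine of [6]. Purely as an illustration (and much slower for large $n$), a discretized fBm path can also be synthesized from the Cholesky factor of the covariance matrix above; the Python function below is our own sketch.

```python
import numpy as np

def fbm_path(H, n, seed=None):
    """Sample B_H(k 2^-n), k = 0..2^n, from the covariance
    E[B_H(s) B_H(t)] = (|s|^2H + |t|^2H - |t-s|^2H) / 2 (exact, but O(2^{3n}) cost)."""
    rng = np.random.default_rng(seed)
    t = np.arange(1, 2**n + 1) / 2.0**n              # strictly positive times; B_H(0) = 0
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (np.abs(s)**(2 * H) + np.abs(u)**(2 * H) - np.abs(s - u)**(2 * H))
    path = np.linalg.cholesky(cov) @ rng.standard_normal(len(t))
    return np.concatenate(([0.0], path))
```

For instance, `fbm_path(0.7, 10)` produces $2^{10}+1$ regularly spaced values that could feed Step 1.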

Figure 5 presents three realizations of sample paths of fractional Brownian motion, for three values of the Hurst index $H$.

Figure 5: Three realizations of fractional Brownian motion, for three values of the Hurst index $H$ (from left to right).

The Weierstrass function $W_H$ is a classical example of a monoHölderian function with exponent $H$, as proved in [11]. It is defined as
$$W_H(t)=\sum_{j\ge 0}\lambda^{-jH}\sin\big(\lambda^{j}t\big),\qquad \lambda>1,$$
and, for all $H\in(0,1)$, there exist $0<c\le C$ such that, for all $t$ and all small enough $r$,
$$c\,r^{H}\le\mathrm{Osc}_{W_H}(t,r)\le C\,r^{H}.$$
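The series can be evaluated numerically by truncation; a short Python sketch is given below (the truncation level and the choice $\lambda=2$ are our own illustrative parameters).

```python
import numpy as np

def weierstrass(t, H, lam=2.0, n_terms=30):
    """Truncated Weierstrass-type sum W_H(t) = sum_{j >= 0} lam^(-j*H) * sin(lam^j * t)."""
    t = np.asarray(t, dtype=float)
    return sum(lam**(-j * H) * np.sin(lam**j * t) for j in range(n_terms))
```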

Figure 6 presents the graphs of the Weierstrass function for three values of the Hurst index $H$.

Figure 6: The Weierstrass function for three values of the Hurst index $H$ (from left to right).

4.2 Tests

The tests are performed within the SPASS Matlab toolbox [2] (Signal Processing for ASynchronous Systems toolbox). It was originally designed to process non uniform signals produced by asynchronous systems, but it can be used for a large variety of signals. To generate fractional Brownian motions, we make use of the genFBMJFC.m function [6].

We use two values of $n$ (10 and 13) and two values of $M$ (4 and 5), i.e. quanta $q=2^{-4}$ and $q=2^{-5}$. These small values of $M$ are sufficient for most mobile applications. Our output is the number of samples after decimation (Step 3). For the fractional Brownian motion, we perform 1000 realizations and average the number of samples obtained for each realization to obtain an average number of samples. We perform the same tests on the Weierstrass function (a deterministic function, hence only one realization).

We perform this for values of the Hurst number $H$ in the range $(0,1)$ and obtain the plots (in semi-log scale, with log-basis 2) in Figures 7 and 8 for $n=10$ and $n=13$ respectively. We also plot the number of samples computed in the worst case (monotonous function, i.e. maximum total variation) for $C=1$, namely $\min\big(2^{n},\,q^{-1}\,2^{n(1-H)}\big)$.
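For the deterministic Weierstrass case, the counting experiment can be sketched in a few lines of Python (this is a self-contained illustration with our own parameter choices and a normalization of the range to $[0,1]$, not a reproduction of the SPASS scripts):

```python
import numpy as np

def count_kept_samples(values, M):
    """Number of non uniform samples after quantization (quantum 2^-M) and
    decimation: one sample per run of equal quantized values."""
    levels = np.floor(np.asarray(values) * 2.0**M)
    return 1 + int(np.count_nonzero(np.diff(levels)))

n, M, H = 10, 4, 0.7
t = np.arange(2**n + 1) / 2.0**n
w = sum(2.0**(-j * H) * np.sin(2.0**j * t) for j in range(30))   # truncated Weierstrass-type sum
w = (w - w.min()) / (w.max() - w.min())                          # normalize the range to [0, 1]
print(count_kept_samples(w, M), "samples kept out of", 2**n + 1)
```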

Figure 7: Number of samples in terms of the Hurst number (log scale, base 2) for $n=10$ and the two values of the quantum $q$ (left and right). Solid lines correspond to the averaged number for the fractional Brownian motion, dashed lines to the Weierstrass function, and dotted lines to the worst case bound.
Figure 8: Number of samples in terms of the Hurst number (log scale, base 2) for $n=13$ and the two values of the quantum $q$ (left and right). Same plotting conventions as Figure 7.

We distinguish two regimes: below some value of the Hurst number, the algorithm more or less keeps all the original samples; above this value, the decimation is efficient and yields a significant reduction of the number of samples. For the different curves these "critical" values of $H$ are given in Table 1.

0.4 0.5
Table 1: ”Critical” values of the Hurst number.

The plots associated with fBm are much more regular than those associated with the Weierstrass function because they are obtained by an averaging procedure. Besides, for the Weierstrass function, the constant $C$ involved in Equation (4) is a priori not equal to $1$ and indeed depends on $H$. This can be explained using the range of the fractional derivative of order $H$ of the Weierstrass function, which is not reduced to a constant and depends on $H$ (see [14, 18, 21] for more details).

5 Conclusion

We have predicted for monoHölderian functions, and shown numerically, that there is a strong relationship between the smoothness properties of a signal and the number of samples obtained by the level crossing algorithm presented in this paper. This is rigorously proved in the case of monotonous monoHölderian functions. The next step, which will be the purpose of a forthcoming paper, will be to consider signals whose regularity may change from point to point, such as multifractional or multifractal signals.

ACKNOWLEDGEMENT

LJK is a partner of the LabEx PERSYVAL-Lab (ANR-11-LABX-0025-01) funded by the French program Investissement d'avenir. This work has been partially supported by the MathSTIC project OASIS of Grenoble University. The authors wish to thank Jean-François Coeurjolly for fruitful discussions.

References

  1. E. Allier, G. Sicard, L. Fesquet, and M. Renaudin, Asynchronous level crossing analog to digital converters, Measurement, 37(4), 296–309, 2005. Special Issue on ADC Modelling and Testing, edited by P. Carbone and P. Daponte.
  2. B. Bidégaray-Fesquet and L. Fesquet, Signal Processing for ASynchronous Systems toolbox, Matlab toolbox, 2011.
  3. M. Clausel, Quelques notions d'irrégularité uniforme et ponctuelle. Le point de vue ondelettes, PhD Thesis, Université Paris–Est Créteil, 2007.
  4. M. Clausel and S. Nicolay, Some prevalent results about strongly monoHölder functions, Nonlinearity, 23(9), 2101–2116, 2010.
  5. M. Clausel and S. Nicolay, A wavelet characterization for the upper global Hölder index, Journal of Fourier Analysis and Applications, 18(4), 750–769, 2012.
  6. J.-F. Coeurjolly, Simulation and identification of the fractional Brownian motion: a bibliographical and comparative study, Journal of Statistical Software, 5(7), 1–53, 2000.
  7. A. Cohen, Wavelet Methods in Numerical Analysis, Handbook of Numerical Analysis, Vol. VII, Elsevier, 2000.
  8. R.A. DeVore, B. Jawerth, and B.J. Lucier, Image Compression Through Wavelet Transform Coding, IEEE Transactions on Information Theory, 38(2), 719–746, 1992.
  9. D. Geman and J. Horowitz, Occupation densities, Annals of Probability, 8, 1–67, 1980.
  10. K. Guan and A.C. Singer, Opportunistic sampling by level-crossing, In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2007), pages 1513–1516, Honolulu, Hawai’i, USA, April 2007. IEEE, 2007.
  11. G.H. Hardy, Weierstrass’s non differentiable function, Transactions of the American Mathematical Society, 17, 301–325, 1916.
  12. Y. Heurteaux, Weierstrass function with random phases, Transactions of the American Mathematical Society, 355(8), 3065–3077, 2003.
  13. S. Jaffard and Y. Meyer, Wavelets Methods for Pointwise Regularity and Local Oscillations of Functions, Memoirs of the American Mathematical Society, 123(587), 1996.
  14. Y. Kui, S. Weiyi, and Z. Songping, On the fractional calculus of fractional functions, Applied Mathematics - A Journal of Chinese Universities Series B, 17(4), 377–381, 2002.
  15. J.W. Mark and T.D. Todd, A nonuniform sampling approach to data compression, IEEE Transactions on Communications, 29(1), 24–32, 1981.
  16. F.A. Marvasti, Nonuniform Sampling. Theory and Practice, Information Technology: Transmission, Processing and Storage. Springer, 2001.
  17. Y. Meyer, Ondelettes et opérateurs, Hermann, 1990.
  18. S. G. Samko, A. A. Kilbas, and O. I. Marichev, Fractional integrals and derivatives. Theory and applications, Gordon and Breach, 1993.
  19. N. Sayiner, H.V. Sorensen, and T.R. Viswanathan, A level-crossing sampling scheme for A/D conversion, IEEE Transactions on Circuits and Systems II, 43(4), 335–339, 1996.
  20. C. Tricot, Courbes et dimension fractale, Springer, 1992.
  21. M. Zähle and H. Ziezold, Fractional Derivatives of Weierstrass-Type Functions, Journal of Computational and Applied Mathematics, 76, 262–275, 1996.