Discrete Scaling Based on Operator Theory
Signal scaling is a fundamental operation of practical importance in which a signal is enlarged or shrunk in the coordinate direction(s). Scaling or magnification is not trivial for signals of a discrete variable since the signal values may not fall onto the discrete coordinate points. One approach is to consider the discretely-spaced values as the samples of a signal of a real variable, find that signal by interpolation, scale it, and then re-sample. However, this approach comes with complications of interpretation. We review a previously proposed alternative and more elegant approach, and then propose a new approach based on hyperdifferential operator theory that we find most satisfactory in terms of obtaining a self-consistent, pure, and elegant definition of discrete scaling that is fully consistent with the theory of the discrete Fourier transform.
Signal scaling is a fundamental operation in which the independent variable of the function is scaled by a real number $M$, resulting in the signal being compressed or expanded along the axis in the form $f(t/M)$. With reference to images, the terms magnification/demagnification or zooming in/out are more commonly used. Scaling or magnification is not trivial for signals of a discrete (integer) variable since the signal values may not fall onto the discrete coordinate points. Given a function $f(n)$ defined on the integers, the value of $f(n/M)$ will be undefined unless $n/M$ is an integer. Nevertheless, discrete scaling is a necessary operation in practice since we often want to scale signals of continuous variables which are represented as functions of discrete variables in digital computers.
A straightforward approach requiring knowledge of no more than elementary signals and systems [8, 6, 7] is to consider the values of $f(n)$ as the Nyquist-rate samples of a hypothetical bandlimited signal $f_c(t)$. Then, we can use standard sinc interpolation to write an expression for $f_c(t)$ in terms of $f(n)$. Now, $f_c(t)$ can be scaled to $f_c(t/M)$ and then re-sampled to obtain the values of a new signal of a discrete variable, which can be considered the scaled version of $f(n)$. The values of the new scaled signal will be linearly related to the values of the original signal $f(n)$. Of course, scaling $f_c(t)$ to $f_c(t/M)$ will change its bandwidth, which introduces complications in choosing the re-sampling rate. In any case, if the re-sampling rate is different, this will somewhat complicate interpretation of the scaled signal. If the integer domain is not defined from $-\infty$ to $\infty$, but rather over a finite interval, say from $0$ to $N-1$, and we are working in a circulant domain, it is possible to modify the approach by employing Dirichlet functions instead of sinc functions as the interpolation functions.
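To make the interpolation approach concrete, here is a minimal sketch (the function name, the centered integer indexing, and the unitary $|M|^{-1/2}$ normalization are our own illustrative choices; truncation of the sinc series to the available $N$ samples and the re-sampling-rate question discussed above are glossed over):

```python
import numpy as np

def scale_by_sinc(f, M):
    """Treat f[n] as Nyquist-rate samples of a bandlimited f_c(t),
    reconstruct f_c by sinc interpolation, and re-sample f_c(t/M).
    Centered integer indices; series truncated to the N given samples."""
    N = len(f)
    n = np.arange(N) - N // 2                 # original sample positions
    t = n / M                                 # re-sampling points for f_c(t/M)
    # f_c(t) = sum_n f[n] * sinc(t - n);  np.sinc(x) = sin(pi x)/(pi x)
    return np.array([f @ np.sinc(tk - n) for tk in t]) / np.sqrt(abs(M))
```

For $M = 1$ the sinc kernel reduces to a Kronecker delta and the signal is returned unchanged; for other $M$ each output sample is a linear combination of all input samples, illustrating the linear relationship mentioned above.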
The only more creative approach to discrete scaling we are aware of is due to Pei et al., who developed a method based on “Centered Discrete Dilated Hermite Functions” (CDDHFs), which is an improvement of their earlier matrix-based method. The CDDHF-based discrete scaling method works as follows: First, write the signal as a linear superposition of discrete Hermite-Gaussian functions. Then, replace the discrete Hermite-Gaussian functions with their dilated (scaled) versions to obtain the scaled discrete signal. In other words, the expansion coefficients are kept the same while scaling the discrete functions that form the expansion basis. Although this sounds conceptually simple, the difficulty (and ingenuity) lies in the development of the set of dilated discrete Hermite-Gaussian functions [12, 4], on which the method rests. This procedure provides a mathematically sound and elegant way of performing discrete signal scaling.
In this work, we present a different approach by utilizing hyperdifferential operator theory [9, 14, 10, 15, 5] to obtain a discrete scaling matrix. The scaled version of the signal is obtained by multiplying the unscaled version by this matrix. We choose to work in a framework that is not only discrete, but also finite. That is, the functions are defined over finite intervals. Our approach employs the basic operations of differentiation and coordinate multiplication. We believe that it provides a self-consistent, pure, and elegant definition of discrete scaling which is also fully compatible with the theory of the discrete Fourier transform and its circulant structure. We also believe that the presented approach of defining a discrete operation in the context of hyperdifferential operator theory can set an example that can be applied to other problems in signal theory and analysis.
For simplicity we work with one-dimensional signals, although our results can easily be generalized to higher-dimensional signals. Scaling is defined as that operation which takes $f(u)$ to $|M|^{-1/2} f(u/M)$. The factor $|M|^{-1/2}$ is included to make the operation unitary, but this will not be of much importance. The real parameter $M$ can be called the scaling or zooming factor or the magnification, depending on context. The signal will be compressed/demagnified or decompressed/magnified depending on whether $M$ is less than or greater than unity. In operator form we will write
$$\mathcal{M}_M f(u) = |M|^{-1/2} f(u/M),$$
where the calligraphic operator $\mathcal{M}_M$ on the left-hand side exhibits the parameter $M$. Our convention for the Fourier transform operator $\mathcal{F}$ will be
$$\mathcal{F} f(u) = \int_{-\infty}^{\infty} e^{-i 2\pi u u'}\, f(u')\, du'.$$
We define two further operators, the coordinate multiplication operator $\mathcal{U}$ and the differentiation operator $\mathcal{D}$:
$$\mathcal{U} f(u) = u\, f(u), \qquad \mathcal{D} f(u) = \frac{1}{i2\pi}\, \frac{d f(u)}{du},$$
where the $1/i2\pi$ factor is included so that $\mathcal{U}$ and $\mathcal{D}$ are precisely Fourier duals (the effect of either in one domain is its dual in the other domain). This duality can be expressed as follows:
$$\mathcal{U} = \mathcal{F}\, \mathcal{D}\, \mathcal{F}^{-1}.$$
Basically, the above equation says that, instead of multiplying a function with the coordinate variable $u$, we can instead take its inverse Fourier transform, differentiate it with respect to the frequency variable, divide by $i2\pi$, and take its Fourier transform, and we will get the same result.
In this paper we deal with finite-length signals of a discrete (integer) variable. (We could equivalently think of them as being defined on a circulant domain, which would not make a difference to our arguments.) The length of our signal vectors will be denoted by $N$. When $N$ is even, they will be defined on the interval of integers $[-N/2,\, N/2-1]$, and when $N$ is odd, they will be defined on the interval of integers $[-(N-1)/2,\, (N-1)/2]$. We will also consider a less-common approach based on the device of using “half integers.” In this approach, the domain is defined as the interval of unit-spaced half integers $[-(N-1)/2,\, (N-1)/2]$ for even $N$ and $[-(N-2)/2,\, N/2]$ for odd $N$. Although not very usual, there is nothing unnatural about this way of indexing signals of a discrete variable; it is merely a particular way of bookkeeping. Note that the indices are still spaced by unity, and there is merely a shift by $1/2$ with the purpose of making the interval symmetrical around the origin when $N$ is even (with the consequence that symmetry is lost when $N$ is odd). A few examples of works considering this way of indexing are [2, 1, 3, 13]. Consistent with this literature, we will refer to the former approach as the ordinary DFT and to the latter one, in which we use “half integers,” as the centered DFT.
It is possible to better understand the choices of indexing by considering them in the context of sampling a signal of a continuous (real) variable. The sample values of a function are usually written as $f(nT)$, where $n$ is the index and $T$ is the sampling interval. When we use full integer values of the index, which is the usual case, we get a set of samples that includes a sample at the origin, for $n = 0$. For instance, for $T = 0.25$, we would be sampling at $\ldots, -0.5, -0.25, 0, 0.25, 0.5, \ldots$. However, we may also choose to sample in a manner that does not include the origin; for instance, we may choose our samples as $f((n + 1/2)T)$, where $n$ are still full integers, in which case we would be sampling at $\ldots, -0.375, -0.125, 0.125, 0.375, \ldots$. The use of half integers is an alternative way of bookkeeping where we maintain the samples to be at $f(nT)$ rather than $f((n+1/2)T)$, but still get the same samples by allowing $n$ to take half integer values. While both approaches are equivalent, we find the use of half integers (centered) to be more elegant and unifying.
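The two bookkeeping schemes can be made concrete in a few lines of code (a sketch; the helper name `indices` is ours):

```python
import numpy as np

def indices(N, centered=False):
    """Index sets of Section 2: ordinary integer indexing, and the
    centered scheme obtained by shifting every index by one half."""
    n = np.arange(N) - N // 2            # ordinary: integers
    return n + 0.5 if centered else n    # centered: half-integer shift

T = 0.25                                 # sampling interval
print(indices(4) * T)                    # samples at -0.5, -0.25, 0.0, 0.25
print(indices(4, centered=True) * T)     # samples at -0.375, -0.125, 0.125, 0.375
```

Note that for odd $N$ the ordinary set is symmetric about the origin while the centered one is not, in agreement with the text.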
The sampled signals can be represented by column vectors with $N$ rows. The labelling of the rows will follow the same index convention as above. In the case of half integers, we may refer to the “$-1.5$th row” of the vector, and so forth. The operators acting on them can be represented as matrices that have $N$ columns and $N$ rows. The matrix representing the Fourier transformation will be the unitary discrete Fourier transform (DFT) matrix $\mathbf{F}$, with appropriate shifting/circulation of its rows and columns such that it is consistent with the index ranges we use. The elements of this $N$-point unitary DFT matrix can be written in terms of the row and column indices $m$ and $n$ as follows:
$$F_{mn} = \frac{1}{\sqrt{N}}\, e^{-i 2\pi m n / N}.$$
Note that this expression covers both the ordinary and centered case provided we remember that (i) for the ordinary case $m$ and $n$ run through the integers from $-N/2$ to $N/2-1$ for even $N$ and from $-(N-1)/2$ to $(N-1)/2$ for odd $N$; (ii) for the centered case $m$ and $n$ run through the unit-spaced half integers from $-(N-1)/2$ to $(N-1)/2$ for even $N$ and from $-(N-2)/2$ to $N/2$ for odd $N$. The ability to write what would otherwise be two separate expressions in the familiar form above is the main advantage of the half-integer indexing scheme we employ.
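As a check of this single-expression claim, the following sketch builds the DFT matrix from either index set and verifies unitarity in all four cases (helper names are ours):

```python
import numpy as np

def dft_matrix(N, centered=False):
    """Unitary DFT matrix F_mn = exp(-i 2 pi m n / N) / sqrt(N), with m, n
    drawn from the ordinary integer or centered half-integer index set."""
    n = np.arange(N) - N // 2 + (0.5 if centered else 0.0)
    return np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

for N in (7, 8):
    for centered in (False, True):
        F = dft_matrix(N, centered)
        assert np.allclose(F @ F.conj().T, np.eye(N))  # unitary in every case
```

Unitarity survives the half-integer shift because the difference of any two indices drawn from the same set is still an integer.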
It could be asked whether it would not be simpler to work with the traditional interval $[0, N-1]$. This would essentially give the same results, only in shifted/circulated form. We choose to work with symmetric intervals to maintain and reveal as much symmetry in the problem as there actually is.
We work with dimensionless coordinates; that is, the unit of $u$ is not seconds or meters, it is unitless. Say the function $\hat f(t)$ of a continuous variable $t$ in seconds or meters has an approximate extent lying over the interval $[-\Delta t/2,\, \Delta t/2]$, meaning most of its energy is contained in this interval. Likewise, say its extent in the frequency domain lies over the interval $[-\Delta\nu/2,\, \Delta\nu/2]$, where $\nu$ is the frequency variable in Hz or inverse meters. Then we can introduce a parameter $s$ with the same units as $t$, such that $u = t/s$ is a dimensionless number, and choose to work with the function $f(u) = \hat f(su)$ instead of $\hat f(t)$. If we choose $s = \sqrt{\Delta t/\Delta\nu}$, then the extent of both $f(u)$ and its Fourier transform will lie in the interval $[-\sqrt{\Delta t\,\Delta\nu}/2,\, \sqrt{\Delta t\,\Delta\nu}/2]$. According to the sampling theorem, if a signal is contained within such an interval, it can be sampled with a sampling interval of $1/\sqrt{\Delta t\,\Delta\nu}$. Thus there will be $N = \Delta t\,\Delta\nu$ samples in all. The quantity $\Delta t\,\Delta\nu$ is often referred to as the time-bandwidth or space-bandwidth product. Re-expressing in terms of the number of samples $N$, we would be sampling over the interval $[-\sqrt{N}/2,\, \sqrt{N}/2]$ with a sampling interval of $1/\sqrt{N}$ for a total of $N$ samples. Should $N$ not have an even whole square root, we can always choose $\Delta t$ and $\Delta\nu$ a little larger than necessary to make it so, although this will not be important for our discussion.
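A quick numerical instance of this normalization, with hypothetical extents chosen so that the time-bandwidth product is a perfect square:

```python
import numpy as np

dt, dnu = 8.0, 2.0        # hypothetical extents: 8 s in time, 2 Hz in frequency
s = np.sqrt(dt / dnu)     # scale parameter making u = t/s dimensionless
N = dt * dnu              # time-bandwidth product = number of samples
h = 1 / np.sqrt(N)        # dimensionless sampling interval
print(s, N, h)            # 2.0 16.0 0.25
```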
To put the whole sampling issue together, let us consider the example of $N = 16$. The interval over which the signal will be sampled will be $[-2, 2]$, which is divided into 16 sampling intervals each of length $1/4$. The real issue now is whether the samples will be taken on the left (or right) edge of each sampling interval, or in the center (or yet somewhere else) of each sampling interval. Taking them at the left edge is the familiar case; the sample points will be $-2, -1.75, \ldots, 1.75$. If we take them at the middle, they will be $-1.875, -1.625, \ldots, 1.875$. There are two ways to bookkeep the latter case. We can continue to work with an integer index $n$, and then the sample points will be $(n + 1/2)/4$. Alternatively, we can maintain that the sample points are still at $n/4$, but use half integer values of $n$. We find greater clarity and unity in emphasizing the sampling intervals over the sampling points, and working with half integer index values.
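The grids of this example, and the equivalence of the two mid-interval bookkeeping schemes, can be checked directly:

```python
import numpy as np

N, T = 16, 0.25                      # interval [-2, 2], 16 subintervals
n_int = np.arange(N) - N // 2        # integer index n
n_half = n_int + 0.5                 # the same indices, shifted to half integers

left_edge = n_int * T                # left-edge sampling: -2.0, -1.75, ..., 1.75
mid_a = (n_int + 0.5) * T            # bookkeeping 1: points (n + 1/2) T, integer n
mid_b = n_half * T                   # bookkeeping 2: points n T, half-integer n

assert np.allclose(mid_a, mid_b)     # same physical sample points
print(left_edge[0], left_edge[-1])   # -2.0 1.75
print(mid_a[0], mid_a[-1])           # -1.875 1.875
```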
3 Pei’s Method
Pei et al. consider a finite signal $x$ of length $N$, to be scaled. They let $x_M$ denote the scaled signal, with $M$ being the scaling factor. The signal can always be expressed as a linear combination of any $N$ linearly independent signals. In their method, special functions called “Centered Discrete Dilated Hermite Functions” (CDDHFs) are utilized as the set of linearly independent signals, so any $x$ can be expressed as a linear combination of CDDHFs:
$$x = \sum_{k=0}^{N-1} a_k\, h_{k,1},$$
where the $h_{k,1}$ are the CDDHFs of length $N$, and the $a_k$ are the coefficients. The coefficients are simply the inner products of the $h_{k,1}$ with the signal $x$: that is, $a_k = h_{k,1}^{T} x$. The CDDHFs $h_{k,1}$ are the centered discrete Hermite functions with no scaling; hence the subscript $1$ in the notation. $h_{k,M}$ denotes the $k$th CDDHF with a scaling factor of $M$.
Their proposed way to scale the signal is to scale the basis signals $h_{k,1}$, and keep the expansion coefficients the same. More explicitly, the scaled signal $x_M$ is obtained as follows:
$$x_M = \sum_{k=0}^{N-1} a_k\, h_{k,M},$$
where the $h_{k,M}$ are scaled versions of the $h_{k,1}$ with a scaling factor of $M$. The critical task, of course, is to find the scaled versions of the basis signals. Pei et al. develop a method for constructing the CDDHFs $h_{k,M}$, which we now summarize.
The Hermite-Gaussian functions of a continuous variable, denoted by $\psi_k(u)$, are given as follows, for $k = 0, 1, 2, \ldots$:
$$\psi_k(u) = \frac{2^{1/4}}{\sqrt{2^k k!}}\, H_k(\sqrt{2\pi}\, u)\, e^{-\pi u^2},$$
where $H_k$ denotes the $k$th Hermite polynomial. It is well-known that the Hermite-Gaussian functions $\psi_k(u)$ satisfy the differential equation
$$\frac{d^2 \psi_k(u)}{du^2} + \left(2\pi(2k+1) - 4\pi^2 u^2\right)\psi_k(u) = 0.$$
The time-scaled version of the Hermite-Gaussian function is $\psi_{k,M}(u) = |M|^{-1/2}\, \psi_k(u/M)$. It is possible to find a differential equation for $\psi_{k,M}(u)$ by simply replacing every $u$ by $u/M$ in Eq. (9):
$$M^2\, \frac{d^2 \psi_{k,M}(u)}{du^2} + \left(2\pi(2k+1) - \frac{4\pi^2 u^2}{M^2}\right)\psi_{k,M}(u) = 0.$$
Eq. (10) can be rewritten in terms of the coordinate multiplication operator $\mathcal{U}$ and the differentiation operator $\mathcal{D}$, noting that $\mathcal{D}^2 = -\frac{1}{4\pi^2}\frac{d^2}{du^2}$ and that $\mathcal{U}^2$ corresponds to multiplication by $u^2$:
$$-4\pi^2 M^2\, \mathcal{D}^2 \psi_{k,M} + 2\pi(2k+1)\, \psi_{k,M} - \frac{4\pi^2}{M^2}\, \mathcal{U}^2 \psi_{k,M} = 0.$$
Rearranging the terms, we get:
$$\left(M^2 \mathcal{D}^2 + \frac{1}{M^2}\, \mathcal{U}^2\right)\psi_{k,M} = \frac{2k+1}{2\pi}\, \psi_{k,M}.$$
Functions satisfying Eq. (12) are the eigenfunctions of $M^2 \mathcal{D}^2 + M^{-2}\, \mathcal{U}^2$.
The next step is to find the discrete counterpart of Eq. (12). This is done by replacing the abstract operators (denoted by calligraphic letters) by boldface matrix operators that act on column vectors, in the form $M^2 \mathbf{D}^2 + M^{-2}\, \mathbf{U}^2$. Then, it is possible to compute the CDDHFs as the eigenvectors of this matrix. Here $\mathbf{U}^2$ and $\mathbf{D}^2$ are $N \times N$ matrices that are the finite discrete manifestations of the abstract operators $\mathcal{U}^2$ and $\mathcal{D}^2$. So the remaining task before implementing the method is to determine what $\mathbf{U}^2$ and $\mathbf{D}^2$ should be.
Pei et al. define the $\mathbf{U}^2$ matrix as a diagonal matrix of squared centered indices:
$$[\mathbf{U}^2]_{mn} = \delta_{mn}\, \frac{2\pi}{N} \left(n - \frac{N-1}{2}\right)^2,$$
where $[\mathbf{U}^2]_{mn}$ is the $m$th row, $n$th column entry of $\mathbf{U}^2$, and $\delta_{mn}$ is the Kronecker delta. Intuitively, this corresponds to multiplying every entry in a signal by the square of the corresponding index in a centered manner (hence the term). (It will be interesting to contrast this with our development of the $\mathbf{U}$ matrix later on. We do not take for granted that $\mathbf{U}$ should be a simple reflection of the form of the continuous manifestation of the $\mathcal{U}$ operator, and indeed show that for a formulation satisfying complete structural symmetry, it should be chosen differently.)
Once $\mathbf{U}^2$ is defined, we have $\mathbf{D}^2 = \mathbf{F}^{-1} \mathbf{U}^2 \mathbf{F}$ by using the duality relation given in Eq. (5). Being the finite discrete manifestation of the abstract operator $\mathcal{F}$, the matrix $\mathbf{F}$ is the standard centered DFT matrix. Finally, for any scaling factor $M$, we can form $M^2 \mathbf{D}^2 + M^{-2}\, \mathbf{U}^2$, and find its eigenvectors $h_{k,M}$, after which we can easily complete the process. More on the implementation details of this approach can be found in the works of Pei et al.
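The following sketch illustrates the expansion-and-dilation idea in code. The $2\pi/N$ scaling of the diagonal $\mathbf{U}^2$ matrix and the handling of eigenvector signs are our own illustrative assumptions; consult Pei et al. for the exact construction, including how eigenvectors are ordered and sign-matched across different $M$:

```python
import numpy as np

def cddhf_basis(N, M):
    """Columns approximate dilated discrete Hermite functions h_{k,M},
    computed as eigenvectors of M^2 D^2 + M^-2 U^2 (eigenvalues ascending,
    matching the (2k+1)/(2 pi) ordering). N is assumed even here."""
    n = np.arange(N) - N // 2 + 0.5                    # centered indices
    U2 = np.diag(2 * np.pi / N * n**2)                 # assumed scaling of U^2
    F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
    D2 = F.conj().T @ U2 @ F                           # duality: D^2 = F^-1 U^2 F
    A = (M**2 * D2 + U2 / M**2).real  # imaginary part vanishes for this symmetric set
    w, V = np.linalg.eigh(A)
    return V

def pei_scale(x, M):
    """Expand x in the unscaled basis, keep the coefficients, dilate the basis."""
    H1, HM = cddhf_basis(len(x), 1.0), cddhf_basis(len(x), M)
    return HM @ (H1.T @ x)
```

For $M = 1$ this returns the signal unchanged, since the basis is orthonormal; for $M \neq 1$ a practical implementation must also fix the arbitrary signs of the eigenvectors consistently.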
4 Hyperdifferential Operator-Based Matrix Method
Our approach is based on requiring that all the discrete entities we define observe the same operational properties and relationships as they do in abstract operator form. Therefore, we will require the discrete manifestations of Eq. (5) and Eq. (14) to have the same structure, with the abstract operators being replaced by matrix operators. As a consequence, Eq. (5) will hold for finite difference and matrix versions of the $\mathcal{U}$ and $\mathcal{D}$ operators, and the matrix operator counterpart of $\mathcal{M}_M$ will be
$$\mathbf{M}_M = e^{-i\pi \ln M\, (\mathbf{U}\mathbf{D} + \mathbf{D}\mathbf{U})}.$$
Thus, to scale a function of a discrete variable, we need to write it as a column vector and multiply it with the scaling matrix $\mathbf{M}_M$. In order to obtain the scaling matrix, we need the first-order differentiation and coordinate multiplication matrices $\mathbf{D}$ and $\mathbf{U}$, and then compute the indicated matrix exponential. Therefore our first task is to obtain the $\mathbf{U}$ and $\mathbf{D}$ matrices.
For signals of discrete variables, the closest thing to differentiation is finite differencing. Consider the following definition:
$$\mathcal{D}_h f(u) = \frac{1}{i2\pi}\, \frac{f(u + h/2) - f(u - h/2)}{h}.$$
If $h \to 0$, then $\mathcal{D}_h \to \mathcal{D}$, since in this case the right-hand side approaches $\frac{1}{i2\pi}\, \frac{d f(u)}{du}$. Therefore, $\mathcal{D}_h$ can be interpreted as a finite difference operator.
Using the shift relation $f(u + h/2) = e^{i\pi h \mathcal{D}} f(u)$, the finite difference operator can also be expressed in hyperdifferential form as
$$\mathcal{D}_h = \frac{\sin(\pi h\, \mathcal{D})}{\pi h}.$$
Note that if we let $h \to 0$ in the last equation and take the limit, we can verify that $\mathcal{D}_h \to \mathcal{D}$ from here as well.
Now, we turn our attention to the task of defining $\mathbf{U}$. It is tempting to define the discrete version of the coordinate multiplication matrix by simply forming a diagonal matrix with the diagonal entries being equal to the coordinate values, with due adjustment for centering and discreteness, much as in Eq. (13). However, upon closer inspection we have decided that this could not be taken for granted. In order to obtain the most elegant and purest formulation possible, we must be sure to maintain the structural symmetry between $\mathcal{U}$ and $\mathcal{D}$ in all their manifestations. Therefore, we choose to define $\mathcal{U}_h$ such that it is related to $\mathcal{U}$ in exactly the same way as $\mathcal{D}_h$ is related to $\mathcal{D}$:
$$\mathcal{U}_h = \frac{\sin(\pi h\, \mathcal{U})}{\pi h},$$
from which we can observe that as $h \to 0$, we have $\mathcal{U}_h \to \mathcal{U}$, as should be. But beyond that, it is also possible to show that $\mathcal{U}_h$, when defined like this, satisfies the same duality expression Eq. (5) satisfied by $\mathcal{U}$ and $\mathcal{D}$:
$$\mathcal{U}_h = \mathcal{F}\, \mathcal{D}_h\, \mathcal{F}^{-1}.$$
To see this, substitute $\mathcal{D}_h = \sin(\pi h \mathcal{D})/(\pi h)$ in this equation:
$$\mathcal{F}\, \frac{\sin(\pi h\, \mathcal{D})}{\pi h}\, \mathcal{F}^{-1} = \frac{\sin(\pi h\, \mathcal{F} \mathcal{D} \mathcal{F}^{-1})}{\pi h} = \frac{\sin(\pi h\, \mathcal{U})}{\pi h} = \mathcal{U}_h,$$
where the first equality holds because $\mathcal{F} \mathcal{D}^j \mathcal{F}^{-1} = (\mathcal{F} \mathcal{D} \mathcal{F}^{-1})^j$ for every term of the power series of the sine.
When acting on a continuous signal $f(u)$, the operator $\mathcal{U}_h$ becomes
$$\mathcal{U}_h f(u) = \frac{\sin(\pi h u)}{\pi h}\, f(u).$$
We observe that the effect is not merely multiplying with the coordinate variable. Had we defined $\mathbf{U}$ such that it corresponds to multiplication with the coordinate variable, we would have destroyed the symmetry and duality between $\mathcal{U}$ and $\mathcal{D}$ in passing to the discrete world.
Now, by sampling Eq. (21), we can obtain the matrix operator $\mathbf{U}$ to act on finite discrete signals. The sample points will be taken as $u = nh$ with $h = 1/\sqrt{N}$, with the range of $n$ being determined by whether the number of samples is even or odd, and by whether we use the ordinary or centered sampling scheme, as explained in detail in Section 2. Finally, we are able to write the elements of the matrix $\mathbf{U}$:
$$U_{mn} = \delta_{mn}\, \frac{\sin(\pi h^2 n)}{\pi h} = \delta_{mn}\, \frac{\sqrt{N}}{\pi}\, \sin\!\left(\frac{\pi n}{N}\right),$$
since $h^2 = 1/N$.
The next step is to obtain the $\mathbf{D}$ matrix. To do so, first recall that Eq. 5 can also be written as
$$\mathcal{D} = \mathcal{F}^{-1}\, \mathcal{U}\, \mathcal{F}.$$
Since we want the finite discrete manifestations of these abstract operators to also exhibit the same structure, we write
$$\mathbf{D} = \mathbf{F}^{-1}\, \mathbf{U}\, \mathbf{F},$$
where $\mathbf{F}$ was defined in Eq. (2). Thus, we have now obtained discrete matrix forms $\mathbf{U}$ and $\mathbf{D}$ of the coordinate multiplication and differentiation operators, so we are finally in a position to calculate the discrete scaling operator defined in Eq. (15).
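Putting the pieces together, the sketch below builds $\mathbf{U}$ from Eq. (22), obtains $\mathbf{D}$ by duality, and forms the scaling matrix; since $\mathbf{U}\mathbf{D} + \mathbf{D}\mathbf{U}$ is Hermitian, the matrix exponential can be taken through its eigendecomposition. The exponent $-i\pi \ln M\, (\mathbf{U}\mathbf{D} + \mathbf{D}\mathbf{U})$ is our reading of the hyperdifferential form of the scaling operator:

```python
import numpy as np

def scaling_matrix(N, M, centered=True):
    """Discrete scaling matrix from the hyperdifferential construction."""
    n = np.arange(N) - N // 2 + (0.5 if centered else 0.0)
    F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)  # unitary DFT
    U = np.diag(np.sqrt(N) / np.pi * np.sin(np.pi * n / N))    # Eq. (22)
    D = F.conj().T @ U @ F                                     # D = F^-1 U F
    H = U @ D + D @ U                                          # Hermitian generator
    w, V = np.linalg.eigh(H)
    # exp(-i pi ln(M) H) computed via the eigendecomposition of H
    return (V * np.exp(-1j * np.pi * np.log(M) * w)) @ V.conj().T

S = scaling_matrix(16, 2.0)
assert np.allclose(S @ S.conj().T, np.eye(16))           # unitary, as desired
assert np.allclose(scaling_matrix(16, 1.0), np.eye(16))  # M = 1 changes nothing
```

Unitarity of the result follows automatically from the Hermiticity of the generator, which is one payoff of insisting on the structural symmetry between $\mathbf{U}$ and $\mathbf{D}$.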
Before we move on to numerical results and interpretations, several comments will be in order. First of all, it will be worth recapitulating what we did and why. As mentioned, it is tempting to define the discrete version of the coordinate multiplication matrix by simply forming a diagonal matrix with the diagonal entries being equal to the coordinate values. Then one could also have easily obtained the discrete version of the differentiation matrix by using duality, without having to go through the circuitous route we followed. However, due to the circulant structure of the finite/periodic lattice associated with the DFT, we suspected this may not be true and decided to begin with the differentiation matrix instead. The simplest way to define the finite difference operator would be, instead of Eq. (16),
$$\mathcal{D}'_h f(u) = \frac{1}{i2\pi}\, \frac{f(u + h) - f(u)}{h}.$$
However, when discretized, the corresponding differentiation matrix would have values of $-1/(i2\pi h)$ along the primary diagonal and values of $1/(i2\pi h)$ along the diagonal adjacent to the primary, leaving us with a matrix that is not symmetric. We rejected this option since it would clearly not give us a pure and elegant formulation, opting for Eq. (16) instead. However, this definition, while symmetric, did not allow us to immediately write a differentiation matrix, because it involved sample points in the middle of the sampling intervals, rather than the ends. Fortunately, the relationship Eq. (4) between $\mathcal{U}_h$ and $\mathcal{D}_h$ that we derived showed us the way to define $\mathbf{U}$. The operator $\mathcal{U}_h$ did not exhibit the same problem of involving sample points in the middle that $\mathcal{D}_h$ did, and could be discretized without difficulty. It was also symmetrical, as we desired it to be. Once we obtained the $\mathbf{U}$ matrix, it was possible to use duality to obtain the $\mathbf{D}$ matrix as well.
We believe that the presented way of defining the finite matrix forms of the coordinate multiplication and differentiation operators is the only way consistent with the circulant structure of the DFT and the dual nature of these operators.
5 Quantitative Discussion
In this section, we examine our formulation from a numerical perspective. We consider two different functions: a chirped pulse function, denoted by F1, and a trapezoidal function, denoted by F2. We considered three different values for the scale parameter $M$. Analytically-derived scaled versions of our two functions are taken as the comparison reference. We calculated normalized mean-square errors (MSE) between the following vectors: (i) Reference: samples of the continuously scaled functions; (ii) Discrete scaling: the product of the samples of the original function with the discrete scaling matrix. As explained in Section 2, results are calculated for both centered and ordinary sampling regimes. The number of samples $N$ is taken as 128, 256, and 512. Results are tabulated as percentages in Tables 1 and 2.
The results confirm that our approach to discrete scaling approximates continuous scaling reasonably well. If higher accuracies are needed, one can always increase $N$ for a better approximation, as the MSE decreases with increasing $N$. This is because large $N$ means larger extents in both the time and frequency domains, so that a smaller percentage of the signal is left outside of these extents. Moreover, as expected, the MSE values also depend on the signal that is being scaled. Recalling the information theoretic considerations given in Section 2, the accuracy obtained depends on what percentage of the signal energy is confined within the chosen extents in the time and frequency domains. For example, MSE values for F2 are relatively higher than those of F1. This is caused by the fact that its frequency domain content is spread over a relatively greater extent, leading to a greater percentage of its energy falling outside the chosen extents. As can be observed, use of either the centered or ordinary approaches gives similar results, as this choice does not make any essential difference with regards to accuracy.
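As an indication of how such an experiment can be set up, the sketch below applies the scaling matrix of Section 4 to a stand-in pulse (the exact F1 and F2 of the tables are not reproduced here; the Gaussian-envelope pulse, the value $M = 1.5$, and the exponent constant $-i\pi\ln M$ are our own illustrative assumptions) and compares against samples of the analytically scaled signal:

```python
import numpy as np

def scaling_matrix(N, M):
    """Centered-scheme discrete scaling matrix (see Section 4)."""
    n = np.arange(N) - N // 2 + 0.5
    F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
    U = np.diag(np.sqrt(N) / np.pi * np.sin(np.pi * n / N))
    D = F.conj().T @ U @ F
    w, V = np.linalg.eigh(U @ D + D @ U)
    return (V * np.exp(-1j * np.pi * np.log(M) * w)) @ V.conj().T

# Stand-in pulse: Gaussian envelope with a cosine modulation, scaled unitarily.
f = lambda t, M=1.0: np.exp(-np.pi * (t / M) ** 2) * np.cos(4 * np.pi * t / M) / np.sqrt(M)

N, M = 128, 1.5
t = (np.arange(N) - N // 2 + 0.5) / np.sqrt(N)  # centered sample points, h = 1/sqrt(N)
ref = f(t, M)                                   # samples of the continuously scaled pulse
out = scaling_matrix(N, M) @ f(t)               # discrete scaling of the samples
mse = np.sum(np.abs(out - ref) ** 2) / np.sum(np.abs(ref) ** 2)
print(f"normalized MSE = {100 * mse:.4f}%")
```

A small normalized MSE here reflects the fact that both the pulse and its Fourier transform are well confined within the $[-\sqrt{N}/2, \sqrt{N}/2]$ extents, in line with the discussion above.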
In this paper, a formulation for scaling of discrete-time signals based on hyperdifferential operator theory is proposed. For finite-length signals of a discrete variable, a unitary scaling matrix is obtained so that the scaled version can be obtained by a direct matrix multiplication. Given the vector holding the samples of the unscaled signal, this scaling matrix multiplies the input vector to obtain the samples of the scaled signal. We also discussed two different approaches to indexing the discrete signals, namely ordinary indexing and centered indexing. These indexing approaches are fully consistent with the well-known ordinary and centered discrete Fourier transform (DFT) definitions. Furthermore, the proposed formulation is mathematically elegant and pure, and uses self-consistent coordinate multiplication and differentiation operations. If needed, depending on the application, the accuracy of the resulting method can be improved by using coordinate multiplication and differentiation matrices that are obtained by brute-force numerical approximations to the continuous domain. However, in this paper our purpose was to demonstrate these matrices in their purest forms without any numerical approximation.
We believe that we have obtained an elegant and pure formulation of discrete scaling based on self-consistent definitions of coordinate multiplication and differentiation operators. Our approach is consistent with the circulant nature of the discrete Fourier transform and also provides numerically satisfactory results.
-  Stuart Clary and Dale H. Mugler. Shifted Fourier matrices and their tridiagonal commutors. SIAM Journal on Matrix Analysis and Applications, 24(3):809–821, 2003.
-  F. Alberto Grünbaum. The eigenvectors of the discrete Fourier transform: A version of the Hermite functions. Journal of Mathematical Analysis and Applications, 88(2):355–363, 1982.
-  D. H. Mugler. The centered discrete Fourier transform and a parallel implementation of the FFT. In 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1725–1728, May 2011.
-  D. H. Mugler, S. Clary, and Y. Wu. Discrete Hermite expansion of digital signals: Applications to ECG signals. In IEEE Signal Process. Soc. 10th DSP Workshop, pages 262–267, 2002.
-  M. Nazarathy and J. Shamir. Fourier optics described by operator algebra. J. Opt. Soc. Am., 70:150–159, 1980.
-  Alan V. Oppenheim and Ronald W. Schafer. Digital Signal Processing. Prentice-Hall, Inc., 1975.
-  Alan V. Oppenheim and Ronald W. Schafer. Discrete-time Signal Processing (3rd Ed.). Pearson Education Limited, 2013.
-  Alan V. Oppenheim, Alan S. Willsky, and S. Hamid Nawab. Signals & Systems (2nd Ed.). Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 1996.
-  H. M. Ozaktas, Z. Zalevsky, and M. A. Kutay. The Fractional Fourier Transform with Applications in Optics and Signal Processing. New York: Wiley, 2001.
-  Haldun M. Ozaktas, M. Alper Kutay, and Cagatay Candan. Transforms and Applications Handbook, chapter Fractional Fourier Transform, pages 14–1–14–28. CRC Press, Boca Raton, New York, NY, 2010.
-  S. C. Pei, J. J. Ding, W. L. Hsue, and K. W. Chang. Generalized commuting matrices and their eigenvectors for DFTs, offset DFTs, and other periodic operations. IEEE Transactions on Signal Processing, 56(8):3891–3904, 2008.
-  S. C. Pei and Y. Lai. Signal scaling by centered discrete dilated hermite functions. IEEE Trans. Signal Process., 60:498–503, 2012.
-  J. G. Vargas-Rubio and B. Santhanam. On the multiangle centered discrete fractional Fourier transform. IEEE Signal Process. Lett., 12(4):273–276, 2005.
-  K. B. Wolf. Integral Transforms in Science and Engineering (Chapter 9: Construction and properties of canonical transforms). New York: Plenum Press, 1979.
-  K. Yosida. Operational Calculus: A Theory of Hyperfunctions. Springer, New York, USA, 1984.