On the Decomposition of Multivariate Nonstationary Multicomponent Signals
Abstract
With their ability to handle an increased amount of information, multivariate and multichannel signals can be used to solve problems normally not solvable with signals obtained from a single source. One such problem is the decomposition of signals with several components whose domains of support significantly overlap in both the time and the frequency domain, including the joint time-frequency domain. Initially, we proposed a solution to this problem based on the Wigner distribution of multivariate signals, which requires the attenuation of the cross-terms. In this paper, an advanced solution based on an eigenvalue analysis of the multivariate signal autocorrelation matrix, followed by time-frequency concentration measure minimization, is presented. This analysis provides less restrictive conditions for the signal decomposition than in the case of the Wigner distribution. The algorithm for the component separation is based on the concentration measures of the time-frequency representations of the eigenvectors, which are linear combinations of the overlapping signal components. With an increased number of sensors/channels, robustness of the decomposition process to additive noise is also achieved. The theory is supported by numerical examples. The required channel dissimilarity is statistically investigated as well.
keywords:
Multivariate and multichannel noisy signals, time-frequency signal analysis, robust signal decomposition, concentration measure.
1 Introduction
It is well established that the use of conventional Fourier analysis for the characterization and processing of signals with time-varying spectra is quite limited [dekompozicija0, dekompozicija_dsp, dekompozicija_meco, dekompozicija, dekompozicija2, dekompozicijaLFM, dekompozicija3, dekompozicija4, knjiga, VKLJS1, LJSMDTTB, mandic]. During the last few decades, these constraints have inspired the development of various powerful algorithms and approaches within the time-frequency signal analysis framework [knjiga].
Traditional time-frequency analysis deals with univariate signals, frequently characterized through amplitude- and frequency-modulated oscillations [knjiga, mandic]. The short-time Fourier transform (STFT) and the Wigner distribution (WD) are commonly used time-frequency representations. In practice, signals are usually multicomponent, meaning that they can be represented as linear combinations of individual signals (components). Owing to its many desirable properties, the Wigner distribution has been the basis of many instantaneous frequency (IF) estimators, developed to capture and describe frequency oscillations [knjiga, VKLJS1, LJSMDTTB]. However, undesirable components, known as cross-terms, do appear in the Wigner distribution of multicomponent signals. With the intention of keeping the desirable properties of the STFT and the high concentration of the WD, the S-method was developed as an alternative time-frequency representation, balancing between the previous two [knjiga].
For an independent characterization, each signal (component) in a multicomponent signal should be separated from the others and individually analyzed [dekompozicija, dekompozicija2, dekompozicijaLFM, dekompozicija3]. Such decomposition of multicomponent signals into individual components is possible for univariate signals by means of the algorithm originally presented in [dekompozicija], which is based on the S-method. This type of decomposition is possible under the condition that the time-frequency supports of individual components do not overlap. In the univariate case, in general, it is not possible to separate overlapped signal components, except for some very specific and a priori known/assumed signal forms, such as linear frequency modulated signals, using the chirplet transform [dekompozicija_cirplet] or the Radon transform [dekompozicija_radon], or sinusoidally modulated signals, using the inverse Radon transform [iradon1, iradon2].
Recently, new perspectives for multicomponent signal decomposition have appeared in light of the multivariate signal paradigm [dekompozicija0]. Multivariate (multichannel) data have become widely available lately, as a result of new developments in sensor technology. With the aim of exploiting multichannel signal interdependence through a joint time-frequency analysis, concepts of modulated bivariate and trivariate data oscillations appeared first, followed by the generalization of the concept to an arbitrary number of channels [mandic, TSPMV, boashmv, bivariate]. The joint IF concept has been proposed in [TSPMV] as a characterization of multichannel data, obtained by capturing the combined frequency in all individual channels. The IF of a multivariate signal is defined as a weighted average of the IFs in all individual channels. Following the foundations of these basic time-frequency concepts for multichannel data, the synchrosqueezed transform has been reintroduced within the multivariate framework [mandic]. Furthermore, the wavelet ridge algorithm, a tool for the extraction of the local oscillatory dynamics of a multivariate signal, has also been defined for multivariate signals [TSPMV]. Within the multivariate framework, significant research has also been conducted with the aim of placing the empirical mode decomposition within the multivariate context [emd1–emd5]. Interestingly enough, this type of decomposition is possible only in the case of components which do not overlap in the time-frequency plane, even in the multivariate case.
The multivariate Wigner distribution has been the basis of the recently proposed approach for the decomposition of multivariate multicomponent signals [dekompozicija0]. Exploiting the significant reduction of undesirable cross-terms due to the multichannel signal nature, this method provides the possibility to efficiently extract components with overlapped supports in the time-frequency domain, something that was not, in general, possible for univariate signals using any known decomposition procedure. In particular, the autocorrelation matrix of the Wigner distribution is decomposed into eigenvectors. Using a steepest descent approach [dekompozicija0], they are linearly combined to form the extracted components. Besides the possibility to separate overlapped components, it has even been possible to apply the decomposition procedure to extract the IF of real-valued multichannel signals with amplitude variations proportional to phase variations [dekompozicija_dsp]. The influence of channel phase differences (in the bivariate case) is analyzed in [dekompozicija_meco].
In this paper, the decomposition procedure is performed starting directly from a realization of the signal autocorrelation matrix. This leads to less restrictive signal decomposition conditions, compared to the decomposition based on the multivariate Wigner distribution. It is shown that the eigenvectors of the analyzed matrix contain linear combinations of components overlapped in the time-frequency plane. These components are then extracted by minimizing the concentration of linear combinations of the eigenvectors. Numerical results verify the presented theory, with a special emphasis on the robustness in noisy conditions and its relation to the number of channels. Overlapped components appear in various signal processing applications, such as radar signal processing [dekompozicija0], multiple antenna systems [R1], and some biomedical signals, to mention but a few.
The paper is organized as follows. After a short overview of the background theory and basic definitions, Section 2 continues with a detailed analysis of multivariate multicomponent signals. In this section, attention is devoted to the eigenvectors of the signal autocorrelation matrix and their relations with the signal components. Section 3 presents the multivariate multicomponent signal decomposition approach, founded on the minimization of the concentration measure. Numerical results are given in Section 4, while the paper ends with concluding remarks.
2 Multivariate Multicomponent Signals
Discrete-time signals of the form
(1) 
obtained by measuring a complex-valued signal with multiple sensors, are known as complex multivariate signals. The amplitude and phase of the original signal are modified by each sensor. In the case of a real-valued measured signal, its analytic extension
is commonly used, with the real-valued measured signal in its real part and the Hilbert transform of that signal in its imaginary part. The analytic signal contains only nonnegative frequencies and the real-valued counterpart can be reconstructed from it. This form of signal is especially important in the instantaneous frequency interpretation within the time-frequency moments framework.
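As an aside, the analytic extension can be sketched numerically. The following minimal Python example (using only NumPy; the function name and test signal are illustrative, not from the paper) builds the analytic signal by keeping only non-negative frequencies in the DFT domain:

```python
import numpy as np

def analytic_signal(x):
    """Analytic extension of a real signal: zero out the negative
    frequencies and double the positive ones (an FFT-based sketch of
    the Hilbert-transform construction)."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0        # Nyquist bin kept with unit weight
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h)  # real part = x, imaginary part = Hilbert(x)

n = np.arange(128)
x = np.cos(2 * np.pi * 0.1 * n)
xa = analytic_signal(x)        # contains only non-negative frequencies
```

By construction, the real part of `xa` recovers the measured signal exactly, which is the reconstruction property mentioned above.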
Consider a multivariate signal obtained by sensing a monocomponent amplitude- and phase-modulated signal. The value of this signal measured at a given sensor can be written as
A real-valued form of this multivariate signal takes the corresponding real form in each channel. According to Bedrosian’s product theorem [BB], the complex analytic signal is a valid representation of the real amplitude-phase signal if the spectrum of the amplitude is nonzero only within a low-frequency range and the spectrum of the phase term occupies a nonoverlapping (much) higher frequency range. A signal is monocomponent if its amplitude is slow-varying as compared to its phase variations. The signal model with slow amplitude variations, as compared to the phase variations, has often been considered in the literature [gabor, man2, man3, man4, man5, man6, man7].
However, in general, for the case of multicomponent signals, the components are localized along more than one instantaneous frequency.
2.1 Multivariate and Multicomponent Signals
Consider a multicomponent discrete-time signal
(2) 
with components of the form
(3) 
where the component amplitudes have slow-varying dynamics as compared to the variations of the phases. Assume that the components are independent signals, i.e., that no component can be written as a linear combination of the other components (for all considered time instants). The corresponding multivariate signal is then given by
(4) 
The signal in each channel is obtained as a linear combination of the signal components multiplied by complex constants, to give
(5) 
We will introduce the notation
for the matrix that transforms the signal components into the measured signals.
Observation: The maximum number of independent channels , , in is
(6) 
The proof is evident since the transformation matrix in (4) is an matrix with .
Note that if the maximum number of independent channels , , is equal to the number of sensors , while if the maximum number of independent channels is equal to the number of components .
A matrix form of the previous relation between signals measured on sensors and signal components is
(7) 
or
where the first matrix contains the sensed signal values and the second contains the signal component samples.
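The matrix relation between the sensed signals and the components can be illustrated with a small NumPy sketch; the array names, dimensions, and random mixing are hypothetical choices for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64        # number of time samples
num_comp = 2  # number of signal components (illustrative size)
num_sens = 4  # number of sensors/channels (illustrative size)
# component samples, one component per row
S = rng.standard_normal((num_comp, N)) + 1j * rng.standard_normal((num_comp, N))
# complex mixing constants, one row of coefficients per sensor
A = rng.standard_normal((num_sens, num_comp)) + 1j * rng.standard_normal((num_sens, num_comp))
# sensed signals: each channel is a linear combination of the components
X = A @ S
```

Each row of `X` is one channel; for instance, channel 0 equals `A[0, 0] * S[0] + A[0, 1] * S[1]`, mirroring relation (7).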
The autocorrelation matrix of the sensed signal is defined by
(8) 
where denotes the Hermitian transpose. The elements of this matrix are
(9) 
where is the column vector of sensed values at a given instant .
The matrix can be used for the analysis and characterization of multicomponent multivariate signals. It is also the starting point of the decomposition algorithm for multicomponent signals presented in this paper.
Note that the sensed values are linear combinations of the signal components. Although the decomposition could be performed directly, based on the sensed signals, it would not be computationally efficient in the case, common in the analysis, when the number of channels is large. The efficiency is improved using the eigendecomposition of the autocorrelation matrix. Some properties of this decomposition, needed for the analysis of multicomponent signals, will be reviewed next.
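As a sketch of this step, the following example (with hypothetical component and mixing choices) forms the autocorrelation matrix of a two-component, four-channel signal as in (8)-(9) and checks that its rank equals the number of independent components:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 128
n = np.arange(N)
# two hypothetical components: a pure tone and a chirp
s1 = np.exp(1j * 2 * np.pi * 0.10 * n)
s2 = np.exp(1j * 2 * np.pi * (0.05 * n + 0.05 * n**2 / N))
S = np.vstack([s1, s2])
# random complex mixing into four channels (illustrative)
A = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
X = A @ S
# autocorrelation matrix: R(n1, n2) = sum over channels of x(n1) * conj(x(n2))
R = X.T @ X.conj()             # N x N Hermitian matrix
lam, U = np.linalg.eigh(R)     # eigenvalues ascending, eigenvectors in columns
lam = lam[::-1]                # descending order
# the number of significant eigenvalues equals the number of
# independent signal components mixed into the channels
significant = int(np.sum(lam > 1e-8 * lam[0]))
```

Only two eigenvalues are significant here, so the eigenvectors associated with them span the same space as the two components, which is the property used in the rest of the paper.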
2.2 Eigenvectors and Linear Combination of Vectors
The eigenvalue decomposition of a square matrix gives
(10) 
where are the eigenvalues and are the corresponding eigenvectors of . Matrix is a diagonal matrix with eigenvalues on the main diagonal whereas the matrix is formed from eigenvectors as . Note that the eigenvectors are orthonormal.
Remark 1: Consider a set of nonorthogonal vectors , . If a matrix is defined by
(11) 
then finding the eigenvectors of this matrix can be considered as the process of the orthogonalization of the space defined by vectors whose energies are .
Note that this particular form of matrix is obtained for multicomponent multivariate overlapping signals, since the elements of the matrix in (8) are calculated in exactly this form.
The previous remark will be illustrated for the cases of one vector, two vectors, and an arbitrary number of vectors.

If there is only one vector, then no orthogonalization is needed. In this case, the eigenvector of the matrix is the vector itself (up to normalization). This case appears exactly when the Wigner distribution is used for univariate signals. This property is used in the synthesis of signals with a given Wigner distribution.

For two vectors, the orthogonalization of the space they define is performed. In this case, the eigenvectors, as orthogonal vectors over this space, can be written as two linear combinations of the vectors that define the matrix in (11), that is
In order to prove this property, we will start from definition (11)
We assumed that the eigenvector is of the form . The eigenvector of matrix satisfies the relation . Since
where , we can obtain a system
From this system of equations we can find , , and , based on , , and with additional condition that . The same holds for .

This proof can be generalized for any .
From this relation and
with follows the system
From this system we may find values of and . Note that and .
Remark 2: Assume that
and that is not an independent vector, but a linear combination of and , then
reduces to
It means that a new dependent vector will not increase the dimensionality of the eigenvector space; it reduces to a linear combination of the independent vectors, with new coefficients.
Remark 3: If the vectors , , …, are linear combinations of another set of independent vectors , , …, then the eigenvectors as the linear combinations , , …, are also the linear combinations of , , …, . For , in the matrix form, for two vectors
Therefore, the eigenvectors are linear combinations of , , …, .
Remark 4: If the number of independent vectors , , …, is and , , …, , are their linear combinations with , then only vectors are linearly independent. This means that only eigenvectors can be formed in this basis.
2.3 Eigenvectors as Linear Combinations of the Signal Components
The previous remarks represent an analysis platform for our multivariate and multicomponent signal defined by (4). The vectors that form the matrix are formed as the following linear combinations of the signal component vectors
with the elements
The eigenvalue decomposition is then given by
(12) 
where the eigenvectors are linear combinations of the sensed signals, and these are in turn linear combinations of the signal components. In other words
(13) 
with .
Consider the case when the signal components overlap in the time-frequency plane. In this case, the decomposition into the individual components is not possible using state-of-the-art methods, except in cases of quite specific signal forms (such as linear frequency modulated signals, using the chirplet transform, Radon transform, or similar techniques [dekompozicija_radon, dekompozicija_cirplet], or sinusoidally modulated signals, using the inverse Radon transform [iradon1, iradon2]). In general, these kinds of signals cannot be separated into individual components in the univariate case. However, the multivariate form of signals offers the possibility of decomposing components which overlap in the time-frequency plane.
3 Decomposition Principle
We have concluded that the eigenvectors of the matrix are formed as linear combinations of the signal components in (13). Assume now that the number of sensors is not smaller than the number of components. Then there are at least as many independent linear relations as components. We may conclude that, in principle, each signal component can also be written as a linear combination of the eigenvectors
with unknown weights .
We will consider a signal with nonstationary components. Each component has a support in the time-frequency domain. For components with partial overlapping, both in time and frequency, the supports also partially overlap. The case of complete overlapping of two supports is excluded from this analysis. Assume a notation such that the components are ordered according to the areas of their supports.
The aim of this paper is to decompose the original signal, using the eigenvectors of the autocorrelation matrix, and to obtain the individual signal components by linearly combining these eigenvectors. To meet this aim, we will use time-frequency representations and the corresponding concentration measures. Since the form of the time-frequency representation is not crucial here, we will use the short-time Fourier transform (STFT),
(14) 
to measure the concentration of signals in the timefrequency domain, and the pseudo Wigner distribution
(15) 
to visualize the results, that is, for a high-resolution presentation of the initial signal, the eigenvectors, and the resulting signal components. Note that a window function of finite length is used in both (14) and (15).
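A minimal STFT, in the spirit of (14), can be sketched as follows; the window type, length, and unit hop are illustrative choices, not prescribed by the paper:

```python
import numpy as np

def stft(x, K=32):
    """Short-time Fourier transform with a Hann window of length K and
    unit time step (a minimal sketch; edge positions are omitted)."""
    w = np.hanning(K)
    N = len(x)
    return np.array([np.fft.fft(x[m:m + K] * w) for m in range(N - K + 1)])

n = np.arange(256)
x = np.exp(1j * 2 * np.pi * 0.25 * n)    # tone at normalized frequency 0.25
S = stft(x)
peak_bin = int(np.argmax(np.abs(S[0])))  # frequency index of the maximum
```

For a pure tone at normalized frequency 0.25 and a 32-sample window, the magnitude peaks at frequency bin 8 in every window, as expected.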
An ℓp-norm-based measure of the time-frequency concentration will be used. It was originally introduced in XXX as
(16)  
(17) 
where is the spectrogram.
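The behavior of such a measure can be illustrated with its norm-one variant. The following sketch (illustrative window and test signals, not from the paper) shows that a well-concentrated tone yields a smaller measure than spread-out noise:

```python
import numpy as np

def conc_measure(x, K=32):
    """Norm-one concentration measure of the spectrogram of an
    energy-normalized signal: smaller values indicate a better
    concentrated time-frequency representation (a sketch)."""
    x = x / np.linalg.norm(x)
    w = np.hanning(K)
    S = np.array([np.abs(np.fft.fft(x[m:m + K] * w))
                  for m in range(len(x) - K + 1)])
    return float(S.sum())

n = np.arange(256)
tone = np.exp(1j * 2 * np.pi * 0.1 * n)  # energy on a few TF bins
rng = np.random.default_rng(0)
noise = rng.standard_normal(256) + 1j * rng.standard_normal(256)  # spread out
```

Since the signal is energy-normalized before the measure is evaluated, the measure is scale-invariant and penalizes only the spread of the support.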
In theory, a direct way to solve the problem of decomposing the eigenvectors into the signal components would be to form a linear combination of the eigenvectors
(18) 
with varying coefficients, keeping the combined signal energy normalized, and to use the zero-norm as the concentration measure. This norm would produce the area of the support of the analyzed signal. If all signal components are present in the signal, then its zero-norm would produce the area of the union of their supports. By changing the coefficients, the minimum value of the concentration measure is achieved when the coefficients are matched to those of the best concentrated signal component, with the smallest support area
If any two smallest areas are equal, we will still find one of them. In practice, the norm-one of the STFT could be used to achieve robustness to noise
(19) 
Note that this minimization problem has several local minima, since the coefficients which correspond to any single signal component will also produce a local minimum of the concentration measure, equal to the area of the corresponding component support. In addition, any linear combination of signal components will also produce a local minimum, equal to the area of the union of the supports of the included signal components. Note that if the lowest local minima correspond to the individual components, then we can detect the coefficients for all signal components.
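The direct-search minimization described above can be sketched for two synthetic "eigenvectors" that mix a tone and a chirp; all signals, mixing coefficients, and the search grid below are hypothetical illustrations. Sweeping the combination coefficient and keeping the measure minimum recovers one component up to a scalar:

```python
import numpy as np

def conc_measure(x, K=32):
    """Norm-one measure of the spectrogram of the normalized signal."""
    x = x / np.linalg.norm(x)
    w = np.hanning(K)
    S = np.array([np.abs(np.fft.fft(x[m:m + K] * w))
                  for m in range(len(x) - K + 1)])
    return float(S.sum())

n = np.arange(256)
c1 = np.exp(1j * 2 * np.pi * 0.125 * n)                      # tone
c2 = np.exp(1j * 2 * np.pi * (0.05 * n + 0.2 * n**2 / 256))  # chirp crossing it
# "eigenvectors": two independent linear combinations of the components
v1 = c1 + 0.5 * c2
v2 = 0.7 * c1 - 1.0 * c2
# direct search over the combination coefficient
betas = np.linspace(-3, 3, 121)
best = min(betas, key=lambda b: conc_measure(v1 + b * v2))
y = v1 + best * v2   # close to a scalar multiple of the tone component
```

At the minimizing coefficient, the chirp contribution cancels and the best concentrated component (the tone) remains, mirroring how a local minimum of the measure flags one component.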
As several local minima exist, the multicomponent decomposition should be performed iteratively. Initially, the matrix with elements (9) is calculated as in (8). Its eigendecomposition produces the eigenvectors, and based on them, a signal
is formed, with weighting coefficients , which are varied to solve the minimization problem (19). The STFT in (19) is calculated for the normalized signal . Here, we can assume that the minimization (19) is performed by the direct search over the parameter space.
Upon finding the concentration measure minimum, the eigenvector is replaced with the signal formed using the weighting coefficients corresponding to the minimum of the concentration measure (19). Then, this signal is removed from the remaining eigenvectors, by removing its projection onto these eigenvectors. In other words, the eigenvectors are modified as follows:
(20) 
in order to ensure that it is not detected again.
This procedure is iterated for each remaining eigenvector. This means that in each iteration, based on the eigenvectors modified in the previous iteration, a new signal
(21) 
is formed. The weighting coefficients are varied, to find the new set which minimizes the concentration measure
of the spectrogram calculated for the normalized current signal. The corresponding eigenvector is replaced by this signal, while signal deflation [SDP] is performed by subtracting the projection of the detected component from the remaining eigenvectors:
(22) 
The vectors are sorted according to their concentration measure after each iteration. The iterative procedure is stopped when there are no more updates of the vectors.
The search in the space of parameters, in order to minimize the measure, can be performed directly, which is numerically inefficient, or by using more sophisticated methods, such as the iterative gradient minimization procedure presented in [dekompozicija0]. Other global optimization methods, including heuristic algorithms such as ant colony optimization [ANT], genetic algorithms, hill climbing [HC], and simulated annealing [SA], as well as some deterministic [Deter] or stochastic [SPST, SPST1] procedures, can also be used for the concentration measure minimization. However, this is out of the scope of this paper.
3.1 Specific Cases
When the components do not overlap in the time-frequency plane, they are orthogonal. If the number of sensors is greater than or equal to the number of components, then the components are equal to the eigenvectors (up to their amplitudes) and the decomposition directly follows. In terms of the previous equations, this means that the eigenvectors can be used directly as the components.
This problem can be solved even if a single signal channel is available, by using a time-frequency representation of the signal which produces the cross-term-free Wigner distribution, namely the S-method [dekompozicija].
The case of combined overlapping and nonoverlapping components can be solved with a sufficient number of sensors, as shown in [dekompozicija0].
4 Numerical Examples
This section supports the theory through numerical examples. In Examples 1–3, a real-valued discrete-time bivariate signal with overlapping components is considered with various noise amounts (variances). This set of examples confirms that, in order to perform an efficient decomposition in noisy cases, the number of channels should be increased compared with the noiseless scenario. In Examples 4–5, a very complex signal of nine overlapping components is considered, corrupted by noise at two different levels. The analysis is concluded with a statistical test which illustrates how the ability to separate the components depends on the noise variance and the number of channels.
Example 1: Consider a discrete-time bivariate signal. The minimum required number of sensors for this signal is used. The signal in each channel is of the form
(23) 
for and , as shown in Fig. 1. The components of this signal are
(24) 
The time-frequency representation of this signal with two very close components is shown in Fig. 1(a). The eigenvalues of the autocorrelation matrix indicate that there are two signal components, as shown in Fig. 1(b). The two eigenvectors corresponding to the largest eigenvalues are presented in Fig. 1(c)-(d). These two eigenvectors are decomposed into the two signal components with minimum concentration measures, as described in the previous section. The decomposition results are shown in Fig. 1(e)-(f), and they fully correspond to the time-frequency representations of the individual signal components in (24).
Example 2: The signal from Example 1 is corrupted by a moderate level of additive noise. Here, we were not able to reconstruct the signal with the minimum number of sensors. To achieve a stable reconstruction, the number of sensors is increased. The time-frequency representation of the original noisy signal, the eigenvalues, the time-frequency representations of the eigenvectors, and the time-frequency representations of the obtained signal components are shown in Fig. 2.
Example 3: In this case, the noise intensity is increased to the signal level. To achieve robustness of the results, the number of sensors had to be increased further. The time-frequency representation of the noisy signal, along with the eigenvalues, the time-frequency representations of the eigenvectors, and the time-frequency representations of the signal components, are presented in Fig. 3.
Example 4: In this example, a signal with a large number of overlapped components is considered. Since a small amount of noise is added and the measured signal phases are random, the signal is reconstructed with a small margin in the number of sensors over the minimum required for a successful decomposition. From the time-frequency representations of the components, presented in Fig. 4(a)-(b), we can see that the component overlapping is significant. The components cannot be recognized either from the Wigner distribution or from the spectrogram with an adjusted window. The eigenvalues of the autocorrelation matrix are shown in Fig. 4(c). The time-frequency representations of the strongest eigenvectors are presented in Fig. 5. Using these eigenvectors, the signal is decomposed into its components, as shown in Fig. 6.
Example 5: A noisy signal, as in Example 4, with nine components is analyzed here. In addition to the random phase differences in each sensor, a random amplitude change is assumed as well. The coefficients in (4) are here used in a form where one random variable modifies the amplitude and another, uniformly distributed, random variable modifies the phase. The decomposition results are presented in the same way as in the previous figures. The time-frequency representations obtained using the Wigner distribution and the spectrogram are given in Fig. 7, along with the eigenvalues of the autocorrelation matrix. The time-frequency representations of the 9 strongest eigenvectors are shown in Fig. 8. The linear combinations of the eigenvectors are formed according to the presented algorithm, and the final results for the signal components can be seen in Fig. 9.
Finally, a statistical test is run for the noisy multicomponent signal from the last example. The eigenvalues are calculated in random realizations and presented in Fig. 10 for two values of the additive noise variance. The ability to clearly separate the strongest eigenvectors, corresponding to the linear combinations of the signal components, from the background noise is a good indicator of when the presented algorithm can be successfully applied. For the lower noise variance, a smaller number of sensors is sufficient, while the same separation gap for the higher variance is obtained only with more sensors. This indicator is verified against the reconstruction check for these scenarios.
5 Conclusion
This work presents new contributions to the most challenging topic in multicomponent signal decomposition: the case of components whose supports overlap in the time-frequency plane. The decomposition concepts have been investigated starting directly from the autocorrelation matrix of the input signal, whose eigenvectors can be linearly combined to form the individual signal components. The decomposition procedure based on the presented theory has been evaluated through several numerical examples, which verify the presented theory and the decomposition efficacy. For rigor, the robustness of the procedure against the influence of additive noise has been studied from the perspective of the degrees of freedom, that is, the number of sensors (channels) required to achieve a stable separation of the signal components.
References
 (1) L. Stanković, D. Mandic, M. Daković, and M. Brajović, “Time-frequency decomposition of multivariate multicomponent signals,” Signal Processing, vol. 142, January 2018, pp. 468–479, http://dx.doi.org/10.1016/j.sigpro.2017.08.001
 (2) L. Stanković, M. Brajović, M. Daković, and D. Mandic, “Two-component Bivariate Signal Decomposition Based on Time-Frequency Analysis,” 22nd International Conference on Digital Signal Processing, IEEE DSP 2017, August 23–25, London, United Kingdom
 (3) M. Brajović, L. Stanković, M. Daković, and D. Mandic, “Additive Noise Influence on the Bivariate Two-Component Signal Decomposition,” 7th Mediterranean Conference on Embedded Computing, MECO 2018, Budva, Montenegro, June 2018.
 (4) L. Stanković, T. Thayaparan, and M. Daković, “Signal Decomposition by Using the S-Method with Application to the Analysis of HF Radar Signals in Sea-Clutter,” IEEE Transactions on Signal Processing, vol. 54, no. 11, Nov. 2006, pp. 4332–4342
 (5) Yinsheng Wei, Shanshan Tan, “Signal decomposition by the S-method with general window functions,” Signal Processing, vol. 92, no. 1, January 2012, pp. 288–293.
 (6) Yang Yang, Xingjian Dong, Zhike Peng, Wenming Zhang, and Guang Meng, “Component extraction for non-stationary multi-component signal using parameterized de-chirping and band-pass filter,” IEEE Signal Processing Letters, vol. 22, no. 9, 2015, pp. 1373–1377.
 (7) Y. Wang and Y. Jiang, “ISAR Imaging of Maneuvering Target Based on the L-Class of Fourth-Order Complex-Lag PWVD,” IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 3, pp. 1518–1527, March 2010.
 (8) I. Orović, S. Stanković, and A. Draganić, “Time-Frequency Analysis and Singular Value Decomposition Applied to the Highly Multicomponent Musical Signals,” Acta Acustica united with Acustica, vol. 100, no. 1, 2014.
 (9) L. Stanković, M. Daković, T. Thayaparan, Time-Frequency Signal Analysis with Applications, Artech House, Mar. 2013
 (10) V. Katkovnik, L. Stanković, “Instantaneous frequency estimation using the Wigner distribution with varying and data driven window length,” IEEE Transactions on Signal Processing, vol. 46, no. 9, Sep. 1998, pp. 2315–2325.
 (11) V. N. Ivanović, M. Daković, L. Stanković, “Performance of Quadratic Time-Frequency Distributions as Instantaneous Frequency Estimators,” IEEE Transactions on Signal Processing, vol. 51, no. 1, Jan. 2003, pp. 77–89
 (12) A. Ahrabian, D. Looney, L. Stanković, and D. Mandic, “Synchrosqueezing-Based Time-Frequency Analysis of Multivariate Data,” Signal Processing, vol. 106, January 2015, pp. 331–341.
 (13) G. Lopez-Risueno, J. Grajal and O. Yeste-Ojeda, “Atomic decomposition-based radar complex signal interception,” IEE Proceedings - Radar, Sonar and Navigation, vol. 150, no. 4, pp. 323–331, 1 Aug. 2003.
 (14) J. C. Wood and D. T. Barry, “Radon transformation of time-frequency distributions for analysis of multicomponent signals,” IEEE Transactions on Signal Processing, vol. 42, no. 11, pp. 3166–3177, Nov. 1994.
 (15) L. Stanković, M. Daković, T. Thayaparan, and V. Popović-Bugarin, “Inverse Radon Transform Based Micro-Doppler Analysis from a Reduced Set of Observations,” IEEE Transactions on AES, vol. 51, no. 2, pp. 1155–1169, April 2015
 (16) M. Daković and L. Stanković, “Estimation of sinusoidally modulated signal parameters based on the inverse Radon transform,” ISPA 2013, Trieste, Italy, 4–6 September 2013, pp. 302–307
 (17) J. M. Lilly and S. C. Olhede, “Analysis of Modulated Multivariate Oscillations,” IEEE Transactions on Signal Processing, vol. 60, no. 2, pp. 600–612, Feb. 2012.
 (18) A. Omidvarnia, B. Boashash, G. Azemi, P. Colditz and S. Vanhatalo, “Generalised phase synchrony within multivariate signals: An emerging concept in time-frequency analysis,” IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3417–3420, Kyoto, 2012
 (19) J. M. Lilly and S. C. Olhede, “Bivariate Instantaneous Frequency and Bandwidth,” IEEE Transactions on Signal Processing, vol. 58, no. 2, pp. 591–603, Feb. 2010.
 (20) B. Boashash, “Estimating and interpreting the instantaneous frequency of a signal. I. Fundamentals,” Proceedings of the IEEE, vol. 80, no. 4, pp. 520–538, Apr. 1992. doi: 10.1109/5.135376
 (21) D. P. Mandic, N. u. Rehman, Z. Wu, N. E. Huang, “Empirical Mode Decomposition-Based Time-Frequency Analysis of Multivariate Signals: The Power of Adaptive Data Analysis,” IEEE Signal Processing Magazine, vol. 30, pp. 74–86, Nov. 2013.
 (22) S. M. U. Abdullah, N. u. Rehman, M. M. Khan, D. P. Mandic, “A Multivariate Empirical Mode Decomposition Based Approach to Pansharpening,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 7, pp. 3974–3984, July 2015.
 (23) A. Hemakom, A. Ahrabian, D. Looney, N. u. Rehman, D. P. Mandic, “Nonuniformly sampled trivariate empirical mode decomposition,” IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2015), South Brisbane, QLD, 2015, pp. 3691–3695.
 (24) G. Wang, C. Teng, K. Li, Z. Zhang and X. Yan, “The Removal of EOG Artifacts From EEG Signals Using Independent Component Analysis and Multivariate Empirical Mode Decomposition,” IEEE Journal of Biomedical and Health Informatics, vol. 20, no. 5, pp. 1301–1308, Sept. 2016.
 (25) S. Tavildar and A. Ashrafi, “Application of multivariate empirical mode decomposition and canonical correlation analysis for EEG motion artifact removal,” 2016 Conference on Advances in Signal Processing (CASP), Pune, 2016, pp. 150–154.
 (26) Y. Zhang, M. G. Amin, B. A. Obeidat, “Polarimetric Array Processing for Nonstationary Signals,” in Adaptive Antenna Arrays: Trends and Applications, edited by S. Chandran, Springer, 2004, pp. 205–218.
 (27) D. Gabor, “Theory of communication. Part 1: The analysis of information,” Journal of the Institution of Electrical Engineers - Part III: Radio and Communication Engineering, vol. 93, no. 26, pp. 429–441, 1946.
 (28) D. Wei and A. C. Bovik, “On the instantaneous frequencies of multicomponent AM-FM signals,” IEEE Signal Processing Letters, vol. 5, no. 4, pp. 84–86, April 1998.
 (29) J. Brown, “Analytic signals and product theorems for Hilbert transforms,” IEEE Transactions on Circuits and Systems, vol. 21, no. 6, pp. 790–792, November 1974.
 (30) D. Vakman, L. A. Vaĭshteĭn, “Amplitude, phase, frequency—fundamental concepts of oscillation theory,” Soviet Physics Uspekhi, vol. 20, no. 12, pp. 1002–1016, 1977
 (31) A. H. Nuttall and E. Bedrosian, “On the quadrature approximation to the Hilbert transform of modulated signals,” Proceedings of the IEEE, vol. 54, no. 10, pp. 1458–1459, Oct. 1966.
 (32) A. W. Rihaczek and E. Bedrosian, “Hilbert transforms and the complex representation of real signals,” Proceedings of the IEEE, vol. 54, no. 3, pp. 434–435, March 1966.
 (33) B. Picinbono, “On instantaneous amplitude and phase of signals,” IEEE Transactions on Signal Processing, vol. 45, no. 3, pp. 552–560, March 1997.
 (34) L. Stanković, “A measure of some time-frequency distributions concentration,” Signal Processing, vol. 81, 2001, pp. 621–631.
 (35) A. Cichocki, S. Amari, Adaptive Blind Signal and Image Processing: Learning Algorithms and Applications, vol. 1, John Wiley and Sons, 2002, pp. 191–193.
 (36) W. Hu, K. Wu, P. P. Shum, N. I. Zheludev, C. Soci, “All-Optical Implementation of the Ant Colony Optimization Algorithm,” Scientific Reports, vol. 6, 2016, doi: 10.1038/srep26283.
 (37) R. Chelouaha, P. Siarry, “Genetic and Nelder-Mead algorithms hybridized for a more accurate global optimization of continuous multiminima functions,” European Journal of Operational Research, vol. 148, no. 2, 16 July 2003, pp. 335–348.
 (38) S. Kirkpatrick, C. D. Gelatt, M. P. Vecchi, “Optimization by Simulated Annealing,” Science, 13 May 1983, vol. 220, no. 4598, pp. 671–680, doi: 10.1126/science.220.4598.671
 (39) A. Neumaier, “Complete Search in Continuous Global Optimization and Constraint Satisfaction,” Acta Numerica, vol. 13, no. 1, 2004
 (40) J. C. Spall, Introduction to Stochastic Search and Optimization, Wiley, ISBN 0471330523.
 (41) J. Larson, S. M. Wild, “A batch, derivative-free algorithm for finding multiple local minima,” Optimization and Engineering, March 2016, vol. 17, no. 1, pp. 205–228, doi:10.1007/s1108101592897