An efficient methodology to estimate the parameters of a two-dimensional chirp signal model
Abstract
In various areas of statistical signal processing, two-dimensional (2D) chirp models have received considerable attention, particularly in image processing for modelling grayscale and texture images, magnetic resonance imaging, optical imaging, etc. In this paper, we address the problem of estimating the unknown parameters of a 2D chirp model under the assumption that the errors are independently and identically distributed (i.i.d.). The key attribute of the proposed estimation procedure is that it is computationally more efficient than the least squares estimation method. Moreover, the proposed estimators are observed to have the same asymptotic properties as the least squares estimators, thus providing computational effectiveness without any compromise in the efficiency of the estimators. We extend the proposed estimation method to a sequential procedure for estimating the unknown parameters of a 2D chirp model with multiple components, and under the assumption of i.i.d. errors we study the large sample properties of these sequential estimators. Simulation studies and a synthetic data analysis show that the proposed estimators perform satisfactorily.
1 Introduction
A two-dimensional (2D) chirp model has the following mathematical expression:
(1)  y(m, n) = Σ_{k=1}^{p} [A_k cos(α_k m + β_k m² + γ_k n + δ_k n²) + B_k sin(α_k m + β_k m² + γ_k n + δ_k n²)] + X(m, n),  m = 1, …, M; n = 1, …, N.
Here, y(m, n) is the observed signal data, and the parameters A_k's, B_k's are the amplitudes, α_k's, γ_k's are the frequencies and β_k's, δ_k's are the frequency rates. The random component X(m, n) accounts for the noise component of the observed signal. In this paper, we assume that X(m, n) is an independently and identically distributed (i.i.d.) random field.
It can be seen that the model admits a decomposition into two components: the deterministic component and the random component. The deterministic component represents a grayscale texture, and the random component makes the model more realistic for practical purposes. For illustration, we simulate data with a fixed set of model parameters. Figure 1 shows the grayscale texture corresponding to the simulated data without the noise component, and Figure 2 shows the contaminated texture image corresponding to the simulated data with the noise component. This clearly suggests that 2D chirp signal models can be used effectively in modelling and analysing black and white texture images.
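As a quick illustration of this decomposition, the following sketch simulates a one-component version of the model in NumPy; the parameter values, noise level and function name are our own illustrative choices, not those used for the figures.

```python
import numpy as np

def chirp_2d(M, N, A=1.5, B=0.5, alpha=1.0, beta=0.05, gamma=1.2, delta=0.03,
             sigma=0.0, rng=None):
    """Return the M x N surface A cos(phase) + B sin(phase) + noise, where
    phase = alpha*m + beta*m^2 + gamma*n + delta*n^2 (illustrative parameters)."""
    m = np.arange(1, M + 1)[:, None]   # row index
    n = np.arange(1, N + 1)[None, :]   # column index
    phase = alpha * m + beta * m**2 + gamma * n + delta * n**2
    y = A * np.cos(phase) + B * np.sin(phase)
    if sigma > 0:
        rng = np.random.default_rng(0) if rng is None else rng
        y = y + rng.normal(0.0, sigma, size=(M, N))
    return y

clean = chirp_2d(50, 50)             # the deterministic grayscale texture
noisy = chirp_2d(50, 50, sigma=0.5)  # the contaminated texture
```

Displaying `clean` and `noisy` as images (e.g. with any image viewer) reproduces the qualitative contrast between the two figures described above.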
Apart from the applications in image analysis, these signals are commonly observed in mobile telecommunications, surveillance systems, radar, sonar, etc. For more details on the applications, one may see the works of Francos and Friedlander [6], [7], Simeunović and Djurović [14] and Zhang et al. [8] and the references cited therein.
Parameter estimation of a 2D chirp signal is an important statistical signal processing problem. Recently, Zhang et al. [8], Lahiri et al. [13] and Grover et al. [18] proposed some estimation methods of note. For instance, Zhang et al. [8] proposed an algorithm based on the product cubic phase function for the estimation of the frequency rates of 2D chirp signals under low signal to noise ratio and the assumption of stationary errors. They conducted simulations to verify the performance of the proposed estimation algorithm; however, there was no study of the theoretical properties of the proposed estimators. Lahiri et al. [13] suggested the least squares estimation method. They observed that the least squares estimators (LSEs) of the unknown parameters of this model are strongly consistent and asymptotically normally distributed under the assumption of stationary additive errors. The rates of convergence of the amplitude estimates were observed to be M^{-1/2}N^{-1/2}; of the frequency estimates, they are M^{-3/2}N^{-1/2} and M^{-1/2}N^{-3/2}; and of the frequency rate estimates, they are M^{-5/2}N^{-1/2} and M^{-1/2}N^{-5/2}. Grover et al. [18] proposed the approximate least squares estimators (ALSEs), obtained by maximising a periodogram-type function, and under the same stationary error assumptions they observed that the ALSEs are strongly consistent and asymptotically equivalent to the LSEs.
A chirp signal is a particular case of the polynomial phase signal, obtained when the phase is a quadratic polynomial. Although work on parameter estimation of the aforementioned 2D chirp model is rather limited, several authors have considered the more generalised version of this model: the 2D polynomial phase signal model. For references, see Djurović et al. [10], Djurović [16], Francos and Friedlander [6, 7], Friedlander and Francos [5], Lahiri and Kundu [15], Simeunović et al. [12], Simeunović and Djurović [14] and Djurović and Simeunović [19].
In this paper, we address the problem of parameter estimation of a one-component 2D chirp model as well as the more general multiple-component 2D chirp model. We put forward two methods for this purpose. The key characteristic of the proposed estimation method is that it reduces the foregoing 2D chirp model to two 1D chirp models. Thus, instead of fitting a 2D chirp model, we are required to fit two 1D chirp models to the given data matrix. For the fitting, we use a simple modification of the least squares estimation method. The proposed algorithm is numerically more efficient than the usual least squares estimation method proposed by Lahiri et al. [13]. For instance, for a one-component 2D chirp model, we need to solve two 2D optimisation problems as opposed to one 4D optimisation problem in the case of finding the LSEs. This also curtails the number of grid points required to find the initial values of the nonlinear parameters, as the 4D grid search required for the computation of the usual LSEs or ALSEs reduces to two 2D grid searches, which is much more feasible to execute computationally. In essence, the contributions of this paper are threefold:

We put forward a computationally efficient algorithm for the estimation of the unknown parameters of 2D chirp signal models as a practical alternative to the usual least squares estimation method.

We examine the asymptotic properties of the proposed estimators under the assumption of i.i.d. errors and observe that the proposed estimators are strongly consistent and asymptotically normally distributed. In fact, they are observed to be asymptotically equivalent to the corresponding LSEs. When the errors are assumed to be Gaussian, the asymptotic variance-covariance matrix of the proposed estimators coincides with the asymptotic Cramér-Rao lower bound.

We conduct simulation experiments and analyse a synthetic texture (see Figure 2) to assess the effectiveness of the proposed estimators.
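To make the computational saving claimed above concrete, here is a back-of-the-envelope count of grid evaluations, assuming for illustration that each of the four nonlinear parameters is searched over K candidate values (K is a hypothetical grid size, not a quantity fixed by the paper):

```python
# Hypothetical grid size per nonlinear parameter; illustrative only.
K = 100

# LSE/ALSE initial values: one joint 4D grid over (alpha, beta, gamma, delta).
cells_4d = K ** 4

# Proposed method: one 2D grid over (alpha, beta) plus one over (gamma, delta).
cells_two_2d = 2 * K ** 2

print(cells_4d, cells_two_2d)    # 100000000 vs 20000
print(cells_4d // cells_two_2d)  # reduction factor K**2 / 2 = 5000
```

The reduction factor grows quadratically in the grid resolution, which is what makes the two-stage search feasible for fine grids.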
The rest of the paper is organised as follows. In the next section, we provide some preliminary results required to study the asymptotic properties of the proposed estimators. In Section 3, we consider a one-component 2D chirp model, state the model assumptions and some notation, and present the proposed algorithms along with the asymptotic properties of the proposed estimators. In Section 4, we extend the algorithm and develop a sequential procedure to estimate the parameters of a multiple-component 2D chirp model. We also study the asymptotic properties of the proposed sequential estimators in this section. We perform numerical experiments for different model parameters in Section 5.1 and analyse synthetic data for illustration in Section 5.2. Finally, we conclude the paper in Section 6, and we provide the proofs of all the theoretical claims in the appendices.
2 Preliminary Results
In this section, we provide the asymptotic results obtained for the usual LSEs of the unknown parameters of a 1D chirp model by Lahiri et al. [13]. These results are later exploited to prove the asymptotic normality of the proposed estimators.
2.1 One-Component 1D Chirp Model
Consider a 1D chirp model with the following mathematical expression:
(2)  y(t) = A cos(αt + βt²) + B sin(αt + βt²) + X(t),  t = 1, …, n.
Here, y(t) is the data observed at time points t = 1, …, n; A, B are the amplitudes, α is the frequency and β is the frequency rate parameter. {X(t)} is the sequence of error random variables.
The LSEs of α and β can be obtained by minimising the following reduced error sum of squares:

R(α, β) = Y⊤(I − P_{Z(α, β)})Y,

where Q(A, B, α, β) = Σ_{t=1}^{n} (y(t) − A cos(αt + βt²) − B sin(αt + βt²))² is the error sum of squares, P_{Z(α, β)} = Z(α, β)(Z(α, β)⊤Z(α, β))^{−1}Z(α, β)⊤ is the projection matrix on the column space of the matrix Z(α, β),

(3)  Z(α, β) is the n × 2 matrix whose t-th row is (cos(αt + βt²), sin(αt + βt²)),

Y = (y(1), …, y(n))⊤ is the observed data vector and (A, B)⊤ is the vector of linear parameters.
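A minimal numerical sketch of this reduced error sum of squares, assuming the standard 1D chirp design matrix with columns cos(αt + βt²) and sin(αt + βt²); the function and variable names are our own:

```python
import numpy as np

def reduced_ess(y, alpha, beta):
    """R(alpha, beta) = Y'(I - P_Z)Y: the residual sum of squares after
    projecting the data on the column space of Z(alpha, beta)."""
    t = np.arange(1, len(y) + 1)
    phase = alpha * t + beta * t**2
    Z = np.column_stack([np.cos(phase), np.sin(phase)])  # n x 2 design matrix
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)         # separable LS for (A, B)
    resid = y - Z @ coef
    return float(resid @ resid)

# Noiseless data generated at (alpha0, beta0): R vanishes at the truth and
# is larger away from it.
t = np.arange(1, 101)
alpha0, beta0 = 1.2, 0.015
y = 2.0 * np.cos(alpha0 * t + beta0 * t**2) + 1.0 * np.sin(alpha0 * t + beta0 * t**2)
print(reduced_ess(y, alpha0, beta0))        # ~0
print(reduced_ess(y, alpha0 + 0.3, beta0))  # strictly positive
```

Minimising `reduced_ess` over (α, β) and then reading off the fitted coefficients is exactly the separable least squares idea used throughout the paper.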
Following are the assumptions we make on the error component and the parameters of model (2):
Assumption P1.
{X(t)} is a sequence of i.i.d. random variables with mean zero, variance σ² and finite fourth order moment.
Assumption P2.
is an interior point of the parameter space where is a positive real number and
Theorem P1.
Proof.
This proof follows from Theorem 2 of Lahiri et al. [13].
∎
2.2 Multiple-Component 1D Chirp Model
Now we consider a 1D chirp model with multiple components, mathematically expressed as follows:

y(t) = Σ_{k=1}^{p} [A_k cos(α_k t + β_k t²) + B_k sin(α_k t + β_k t²)] + X(t),  t = 1, …, n.

Here, A_k's, B_k's are the amplitudes, α_k's are the frequencies and β_k's are the frequency rates, the parameters that characterise the observed signal, and X(t) is the random noise component.
Lahiri et al. [13] suggested a sequential procedure to estimate the unknown parameters of the above model. We briefly discuss this sequential procedure and then state some of the asymptotic results they established that are germane to our work.

Step 1: The first step of the sequential method is to estimate the nonlinear parameters of the first component of the model, α₁ and β₁, by minimising the reduced error sum of squares R(α, β), described above, with respect to α and β simultaneously.

Step 2: The first-component linear parameter estimates, those of A₁ and B₁, are then obtained using the separable linear regression technique of Richards [1].

Step 3: Once we have the estimates of the first component parameters, we take out its effect from the original signal and obtain a new data vector by subtracting the fitted first component from y(t).

Step 4: The estimates of the second component parameters are then obtained by using the new data vector and following the same procedure, and the process is repeated until the parameters of all the components are estimated.
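The four steps can be sketched as follows for a noiseless two-component 1D chirp, using a coarse grid search in place of a full numerical minimisation; the grids, parameter values and helper names are our own illustrative choices:

```python
import numpy as np

def design(t, alpha, beta):
    phase = alpha * t + beta * t**2
    return np.column_stack([np.cos(phase), np.sin(phase)])

def fit_one_component(y, alpha_grid, beta_grid):
    """Steps 1-2: grid-minimise the reduced ESS, then separable LS for (A, B)."""
    t = np.arange(1, len(y) + 1)
    best = None
    for a in alpha_grid:
        for b in beta_grid:
            Z = design(t, a, b)
            coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
            r = y - Z @ coef
            ess = float(r @ r)
            if best is None or ess < best[0]:
                best = (ess, a, b, coef)
    _, a, b, coef = best
    return a, b, coef

def sequential_fit(y, p, alpha_grid, beta_grid):
    """Steps 3-4: subtract each fitted component and repeat p times."""
    t = np.arange(1, len(y) + 1)
    estimates, resid = [], y.astype(float).copy()
    for _ in range(p):
        a, b, coef = fit_one_component(resid, alpha_grid, beta_grid)
        estimates.append((a, b, coef))
        resid = resid - design(t, a, b) @ coef  # remove the fitted component
    return estimates

# Two components with the stronger one first; the frequencies lie on the grid.
t = np.arange(1, 101)
y = 5.0 * np.cos(1.0 * t + 0.01 * t**2) + 2.0 * np.cos(2.0 * t + 0.02 * t**2)
est = sequential_fit(y, 2, np.arange(0.5, 2.6, 0.1), np.arange(0.0, 0.05, 0.005))
```

The dominant component is picked up first, mirroring the ordering assumption on the amplitudes made below.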
Under Assumption P1 on the error random variables and the following assumptions on the parameters:
Assumption P3.
is an interior point of , for all and the frequencies and the frequency rates are such that .
Assumption P4.
The amplitudes A_k's and B_k's satisfy the following relationship:
we have the following results.
Theorem P2.
Proof.
The proof of (8) follows along the same lines as the proof of Lemma 4 of Lahiri et al. [13], and those of (9) and (10) follow from Theorem 2 of Lahiri et al. [13]. Note that Lahiri et al. [13] showed that the sequential LSEs have the same asymptotic distribution as the usual LSEs based on a famous number theory conjecture (see the reference).
∎
3 One-Component 2D Chirp Model
In this section, we provide the methodology to obtain the proposed estimators for the parameters of a one-component 2D chirp model, mathematically expressed as follows:
(13)  y(m, n) = A cos(αm + βm² + γn + δn²) + B sin(αm + βm² + γn + δn²) + X(m, n),  m = 1, …, M; n = 1, …, N.
Here, y(m, n) is the observed signal, the parameters A, B are the amplitudes, α, γ are the frequencies and β, δ are the frequency rates of the signal model. As mentioned in the introduction, X(m, n) accounts for the noise present in the signal.
We will use the following notation: θ = (A, B, α, β, γ, δ) is the parameter vector, θ⁰ = (A⁰, B⁰, α⁰, β⁰, γ⁰, δ⁰) is the true parameter vector and Θ is the parameter space.
3.1 Proposed Methodology
Let us consider the above-stated 2D chirp signal model with one component. Suppose we fix n = n₀; then (13) can be rewritten as follows:
(14)  y(m, n₀) = A(n₀) cos(αm + βm²) + B(n₀) sin(αm + βm²) + X(m, n₀),  m = 1, …, M,
which represents a 1D chirp model with A(n₀), B(n₀) as the amplitudes, α as the frequency parameter and β as the frequency rate parameter. Here, A(n₀) = A cos(γn₀ + δn₀²) + B sin(γn₀ + δn₀²) and B(n₀) = B cos(γn₀ + δn₀²) − A sin(γn₀ + δn₀²).
Thus for each fixed , we have a 1D chirp model with the same frequency and frequency rate parameters, though different amplitudes. This 1D model corresponds to a column of the 2D data matrix.
Our aim is to estimate the nonlinear parameters α and β from the columns of the data matrix, and among the most reasonable estimators for this purpose are the least squares estimators. Therefore, estimators of α and β can be obtained by minimising the function

Y_{n₀}⊤(I − P_{Z(α, β)})Y_{n₀}

for each n₀ = 1, …, N. Here, Y_{n₀} is the n₀-th column of the original data matrix, P_{Z(α, β)} is the projection matrix on the column space of the matrix Z(α, β), and the matrix Z(α, β) can be obtained by replacing n by M in (3). This process involves minimising N 2D functions corresponding to the N columns of the data matrix. Thus, for computational efficiency, we propose to minimise the following function instead:
(15)  R₁(α, β) = Σ_{n₀=1}^{N} Y_{n₀}⊤(I − P_{Z(α, β)})Y_{n₀}
with respect to α and β simultaneously, and obtain the estimates of α and β, which reduces the estimation process to solving only one 2D optimisation problem. Note that, since the errors are assumed to be i.i.d., replacing these functions by their sum is justifiable.
Similarly, we can obtain the estimates of γ and δ by minimising the following criterion function:
(16)  R₂(γ, δ) = Σ_{m₀=1}^{M} Ỹ_{m₀}⊤(I − P_{Z̃(γ, δ)})Ỹ_{m₀}
with respect to γ and δ simultaneously. Here, the data vector Ỹ_{m₀} is the m₀-th row of the data matrix, P_{Z̃(γ, δ)} is the projection matrix on the column space of the matrix Z̃(γ, δ), and the matrix Z̃(γ, δ) can be obtained by replacing n by N and α and β by γ and δ, respectively, in the matrix Z defined in (3).
Once we have the estimates of the nonlinear parameters, we estimate the linear parameters by the usual least squares regression technique as proposed by Lahiri et al. [13]. Here, Y is the observed data vector obtained by stacking the data matrix, and

(17)  W is the MN × 2 design matrix with rows (cos(αm + βm² + γn + δn²), sin(αm + βm² + γn + δn²)), evaluated at the estimated nonlinear parameters, so that the linear parameter estimates are (W⊤W)^{−1}W⊤Y.
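Putting (15), (16) and the linear step together, here is a compact sketch of the proposed one-component procedure, with a grid search standing in for a general optimiser; the parameter values, grids and function names are illustrative:

```python
import numpy as np

def col_design(M, a, b):
    m = np.arange(1, M + 1)
    ph = a * m + b * m**2
    return np.column_stack([np.cos(ph), np.sin(ph)])

def summed_criterion(Y, grid_a, grid_b):
    """Minimise sum_n ||(I - P_Z(a,b)) Y[:, n]||^2 over a 2D grid; this is
    the residual of projecting every column of Y at once."""
    best = None
    for a in grid_a:
        for b in grid_b:
            Z = col_design(Y.shape[0], a, b)
            C, *_ = np.linalg.lstsq(Z, Y, rcond=None)  # all columns at once
            ess = float(((Y - Z @ C) ** 2).sum())
            if best is None or ess < best[0]:
                best = (ess, a, b)
    return best[1], best[2]

# Simulate a noiseless one-component 2D chirp (illustrative parameters).
M = N = 50
m = np.arange(1, M + 1)[:, None]
n = np.arange(1, N + 1)[None, :]
a0, b0, g0, d0, A0, B0 = 1.0, 0.02, 1.5, 0.01, 2.0, 1.0
phase = a0 * m + b0 * m**2 + g0 * n + d0 * n**2
Y = A0 * np.cos(phase) + B0 * np.sin(phase)

# (15): estimate (alpha, beta) from the columns; (16): (gamma, delta) from rows.
a_hat, b_hat = summed_criterion(Y, np.arange(0.5, 1.55, 0.1), np.arange(0.0, 0.045, 0.01))
g_hat, d_hat = summed_criterion(Y.T, np.arange(1.0, 2.05, 0.1), np.arange(0.0, 0.045, 0.01))

# (17): linear parameters by ordinary least squares on the full 2D design.
ph_hat = a_hat * m + b_hat * m**2 + g_hat * n + d_hat * n**2
W = np.column_stack([np.cos(ph_hat).ravel(), np.sin(ph_hat).ravel()])
A_hat, B_hat = np.linalg.lstsq(W, Y.ravel(), rcond=None)[0]
```

With noiseless data and true values lying on the grid, the two 2D searches recover the nonlinear parameters exactly, after which the linear step recovers the amplitudes.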
We make the following assumptions on the error component and the model parameters before we examine the asymptotic properties of the proposed estimators:
Assumption 1.
{X(m, n)} is a double array sequence of i.i.d. random variables with mean zero, variance σ² and finite fourth order moment.
Assumption 2.
The true parameter vector is an interior point of the parametric space , and .
3.2 Consistency
The results obtained on the consistency of the proposed estimators are presented in the following theorems:
Theorem 1.
Proof.
See Appendix A.
∎
Theorem 2.
Proof.
This proof follows along the same lines as the proof of Theorem 1.
∎
3.3 Asymptotic Distribution
The following theorems provide the asymptotic distributions of the proposed estimators:
Proof.
See Appendix A.
∎
Proof.
This proof follows along the same lines as the proof of Theorem 3.
∎
The asymptotic distributions of and are observed to be the same as those of the corresponding LSEs. Thus, we get the same efficiency as that of the LSEs without going through the exhaustive process of actually computing the LSEs.
4 Multiple-Component 2D Chirp Model
In this section, we consider the multiple-component 2D chirp model with p components, with the mathematical expression of the model as given in (1). Although estimation of p is an important problem, in this paper we deal with the estimation of the other important parameters characterising the observed signal, the amplitudes, the frequencies and the frequency rates, assuming p to be known. We propose a sequential procedure to estimate these parameters. The main idea supporting the proposed sequential procedure is the same as that behind the ones proposed by Prasad et al. [9] for a sinusoidal model and by Lahiri et al. [13] and Grover et al. [17] for a chirp model: the orthogonality of the different regressor vectors. Along with computational efficiency, the sequential method provides estimators with the same rates of convergence as the LSEs.
4.1 Proposed Sequential Algorithm
The following algorithm is a simple extension of the method proposed to obtain the estimators for a one-component 2D model in Section 3.1:

Step 1: Compute the estimates of α₁ and β₁ by minimising the function R₁(α, β) of (15) with respect to α and β simultaneously.

Step 2: Compute the estimates of γ₁ and δ₁ by minimising the function R₂(γ, δ) of (16) with respect to γ and δ simultaneously.

Step 3: Once the nonlinear parameters of the first component of the model are estimated, estimate the linear parameters A₁ and B₁ by the usual least squares estimation technique. Here, Y is the observed data vector, and the matrix W can be obtained by replacing the estimates of α, β, γ and δ by those of α₁, β₁, γ₁ and δ₁, respectively, in (17).

Step 4: Eliminate the effect of the first component from the original data and construct new data as follows:
(18)  y₁(m, n) = y(m, n) − Â₁ cos(α̂₁m + β̂₁m² + γ̂₁n + δ̂₁n²) − B̂₁ sin(α̂₁m + β̂₁m² + γ̂₁n + δ̂₁n²).
Step 5: Using the new data, estimate the parameters of the second component by following the same procedure.

Step 6: Continue this process until all the parameters are estimated.
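Steps 1 to 6 can be sketched as follows for a noiseless two-component model, reusing the one-component column/row criteria with a coarse grid search; all names, grids and parameter values are our own illustrative choices:

```python
import numpy as np

def design_1d(L, a, b):
    x = np.arange(1, L + 1)
    ph = a * x + b * x**2
    return np.column_stack([np.cos(ph), np.sin(ph)])

def grid_pair(Y, grid_a, grid_b):
    """Minimise the summed column criterion over a 2D grid."""
    best = None
    for a in grid_a:
        for b in grid_b:
            Z = design_1d(Y.shape[0], a, b)
            C, *_ = np.linalg.lstsq(Z, Y, rcond=None)
            ess = float(((Y - Z @ C) ** 2).sum())
            if best is None or ess < best[0]:
                best = (ess, a, b)
    return best[1], best[2]

def fit_component(Y, ga, gb, gg, gd):
    """Steps 1-3 for one component: nonlinear parameters, then linear LS."""
    M, N = Y.shape
    a, b = grid_pair(Y, ga, gb)    # (alpha, beta) from the columns
    g, d = grid_pair(Y.T, gg, gd)  # (gamma, delta) from the rows
    m = np.arange(1, M + 1)[:, None]
    n = np.arange(1, N + 1)[None, :]
    ph = a * m + b * m**2 + g * n + d * n**2
    W = np.column_stack([np.cos(ph).ravel(), np.sin(ph).ravel()])
    A, B = np.linalg.lstsq(W, Y.ravel(), rcond=None)[0]
    fitted = A * np.cos(ph) + B * np.sin(ph)
    return (A, B, a, b, g, d), fitted

# Two noiseless components, the first one stronger.
M = N = 40
m = np.arange(1, M + 1)[:, None]
n = np.arange(1, N + 1)[None, :]
Y = 3.0 * np.cos(1.0 * m + 0.02 * m**2 + 1.5 * n + 0.01 * n**2) \
  + 1.0 * np.cos(2.0 * m + 0.01 * m**2 + 0.8 * n + 0.02 * n**2)

ga = np.arange(0.5, 2.55, 0.1); gb = np.arange(0.0, 0.045, 0.01)
gg = np.arange(0.5, 2.05, 0.1); gd = np.arange(0.0, 0.045, 0.01)

theta1, fit1 = fit_component(Y, ga, gb, gg, gd)      # Steps 1-3
theta2, _ = fit_component(Y - fit1, ga, gb, gg, gd)  # Steps 4-5
```

The subtraction in Step 4 leaves the second component essentially intact, so the second pass of the same procedure recovers its parameters.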
In the following subsections, we examine the asymptotic properties of the proposed estimators under Assumptions 1 and P4 and the following assumption on the parameters:
Assumption 3.
is an interior point of , for all and the frequencies , and the frequency rates , are such that and .
4.2 Consistency
Through the following theorems, we establish the consistency of the proposed estimators when the number of components, p, is unknown.
Proof.
See Appendix B.
∎
Proof.
This proof can be obtained along the same lines as the proof of Theorem 5.
∎
Theorem 7.
Proof.
This proof follows from the proof of Theorem 2.4.4 of Lahiri [13].
∎
Note that in practice we do not know the number of components, p. The problem of estimating p is important, though we have not considered it here. From the above theorem, it is clear that if the number of components of the fitted model is less than or equal to the true number of components, p, then the amplitude estimators converge to their true values almost surely; if it is more than p, then the amplitude estimators up to the p-th step converge to the true values and past that, they converge to zero almost surely. Thus, this result can be used as a criterion to estimate p. However, this might not work in low signal to noise ratio scenarios.
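The amplitude-based criterion described above can be sketched as a simple rule: fit components sequentially up to some maximum number and keep only those whose estimated amplitude is non-negligible. The threshold and the amplitude sequence below are purely illustrative.

```python
import math

def estimate_num_components(amp_estimates, threshold):
    """Count leading components whose estimated amplitude sqrt(A^2 + B^2)
    stays above the threshold; by the theorem above, components fitted past
    the true p have amplitude estimates converging to zero."""
    p_hat = 0
    for A, B in amp_estimates:
        if math.hypot(A, B) > threshold:
            p_hat += 1
        else:
            break
    return p_hat

# Hypothetical sequential output: two real components, then noise-level fits.
amps = [(2.1, 0.4), (1.5, 0.5), (0.03, 0.01), (0.02, 0.00)]
print(estimate_num_components(amps, threshold=0.2))  # 2
```

As noted above, a fixed threshold of this kind becomes unreliable when the noise level is comparable to the smallest true amplitude.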
4.3 Asymptotic Distribution
Theorem 8.
Proof.
See Appendix B.
∎
Theorem 9.
Proof.
This proof follows along the same lines as the proof of Theorem 8.
∎
5 Numerical Experiments and Simulated Data Analysis
5.1 Numerical Experiments
We perform simulations to examine the performance of the proposed estimators. We consider the following two cases:

Case I: When the data are generated from a one-component model (13), with the following set of parameters:
, , , , and . 
Case II: When the data are generated from a two-component model (1), with the following set of parameters:
, , , , and , , , , , and .
The noise used in the simulations is generated from a Gaussian distribution with mean 0 and variance σ². Different values of the error variance and of the sample sizes M and N are considered. We estimate the parameters using the proposed estimation technique as well as the least squares estimation technique for Case I; for Case II, the proposed sequential technique and the sequential least squares technique proposed by Lahiri [13] are employed for comparison. For each case, the procedure is replicated 1000 times, and the average values of the estimates, the average biases and the mean square errors (MSEs) are reported. Comparing the MSEs with the theoretical asymptotic variances (Avar) exhibits the efficacy of the proposed estimation method.
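The replication scheme can be sketched as follows for a deliberately simple sub-problem: estimating the linear parameters of a 1D chirp with known nonlinear parameters, so that 1000 replications run quickly. All values here are illustrative; the paper's experiments estimate all parameters of the 2D model.

```python
import numpy as np

rng = np.random.default_rng(42)
nrep, nobs, sigma = 1000, 100, 0.5
A0, B0, alpha0, beta0 = 1.5, 0.5, 2.5, 0.75  # illustrative true values

t = np.arange(1, nobs + 1)
ph = alpha0 * t + beta0 * t**2
Z = np.column_stack([np.cos(ph), np.sin(ph)])
mean_signal = Z @ np.array([A0, B0])

est = np.empty((nrep, 2))
for r in range(nrep):
    y = mean_signal + rng.normal(0.0, sigma, nobs)  # fresh noise each replicate
    est[r] = np.linalg.lstsq(Z, y, rcond=None)[0]   # LSE of (A, B)

truth = np.array([A0, B0])
avg = est.mean(axis=0)
bias = avg - truth
mse = ((est - truth) ** 2).mean(axis=0)
print(avg, bias, mse)  # MSE of each amplitude roughly 2*sigma^2/nobs = 0.005
```

The same loop structure, with the full estimation procedure inside, produces the averages, biases and MSEs reported in the tables.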
5.1.1 One-component simulation results
In Tables 1 to 4, the results obtained through the simulations for Case I are presented. It is observed that as M and N increase, the average estimates get closer to the true values, and the average biases and the MSEs decrease, thus verifying the consistency of the proposed estimates. Also, the biases and the MSEs of both types of estimates increase as the error variance increases. The MSEs of the proposed estimators are of the same order as those of the LSEs and thus are well matched with the corresponding asymptotic variances.
Parameters  

True values  1.5  0.5  2.5  0.75  1.5  0.5  2.5  0.75  
Proposed estimators  Usual LSEs  
0.10  Avg  1.5000  0.5000  2.5000  0.7500  1.5000  0.5000  2.5000  0.7500 
Bias  3.26e-05  1.23e-06  2.37e-05  1.26e-06  2.86e-05  1.07e-06  2.13e-05  1.17e-06  
MSE  9.01e-07  1.21e-09  8.34e-07  1.14e-09  8.75e-07  1.19e-09  8.02e-07  1.10e-09  
Avar  7.56e-07  1.13e-09  7.56e-07  1.13e-09  7.56e-07  1.13e-09  7.56e-07  1.13e-09  
0.50  Avg  1.4997  0.5000  2.4999  0.7500  1.4998  0.5000  2.5000  0.7500 
Bias  2.78e-04  1.05e-05  6.85e-05  3.20e-06  2.01e-04  7.90e-06  7.23e-06  8.39e-07  
MSE  2.37e-05  3.18e-08  2.17e-05  3.11e-08  2.17e-05  2.97e-08  2.07e-05  2.95e-08  
Avar  1.89e-05  2.84e-08  1.89e-05  2.84e-08  1.89e-05  2.84e-08  1.89e-05  2.84e-08  
1.00  Avg  1.5004  0.5000  2.4998  0.7500  1.5004  0.5000  2.4998  0.7500 
Bias  4.11e-04  1.77e-05  2.24e-04  8.00e-06  3.63e-04  1.60e-05  2.16e-04  7.57e-06  
MSE  9.54e-05  1.22e-07  8.92e-05  1.25e-07  8.92e-05  1.17e-07  8.48e-05  1.18e-07  
Avar  7.56e-05  1.13e-07  7.56e-05  1.13e-07  7.56e-05  1.13e-07  7.56e-05  1.13e-07
Parameters  

True values  1.5  0.5  2.5  0.75  1.5  0.5  2.5  0.75  
Proposed estimators  Usual LSEs  
0.10  Avg  1.5000  0.5000  2.5000  0.7500  1.5000  0.5000  2.5000  0.7500 
Bias  5.53e-06  1.25e-07  2.33e-06  4.65e-08  2.51e-06  7.96e-08  1.89e-06  2.57e-08  
MSE  4.88e-08  1.83e-11  5.09e-08  1.88e-11  4.14e-08  1.54e-11  4.65e-08  1.72e-11  
Avar  4.73e-08  1.77e-11  4.73e-08  1.77e-11  4.73e-08  1.77e-11  4.73e-08  1.77e-11  
0.50  Avg  1.5000  0.5000  2.5000  0.7500  1.5000  0.5000  2.5000  0.7500 
Bias  3.57e-05  4.91e-07  4.47e-05  7.83e-07  2.56e-06  2.61e-07  4.19e-05  6.84e-07  
MSE  1.35e-06  4.93e-10  1.31e-06  4.78e-10  1.16e-06  4.18e-10  1.18e-06  4.34e-10  
Avar  1.18e-06  4.43e-10  1.18e-06  4.43e-10  1.18e-06  4.43e-10  1.18e-06  4.43e-10  
1.00  Avg  1.5000  0.5000  2.5000  0.7500  1.5000  0.5000  2.5000  0.7500 
Bias  2.11e-05  2.41e-07  2.42e-05  2.35e-07  5.55e-06  1.92e-09  2.37e-05  2.45e-07  
MSE  5.36e-06  1.92e-09  5.03e-06  1.77e-09  4.38e-06  1.56e-09  4.53e-06  1.60e-09  
Avar  4.73e-06  1.77e-09  4.73e-06  1.77e-09  4.73e-06  1.77e-09  4.73e-06  1.77e-09
Parameters  

True values  1.5  0.5  2.5  0.75  1.5  0.5  2.5  0.75  
Proposed estimators  Usual LSEs  
0.10  Avg  1.5000  0.5000  2.5000  0.7500  1.5000  0.5000  2.5000  0.7500 
Bias  5.61e-06  5.15e-08  9.53e-07  7.95e-10  5.73e-06  5.26e-08  2.45e-07  1.47e-08  
MSE  9.77e-09  1.60e-12  1.02e-08  1.65e-12  9.48e-09  1.56e-12  9.10e-09  1.48e-12  
Avar  9.34e-09  1.56e-12  9.34e-09  1.56e-12  9.34e-09  1.56e-12  9.34e-09  1.56e-12  
0.50  Avg  1.5000  0.5000  2.5000  0.7500  1.5000  0.5000  2.5000  0.7500 
Bias  3.55e-05  3.80e-07  1.73e-06  1.05e-07  3.45e-05  3.63e-07  4.93e-06  1.46e-07  
MSE  2.45e-07  4.00e-11  2.39e-07  3.86e-11  2.01e-07  3.29e-11  1.79e-07  2.96e-11  
Avar  2.33e-07  3.89e-11  2.33e-07  3.89e-11  2.33e-07  3.89e-11  2.33e-07  3.89e-11  
1.00  Avg  1.5000  0.5000  2.5000  0.7500  1.5000  0.5000  2.5000  0.7500 
Bias  4.93e-06  7.23e-08  4.93e-05  6.60e-07  1.67e-05  2.28e-07  2.78e-05  3.89e-07  
MSE  1.01e-06  1.67e-10  1.06e-06  1.74e-10  8.29e-07  1.39e-10  7.77e-07  1.31e-10  
Avar  9.34e-07  1.56e-10  9.34e-07  1.56e-10  9.34e-07  1.56e-10  9.34e-07  1.56e-10
Parameters  

True values  1.5  0.5  2.5  0.75  1.5  0.5  2.5  0.75  
Proposed estimators  Usual LSEs  
0.10  Avg  1.5000  0.5000  2.5000  0.7500  1.5000  0.5000  2.5000  0.7500 
Bias  5.76e-07  5.65e-10  6.02e-07  2.59e-09  6.66e-07  1.33e-10  4.29e-07  3.89e-09  
MSE  3.23e-09  2.92e-13  3.00e-09  2.85e-13  2.47e-09  2.28e-13  2.87e-09  2.74e-13  
Avar  2.95e-09  2.77e-13  2.95e-09  2.77e-13  2.95e-09  2.77e-13  2.95e-09  2.77e-13  
0.50  Avg  1.5000  0.5000  2.5000  0.7500  1.5000  0.5000  2.5000  0.7500 
Bias  5.41e-06  5.31e-08  1.12e-05  1.10e-07  1.07e-06  1.56e-08  1.38e-05  1.34e-07  
MSE  8.11e-08  7.28e-12  7.52e-08  6.83e-12  5.41e-08  5.03e-12  5.54e-08  5.18e-12  
Avar  7.38e-08  6.92e-12  7.38e-08  6.92e-12  7.38e-08  6.92e-12  7.38e-08  6.92e-12  
1.00  Avg  1.5000  0.5000  2.5000  0.7500  1.5000  0.5000  2.5000  0.7500 
Bias  1.98e-05  1.63e-07  1.43e-05  8.73e-08  8.54e-06  6.29e-08  1.12e-05  5.96e-08  
MSE  2.83e-07  2.56e-11  2.96e-07  2.75e-11  1.91e-07  1.77e-11  2.07e-07  1.97e-11  
Avar  2.95e-07  2.77e-11  2.95e-07  2.77e-11  2.95e-07  2.77e-11  2.95e-07  2.77e-11
5.1.2 Two-component simulation results
We present the simulation results for Case II in Tables 5 to 8. From these tables, it is evident that the average estimates are quite close to the true values. The results also verify the consistency of the proposed sequential estimators. It is also observed that the MSEs of the parameter estimates of the first component are mostly of the same order as the corresponding theoretical asymptotic variances, while those of the second component have exactly the same order as the corresponding asymptotic variances.
Proposed sequential estimates  Sequential LSEs  
0.10  First Component  Parameters  
True values  2.1  0.1  1.25  0.25  2.1  0.1  1.25  0.25  
Average  2.1016  0.0998  1.2614  0.2500  2.1031  0.0998  1.2565  0.2500  
Bias  1.63e-03  1.81e-04  1.14e-02  4.71e-05  3.05e-03  1.76e-04  6.46e-03  3.94e-05  
MSE  2.92e-06  3.30e-08  1.31e-04  2.60e-09  9.59e-06  3.12e-08  4.20e-05  1.91e-09  
AVar  2.40e-07  3.60e-10  2.40e-07  3.60e-10  2.40e-07  3.60e-10  2.40e-07  3.60e-10  
Second Component  Parameters  
True values  1.5  0.5  1.75  0.75  1.5  0.5  1.75  0.75  
Average  1.5018  0.5000  1.7520  0.7499  1.5017  0.5000  1.7510  0.7500  
Bias  1.83e-03  1.92e-05  1.98e-03  6.41e-05  1.68e-03  2.19e-05  1.03e-03  2.95e-05  
MSE  4.19e-06  1.54e-09  4.96e-06  5.53e-09  3.61e-06  1.59e-09  1.97e-06  2.12e-09  
AVar  7.56e-07  1.13e-09  7.56e-07  1.13e-09  7.56e-07  1.13e-09  7.56e-07  1.13e-09  
0.50  First Component  Parameters  
True values  2.1  0.1  1.25  0.25  2.1  0.1  1.25  0.25  
Average  2.1017  0.0998  1.2613  0.2500  2.1031  0.0998  1.2563  0.2500  
Bias  1.71e-03  1.83e-04  1.13e-02  4.38e-05  3.13e-03  1.78e-04  6.32e-03  4.34e-05  
MSE  8.92e-06  4.14e-08  1.35e-04  1.18e-08  1.60e-05  3.98e-08  4.61e-05  1.07e-08  
AVar  5.99e-06  8.99e-09  5.99e-06  8.99e-09  5.99e-06  8.99e-09  5.99e-06  8.99e-09  
Second Component  Parameters  
True values  1.5  0.5  1.75  0.75  1.5  0.5  1.75  0.75  
Average  1.5017  0.5000  1.7522  0.7499  1.5015  0.5000  1.7512  0.7500  
Bias  1.69e-03  1.05e-05  2.16e-03  6.94e-05  1.51e-03  1.25e-05  1.17e-03  3.35e-05  
MSE  2.54e-05  3.37e-08  3.15e-05  4.22e-08  2.35e-05  3.18e-08  2.40e-05  3.31e-08  
AVar  1.89e-05  2.84e-08  1.89e-05  2.84e-08  1.89e-05  2.84e-08  1.89e-05  2.84e-08  
1.00  First Component  Parameters  
True values  2.1  0.1  1.25  0.25  2.1  0.1  1.25  0.25  
Average  2.1015  0.0998  1.2616  0.2499  2.1029  0.0998  1.2567  0.2500  
Bias  1.54e-03  1.79e-04  1.16e-02  5.63e-05  2.93e-03  1.72e-04  6.67e-03  3.07e-05  
MSE  2.98e-05  6.77e-08  1.65e-04  4.44e-08  3.72e-05  6.70e-08  6.97e-05  3.66e-08  
AVar  2.40e-05  3.60e-08  2.40e-05  3.60e-08  2.40e-05  3.60e-08  2.40e-05  3.60e-08  
Second Component  Parameters  
True values  1.5  0.5  1.75  0.75  1.5  0.5  1.75  0.75  
Average  1.5018  0.5000  1.7516  0.7499  1.5018  0.5000  1.7507  0.7500  
Bias  1.79e-03  1.57e-05  1.64e-03  5.47e-05  1.75e-03  2.25e-05  7.18e-04  2.07e-05  
MSE  9.84e-05  1.41e-07  1.20e-04  1.66e-07  9.40e-05  1.35e-07  1.01e-04  1.40e-07  
AVar  7.56e-05  1.13e-07  7.56e-05  1.13e-07  7.56e-05  1.13e-07  7.56e-05  1.13e-07
Proposed sequential estimates  Sequential LSEs  
0.10  First Component  Parameters  
True values  2.1  0.1  1.25  0.25  2.1  0.1  1.25  0.25  
Average  2.1006  0.1000  1.2567  0.2499  2.1011  0.1000  1.2572  0.2499  
Bias  6.07e-04  8.61e-06  6.70e-03  1.11e-04  1.07e-03  1.11e-05  7.20e-03  1.22e-04  
MSE  3.85e-07  8.02e-11  4.49e-05  1.23e-08  1.15e-06  1.29e-10  5.18e-05  1.49e-08  
AVar  1.50e-08  5.62e-12  1.50e-08  5.62e-12  1.50e-08  5.62e-12  1.50e-08  5.62e-12  
Second Component  Parameters  
True values  1.5  0.5  1.75  0.75  1.5  0.5  1.75  0.75  
Average  1.5006  0.5000  1.7506  0.7500  1.5008  0.5000  1.7507  0.7500  
Bias  6.09e-04  1.05e-05  5.56e-04  9.98e-06  7.51e-04  1.36e-05  6.83e-04  1.20e-05  
MSE  4.23e-07  1.30e-10  3.60e-07  1.18e-10  6.13e-07  2.03e-10  5.17e-07  1.61e-10  
AVar  4.73e-08  1.77e-11  4.73e-08  1.77e-11  4.73e-08  1.77e-11  4.73e-08  1.77e-11  
0.50  First Component  Parameters 