
# Efficient Algorithms for Positive Semi-Definite Total Least Squares Problems, Minimum Rank Problem and Correlation Matrix Computation

Negin Bagherpour Faculty of Mathematical Sciences, Sharif University of Technology, Tehran, Iran, (nbagherpour@mehr.sharif.ir).    Nezam Mahdavi-Amiri Faculty of Mathematical Sciences, Sharif University of Technology, Tehran, Iran, (nezamm@sharif.ir).
###### Abstract

We have recently presented a method to solve an overdetermined linear system of equations with multiple right hand side vectors, where the unknown matrix is to be symmetric and positive definite. The coefficient and the right hand side matrices are respectively named data and target matrices. A more complicated problem is encountered when the unknown matrix is to be positive semi-definite. The problem arises in estimating the compliance matrix to model deformable structures and approximating correlation and covariance matrices in financial modeling. Several methods have been proposed for solving such problems assuming, unrealistically, that the data matrix is error free. Here, considering error in both the measured data and target matrices, we propose a new approach to solve a positive semi-definite constrained total least squares problem. We first consider solving the problem when the rank of the unknown matrix is known, by defining a new error formulation for the positive semi-definite total least squares problem. Minimization of our newly defined error consists of an optimization problem on the Stiefel manifold. To solve the optimization problem, in each iteration a linear operator subproblem arises, for which we propose three different iterative methods. We prove quadratic convergence of our proposed approach. We then describe how to generalize our proposed method to solve the general positive semi-definite total least squares problem. We further apply the proposed approach to solve the minimum rank problem and the problem of computing a correlation matrix. Comparative numerical results show the efficiency of our proposed algorithms. In solving positive semi-definite total least squares problems, we find that in most cases the linear operator equation is solved faster and more accurately using the GMRES method.
Also, comparison of the results obtained by our algorithm with the ones due to two other methods, the interior point method and a MATLAB routine for solving a quadratic programming problem with semi-definite constraint based on a path following algorithm, confirms the efficiency of our approach. Numerical test results also show that our approach for computing a correlation matrix leads to smaller standard deviations of error in the target matrix. Finally, the Dolan-Moré performance profiles are shown to summarize our comparative study.

Key words. Total least squares, positive semi-definite constraints, deformable structures, correlation matrix

AMS subject classifications. 65F05, 65F20, 49M05

## 1 Introduction

In several physical problems, such as estimation of the mass inertia matrix in the design of controllers for solid structures and robots, an overdetermined linear system of equations with multiple right hand side vectors arises with the constraint that the unknown matrix be symmetric and positive definite; see, e.g., [4, 7, 23]. A method for solving such a problem has been proposed in [24]. There are also physical contexts, such as modeling a deformable structure [5, 18] and computing the correlation matrix in finance or insurance/reinsurance industries [9, 12, 22, 27], where a symmetric positive semi-definite solution of an over determined linear system of equations needs to be computed or equivalently the problem

 DX ≃ T,    (1.1)

needs to be solved, where D, T ∈ R^{m×n}, with m ≥ n, are given and X ∈ R^{n×n}, a symmetric positive semi-definite matrix, is to be computed as a solution. In some special applications, the data matrix has a simple structure, which may be taken into consideration for efficiently organized computations. Computing the correlation matrix in finance is such an example, where the data matrix is the identity matrix; see, e.g., [27].

Unlike the positive definite total least squares problem, here the unknown matrix is singular and thus our previously defined error formulation in [24] is no more applicable. We need to formulate the error in the measured data and target matrices as a function of the unknown matrix but not its inverse.

A number of least squares formulations have been proposed for the physical problems, which may be classified as ordinary and total least squares problems. Unlike the ordinary formulation, in a total least squares formulation both data and target matrices are assumed to contain error. Also, single or multiple right hand sides may arise. In [24], ordinary and total least squares formulations with single or multiple right hand sides have been considered. For detailed analysis of total least squares, see [2, 8, 17].

Here, we consider a specific case of the total least squares problem with multiple right hand side vectors. Our goal is to compute a symmetric positive semi-definite solution of the overdetermined system of equations DX ≃ T, where both matrices D and T may contain error. Several approaches have been proposed for this problem, commonly considering the ordinary least squares formulation and minimizing the error ‖DX − T‖_F over all symmetric positive semi-definite matrices X, where ‖·‖_F is the Frobenius norm. Larson [6] discussed a method for computing a symmetric solution to an overdetermined linear system of equations based on solving the corresponding normal system of equations. Krislock [18] proposed an interior point method for solving a variety of least squares problems with positive definiteness constraint. Woodgate [15] described a new algorithm for solving a similar problem, in which a symmetric positive semi-definite matrix is computed to minimize a Frobenius norm residual involving two known matrices. In [14], Toh introduced a path following algorithm for solving a positive semi-definite quadratic optimization problem. Later in 2009, he posted a MATLAB package for solving such a problem; see [30]. Hu [3] gave a quadratic programming approach to solve a least squares problem with a symmetric positive definite unknown matrix. In his method, the upper and lower bounds for the entries of the target matrix can be given as extra constraints. In real measurements, however, both the data and target matrices may contain error. Thus, to be practical, a total least squares formulation seems to be appropriate. Here, we define a new error function to consider error in both data and target matrices and propose an iterative algorithm to minimize the defined error.
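As a concrete baseline for the ordinary formulation just described, the following sketch (ours, not one of the cited methods) solves the unconstrained least squares problem, symmetrizes it, and projects onto the positive semi-definite cone by zeroing negative eigenvalues; with error-free data this recovers the true matrix, but with noisy data it is only a heuristic approximation, not the constrained minimizer.

```python
import numpy as np

# Heuristic baseline for min ||DX - T||_F over symmetric PSD X (a sketch,
# not one of the methods cited in the text): unconstrained least squares,
# then symmetrization, then eigenvalue clipping.
rng = np.random.default_rng(1)
m, n = 20, 5
W = rng.standard_normal((n, n))
X_true = W @ W.T                            # symmetric PSD "ground truth"
D = rng.standard_normal((m, n))
T = D @ X_true                              # consistent, error-free target

X0 = np.linalg.lstsq(D, T, rcond=None)[0]   # unconstrained LS solution
Xs = (X0 + X0.T) / 2                        # symmetrize
lam, U = np.linalg.eigh(Xs)
X_psd = (U * np.clip(lam, 0, None)) @ U.T   # project onto the PSD cone
```

Because the target here is error free and D has full column rank, the projection step changes nothing and X_psd coincides with X_true.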

If the goal is to compute the correlation matrix, the mathematical problem is a little different. Computing the correlation matrix is very important in financial modeling. It is applicable, for example, in obtaining a quadratic model for an economic system and even in reverse engineering for extreme scenario stress testing [25]. In this case, the data matrix is the identity and a large number of linear constraints are to be satisfied. Sun [12] presented an algorithm for computing the correlation matrix. Rebonato [9] and Werner [22] also discussed solving the same problem. We will see later that the minimum rank problem can also be solved by applying our proposed algorithm. This problem appears in the literature in diverse areas including system identification and control, Euclidean embedding, and collaborative filtering; see [10, 16, 31]. In a minimum rank problem, the goal is to find a positive semi-definite solution with the minimum possible rank to an overdetermined linear system of equations.

The remainder of our work is organized as follows. In Section 2, we define a new error function for solving a positive semi-definite total least squares problem with a fixed rank. A method for solving the resulting optimization problem is presented in Section 3. Also, a discussion on solving the positive semi-definite total least squares problem (with arbitrary rank) is given in Section 3. In Section 4, we introduce two slightly different problems and discuss how to solve them based on the proposed method in Section 3. These two problems are: the minimum rank problem and computing the correlation matrix. Comparative computational results are given in Section 5. Section 6 gives our conclusion.

## 2 Problem Formulation

Available methods for solving a positive semi-definite least squares problem consider an ordinary least squares formulation; see, e.g., [14, 18]. A practically useful total error formulation was introduced in [24] for a positive definite total least squares problem. Based on this formulation, the solution of the optimization problem

 min_{X≻0} tr[(DX − T)^T (D − T X^{-1})]    (2.1)

is a solution of a corresponding positive definite total least squares problem, where X is symmetric and by X ≻ 0 we mean X is positive definite. The error formulation in [24] not being suitable here, we first motivate and present a new error formulation for the positive semi-definite total least squares case.

In (2.1), the entries of DX − T and D − T X^{-1} represent the errors in T and D, respectively. Here, we need to represent the error in D independent of X^{-1}. Before discussing how to solve the positive semi-definite total least squares problem, we consider the newly noted problem, the positive semi-definite total least squares problem with a given rank, r, of the unknown matrix (R-PSDTLS). In Section 3, we outline an algorithm for solving R-PSDTLS and discuss how to solve the positive semi-definite total least squares problem applying the proposed algorithm.

The error in D is supposed to be the difference between the real value of D and the predicted value for D obtained by X. To compute the predicted value for D, we use the general least squares solution of the system DX ≃ T. Considering the block forms D^T = (d_1 ⋯ d_m) and T^T = (t_1 ⋯ t_m), where d_i, t_i ∈ R^n, for i = 1, …, m, we have X d_i ≃ t_i, for i = 1, …, m. The general solution to such a linear system has the form

 d_i = X† t_i + n_i,    (2.2)

where X† is the pseudo-inverse of X and n_i is an arbitrary vector in the null space of X [21]. A straightforward choice for n_i is n_i = 0, which results in d_i = X† t_i and D_p = T X†. We later consider the suitable choices for n_i which minimize the error. To compute X† in (2.2), the spectral decomposition of X can be applied.

Result. (Spectral decomposition) [21] All eigenvalues of a symmetric matrix X ∈ R^{n×n} are real and there are n mutually orthonormal vectors representing the corresponding eigenvectors. Thus, there exist an orthogonal matrix U with columns being the eigenvectors of X and a diagonal matrix Λ containing the eigenvalues such that X = U Λ U^T. Suppose that Λ̄ and Ū are respectively formed by ordering the diagonal elements of Λ non-increasingly and rearranging the corresponding columns of U. The ordered spectral decomposition of X has the form X = Ū Λ̄ Ū^T. Also, if X is positive semi-definite with rank(X) = r, then r of its eigenvalues are positive and the rest are zero, and thus we can set Λ̄ = diag(S^2, 0), where S ∈ R^{r×r} is diagonal and nonsingular. Hence, the ordered spectral decomposition of a symmetric positive semi-definite matrix is X = U_r S^2 U_r^T, with U_r ∈ R^{n×r}. Considering the block form Ū = (U_r  U_{n-r}), we get X = U_r S^2 U_r^T. Moreover, the columns of the matrices U_r and U_{n-r} respectively form a basis for the range and null space of X.
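The ordered spectral decomposition and the resulting pseudo-inverse X† = U_r S^{-2} U_r^T can be sketched in a few lines; the function name and test data below are our own illustration.

```python
import numpy as np

# Sketch: ordered spectral decomposition of a symmetric PSD matrix X of
# rank r, X = U_r S^2 U_r^T, and its pseudo-inverse X† = U_r S^{-2} U_r^T.
def ordered_psd_decomposition(X, r):
    lam, U = np.linalg.eigh(X)        # eigh returns ascending eigenvalues
    idx = np.argsort(lam)[::-1]       # reorder non-increasingly
    lam, U = lam[idx], U[:, idx]
    Ur = U[:, :r]                     # basis of range(X)
    S = np.diag(lam[:r] ** 0.5)       # S^2 holds the r positive eigenvalues
    return Ur, S

# Build a random rank-3 PSD matrix and recover the pieces.
rng = np.random.default_rng(0)
n, r = 6, 3
W = rng.standard_normal((n, r))
X = W @ W.T                           # symmetric PSD, rank r
Ur, S = ordered_psd_decomposition(X, r)
X_dag = Ur @ np.diag(1.0 / np.diag(S) ** 2) @ Ur.T   # pseudo-inverse of X
```

The reconstruction U_r S^2 U_r^T reproduces X, and X X† X = X, the defining property used in the text.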

Now, making use of the spectral decomposition of X in (2.2), we have

 d_i = X† t_i + U_{n-r} z_i,

where z_i ∈ R^{n-r}, for i = 1, …, m, are arbitrary vectors, and

 D^T = X† T^T + U_{n-r} (z_1 ⋯ z_m).

Thus, the predicted value for D is

 D_p = T X† + Z U_{n-r}^T,    (2.3)

with Z ∈ R^{m×(n-r)} arbitrary, and the error in D is equal to ΔD = D − D_p. A reasonable choice for Z in this formulation would be the one minimizing the norm of ΔD, which is the solution of the optimization problem

 min_Z ‖F − Z U_{n-r}^T‖_F^2,    (2.4)

where F = D − T X† and ‖·‖_F is the Frobenius norm. Solving (2.4) results in [21]

 Z* = F U_{n-r} = (D − T X†) U_{n-r}.    (2.5)

Substituting (2.5) in (2.3), we get

 ΔD = D − (T X† + Z* U_{n-r}^T) = (D − T X†) − (D − T X†) U_{n-r} U_{n-r}^T    (2.6)
    = (D − T X†)(I − U_{n-r} U_{n-r}^T).

Using U_r U_r^T + U_{n-r} U_{n-r}^T = I along with (2.6), we get

 ΔD = (D − T X†) U_r U_r^T.

Based on the above discussion, ΔD = (D − T X†) U_r U_r^T and ΔT = DX − T represent the errors in D and T, respectively. Thus, to solve a rank r positive semi-definite total least squares problem, it is appropriate to minimize the error

 E = Σ_{i=1}^m Σ_{j=1}^n ΔD_{ij} ΔT_{ij} = tr(ΔT^T ΔD),

with tr(·) standing for the trace of a matrix. Consequently, the optimization problem

 min tr(ΔT^T ΔD)  s.t.  X ⪰ 0,  rank(X) = r    (2.7)

needs to be solved, where X is symmetric and by X ⪰ 0 we mean X is positive semi-definite. We can rewrite the optimization problem using the spectral decomposition of X, substituting X and X† by U_r S^2 U_r^T and U_r S^{-2} U_r^T, respectively. Considering well-known properties of the trace operator [21] and the above formulations for ΔD and ΔT, we get

 E = tr(ΔT^T ΔD) = tr[(DX − T)^T (D − T X†) U_r U_r^T]    (2.8)
   = tr[(U_r S^2 U_r^T D^T − T^T)(D − T U_r S^{-2} U_r^T) U_r U_r^T]
   = tr[(S^2 U_r^T D^T − U_r^T T^T)(D U_r − T U_r S^{-2})]
   = tr(U_r^T D^T D U_r S^2 − U_r^T D^T T U_r − U_r^T T^T D U_r + U_r^T T^T T U_r S^{-2}).

Letting A = D^T D, B = T^T T, C = D^T T + T^T D, and Y = U_r, problem (2.7) is then equivalent to

 min_{Y,S} tr(Y^T A Y S^2 − Y^T C Y + Y^T B Y S^{-2}),    (2.9)

where Y satisfies Y^T Y = I and S is a nonsingular diagonal matrix.
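As a sanity check of the derivation above, the following sketch (with random data of our own) verifies numerically that the reduced objective in (2.9) agrees with E = tr(ΔT^T ΔD), using the definitions ΔT = DX − T, ΔD = (D − T X†) Y Y^T, A = D^T D, B = T^T T and C = D^T T + T^T D.

```python
import numpy as np

# Numerical check that tr(Y^T A Y S^2 - Y^T C Y + Y^T B Y S^-2)
# equals tr(ΔT^T ΔD) when X = Y S^2 Y^T and X† = Y S^-2 Y^T.
rng = np.random.default_rng(2)
m, n, r = 8, 5, 2
D = rng.standard_normal((m, n))
T = rng.standard_normal((m, n))
Y, _ = np.linalg.qr(rng.standard_normal((n, r)))   # Y^T Y = I
s = rng.uniform(0.5, 2.0, r)
S2, S2inv = np.diag(s**2), np.diag(s**-2)
X = Y @ S2 @ Y.T
X_dag = Y @ S2inv @ Y.T                            # pseudo-inverse of X

dT = D @ X - T                                     # error in T
dD = (D - T @ X_dag) @ Y @ Y.T                     # error in D
A, B, C = D.T @ D, T.T @ T, D.T @ T + T.T @ D
E1 = np.trace(dT.T @ dD)
E2 = np.trace(Y.T @ A @ Y @ S2 - Y.T @ C @ Y + Y.T @ B @ Y @ S2inv)
```

The two traces agree to machine precision for any Y with orthonormal columns and any nonsingular diagonal S.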

The Lagrangian function corresponding to the constrained optimization problem

 min f(X) s.t. g(X)=0

is L(X, λ) = f(X) − λ^T g(X), where the Lagrange multiplier vector λ corresponds to the constraint vector g(X). Necessary conditions for a solution, known as Karush–Kuhn–Tucker (KKT) conditions, are ∇_X L = 0 as well as ∇_λ L = 0, which gives g(X) = 0; for KKT conditions, see [19]. Thus, L(Y, S, Ω) = tr(Y^T A Y S^2 − Y^T C Y + Y^T B Y S^{-2}) + tr(Ω(Y^T Y − I)) is the Lagrangian function for the optimization problem (2.9).

###### Lemma 2.1

An appropriate characteristic of the error formulation proposed by

 E = tr(Y^T A Y S^2 − Y^T C Y + Y^T B Y S^{-2}),    (2.10)

is that its value is nonnegative and it is equal to zero if and only if DX = T X† X.

Proof. Since E can be written as a sum of squares, it is nonnegative: from (2.10) we have

 E = tr[(DYS − TYS^{-1})^T (DYS − TYS^{-1})],

and E = 0 holds if and only if

 DYS = TYS^{-1}.

Multiplying both sides from the right by S Y^T, we get

 DYS^2 Y^T = TYY^T,

or equivalently

 DX = T X† X.    (2.11)

Conversely, if (2.11) is satisfied, then multiplying both sides from the right by Y S^{-1} gives DYS = TYS^{-1}; hence, E = 0.

## 3 Mathematical Solution

Here, we discuss how to solve (LABEL:12). We also study the computational complexity and the convergence properties of our proposed algorithm.

### 3.1 Solving Rank r Positive Semi-definite Total Least Squares Problem

We are to propose an algorithm for solving (2.9). More precisely, a nonsingular diagonal matrix S ∈ R^{r×r} and a matrix Y ∈ V_r(R^n) need to be computed to minimize

 E = f(Y, S) = tr(Y^T A Y S^2 − Y^T C Y + Y^T B Y S^{-2}),

where V_r(R^n) is the Stiefel manifold [1]:

 V_r(R^n) = {A ∈ R^{n×r} : A^T A = I}.

In the following lemma, we show that the optimization problem (LABEL:12) is strictly convex under a weak assumption on the data and target matrices. We also make use of the well-known properties of convexity to propose our algorithm.

###### Lemma 3.1

The function f(Y, S) is always convex and it is strictly convex on the set of pairs (Y, S) for which s_i D − (1/s_i) T, for i = 1, …, r, has full column rank.

Proof. The key point of the proof is to reformulate as

 f(Y, S) = tr(Y^T A Y S^2 − Y^T C Y + Y^T B Y S^{-2})
         = Σ_{i=1}^r [ s_i^2 y_i^T A y_i − y_i^T C y_i + (1/s_i^2) y_i^T B y_i ]
         = Σ_{i=1}^r y_i^T (s_i D − (1/s_i) T)^T (s_i D − (1/s_i) T) y_i.

Thus, f is always convex and it is strictly convex if and only if (s_i D − (1/s_i) T)^T (s_i D − (1/s_i) T), for i = 1, …, r, is positive definite, which holds if and only if s_i D − (1/s_i) T, for i = 1, …, r, has full column rank.

Note. Since the function f is strictly convex and the feasible set is convex, a point satisfying the Karush–Kuhn–Tucker (KKT) optimality conditions is the unique solution of problem (2.9); for KKT conditions, see [19].

Next, in the following theorem, we derive the KKT optimality conditions for problem (2.9).

###### Theorem 3.2

If (Y, S) is the solution of problem (2.9), then it satisfies

 s_i = ( y_i^T B y_i / y_i^T A y_i )^{1/4},    (3.1)

where y_i is the ith column of Y.

Proof. The KKT necessary conditions for (2.9) are obtained by setting ∇L = 0. Thus, if (Y, S) forms a solution for (2.9) with S = diag(s_1, …, s_r), then we must have ∂L/∂s_i = 0, for i = 1, …, r. To simplify the computation of ∂L/∂s_i, we can reformulate L using the definition of the trace operator. Let G = Y^T A Y S^2 and H = Y^T B Y S^{-2}. We have

 tr(G) = Σ_{i=1}^r G_{ii} = Σ_{i=1}^r s_i^2 y_i^T A y_i

and

 tr(H) = Σ_{i=1}^r H_{ii} = Σ_{i=1}^r (1/s_i^2) y_i^T B y_i.

So, the Lagrangian L(Y, S, Ω) is equal to

 L(Y, S, Ω) = Σ_{i=1}^r s_i^2 y_i^T A y_i + Σ_{i=1}^r (1/s_i^2) y_i^T B y_i − tr(Y^T C Y − Ω(Y^T Y − I)).

Now, from the Lagrangian above we have

 ∂L/∂s_i = 2 s_i y_i^T A y_i − (2/s_i^3) y_i^T B y_i = 0,

and s_i can thus be computed by (3.1).

Considering the above discussion, in each iteration of our proposed algorithm we need to
(1) compute one term of the sequence converging to the minimizer Y of the
error E, and
(2) compute the diagonal elements of S from (3.1).

Edelman [1] introduced two methods for solving optimization problems on Stiefel manifolds: the Newton method and the conjugate gradient method on the Stiefel manifold. Here, we adaptively use the Newton approach to develop an algorithm for solving (2.9). In each iteration of our proposed algorithm, a Newton step is computed from the Newton method on the Stiefel manifold and then the diagonal elements of S, the s_i, are updated by (3.1). We show in Section 3.2 that our proposed algorithm converges to the unique solution of (2.9) at least quadratically. Also, we discuss its computational complexity in Section 3.2.
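The closed-form update (3.1) for the diagonal of S can be sketched as follows; the helper name and random test matrices are ours. For fixed Y, each s_i independently minimizes s^2 (y_i^T A y_i) + s^{-2} (y_i^T B y_i).

```python
import numpy as np

# Sketch of the update (3.1): s_i = (y_i^T B y_i / y_i^T A y_i)^(1/4).
def update_S(Y, A, B):
    a = np.einsum('ij,jk,ki->i', Y.T, A, Y)   # y_i^T A y_i, per column
    b = np.einsum('ij,jk,ki->i', Y.T, B, Y)   # y_i^T B y_i, per column
    return (b / a) ** 0.25

rng = np.random.default_rng(3)
n, r = 6, 3
G = rng.standard_normal((n, n)); A = G @ G.T + n * np.eye(n)   # PD
H = rng.standard_normal((n, n)); B = H @ H.T + n * np.eye(n)   # PD
Y, _ = np.linalg.qr(rng.standard_normal((n, r)))               # Y^T Y = I
s = update_S(Y, A, B)
```

Perturbing any s_i in either direction increases the column-wise objective, confirming that (3.1) is the stationary point.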

We are now ready to outline the steps of our proposed algorithm.

Algorithm 1. Solving rank positive semi-definite total least squares problem using the Newton method on Stiefel manifold (R-PSDTLS).
- ε_r and ε_a are upper bounds for the relative and absolute error, respectively, taken to be close to the machine (or user's) unit roundoff error and machine (or user's) zero.
(1) Let A = D^T D, B = T^T T and C = D^T T + T^T D.
(2) Choose Y such that Y^T Y = I.
Repeat
(3.1) Let S = diag(s_1, …, s_r), with s_i computed from (3.1).
(3.2) Compute the matrix F_Y such that (F_Y)_{ij} = ∂f/∂Y_{ij} and let

 G = F_Y − Y F_Y^T Y.

(3.3) To compute the Newton direction Δ, solve the linear system of equations

 F_{YY}(Δ) − Y skew(F_Y^T Δ) − skew(Δ F_Y^T) Y − (1/2)(I − Y Y^T) Δ Y^T F_Y = −G,

where

 F_{YY}(Δ) = A Δ S^2 − Y S^2 Δ^T A Y − C Δ + Y Δ^T C Y + B Δ S^{-2} − Y S^{-2} Δ^T B Y

and skew(M) = (M − M^T)/2.
(3.4) Move from Y in direction Δ to Y(1) along the geodesic Y(t) = Y M(t) + Q N(t), where

 (I − Y Y^T) Δ = QR

is the compact QR factorization of (I − Y Y^T) Δ, and M(t) and N(t) are given by:

 [M(t); N(t)] = exp( t [K, −R^T; R, 0] ) [I; 0],

with K = Y^T Δ.
(3.5) Let E_old = E and compute E = f(Y, S).
until |E − E_old| ≤ ε_r |E_old| + ε_a.
(4) Let X = Y S^2 Y^T and E_r = E.

Note. The linear equation

 F_{YY}(Δ) − Y skew(F_Y^T Δ) − skew(Δ F_Y^T) Y − (1/2)(I − Y Y^T) Δ Y^T F_Y = −G    (3.3)

may be solved by various methods including conjugate gradient and GMRES [20]. Another possible method is to convert the linear operator appearing on the left side of (3.3) to an nr × nr linear system of equations. In Section 5, we present the numerical results obtained by using these three methods and compare the respective obtained accuracies and computing times.
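The operator form can be illustrated with SciPy's matrix-free GMRES. The sketch below (our simplification) keeps only the terms A Δ S^2 − C Δ + B Δ S^{-2} of F_{YY}, so it shows the mechanics of wrapping an operator equation for GMRES rather than the full Newton system of (3.3).

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Solve L(Δ) = RHS matrix-free, where L(Δ) = A Δ S^2 - C Δ + B Δ S^-2.
# GMRES only needs matrix-vector products, so the nr x nr matrix is
# never formed explicitly.
rng = np.random.default_rng(4)
n, r = 5, 2
G = rng.standard_normal((n, n)); A = G @ G.T + n * np.eye(n)
H = rng.standard_normal((n, n)); B = H @ H.T + n * np.eye(n)
C0 = rng.standard_normal((n, n)); C = 0.1 * (C0 + C0.T)
s = rng.uniform(0.5, 2.0, r)
S2, S2inv = np.diag(s**2), np.diag(s**-2)

def matvec(x):
    Delta = x.reshape(n, r, order='F')      # un-vectorize (column-major)
    out = A @ Delta @ S2 - C @ Delta + B @ Delta @ S2inv
    return out.ravel(order='F')

L = LinearOperator((n * r, n * r), matvec=matvec)
rhs = rng.standard_normal(n * r)
sol, info = gmres(L, rhs)
```

With column-major vectorization, the same operator equals the explicit matrix kron(S^2, A) − kron(I, C) + kron(S^{-2}, B), which is how the CG-L variant in the text would materialize it.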

### 3.2 Solving Semi-definite Total Least Squares Problem

In Section 3.1, we outlined Algorithm 1 to solve the rank r positive semi-definite total least squares problem. Here, we discuss how to solve the general positive semi-definite total least squares problem.

A positive semi-definite solution for the overdetermined linear system of equations DX ≃ T, whose rank is not known, needs to be computed. This problem arises, for example, in estimation of the compliance matrix of a deformable structure [18]. To solve this problem, we can apply R-PSDTLS for the possible values r = 1, …, n, compute the corresponding solutions X_r, and identify the one minimizing the error E. We will refer to this approach as PSDTLS. In Section 5.1, we report some numerical results to compare PSDTLS with two existing methods. Although our proposed method (PSDTLS) computes the minimizer of E for each value of r and then finds the optimal solution among the X_r, for r = 1, …, n, it takes less time to solve the problem than two other methods proposed in the literature.
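The rank sweep behind PSDTLS can be sketched as a driver loop. In the sketch below the inner rank-r solver is a stand-in of ours (truncated eigendecomposition of the symmetrized least squares solution), not the paper's R-PSDTLS; only the sweep-and-select logic is the point.

```python
import numpy as np

# Total error E = tr(ΔT^T ΔD) for a candidate X with range basis Ur.
def total_error(D, T, X, Ur):
    dT = D @ X - T
    dD = (D - T @ np.linalg.pinv(X)) @ Ur @ Ur.T
    return np.trace(dT.T @ dD)

# Stand-in rank-r solver: rank-r truncation of the symmetrized LS solution.
def candidate(D, T, r):
    X0 = np.linalg.lstsq(D, T, rcond=None)[0]
    Xs = (X0 + X0.T) / 2
    lam, U = np.linalg.eigh(Xs)
    idx = np.argsort(lam)[::-1][:r]
    lam_r = np.clip(lam[idx], 0, None)
    return (U[:, idx] * lam_r) @ U[:, idx].T, U[:, idx]

rng = np.random.default_rng(5)
m, n = 12, 5
W = rng.standard_normal((n, 2)); X_true = W @ W.T   # true rank is 2
D = rng.standard_normal((m, n)); T = D @ X_true

# PSDTLS-style sweep: try every rank, keep the smallest total error.
results = []
for r in range(1, n + 1):
    X, Ur = candidate(D, T, r)
    results.append((total_error(D, T, X, Ur), r, X))
E_best, r_best, X_best = min(results, key=lambda t: t[0])
```

With error-free data of rank 2, the sweep finds a candidate with essentially zero total error.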

### 3.3 Convergence Properties and Computing Cost

Here, we discuss convergence properties of R-PSDTLS. We cite a theorem to be used to establish the local quadratic convergence of R-PSDTLS to the unique solution of (LABEL:12). We also show that the computational complexity of every iteration of our proposed algorithm is

 N_r = 2mn^2 + n^3 r^2 + n^2 (r^2 + r) + 2n^2 r.

Moreover, we provide an upper bound for the computational complexity of our proposed approach for solving the positive semi-definite total least squares problem, PSDTLS.

###### Theorem 3.3

([11]) Newton's method [1] applied to a strictly convex function F(Y) on the Stiefel manifold

 V_r(R^n) = {Y ∈ R^{n×r} : Y^T Y = I}

locally converges to the unique solution of

 min F(Y)  s.t.  Y^T Y = I

at least quadratically.

Proof. See [11].

###### Lemma 3.4

Algorithm 1 converges locally to the unique solution of problem (LABEL:12) at least quadratically.

Proof. In Algorithm 1, we have two main computations: applying Newton's approach on the Stiefel manifold to update Y and updating the scalars s_i using (3.1). The rate of convergence is not affected by (3.1) and is governed by Newton's approach. Thus, considering Theorem 3.3, R-PSDTLS converges at least quadratically to the unique solution of (2.9).

Computing Cost: Rank r Positive Semi-definite Total Least Squares Problem. The computational complexity of one iteration of R-PSDTLS is given in Table 3.1. The first, second and third columns respectively give the computational complexities of solving the linear problem (3.3) using the conjugate gradient method in the operator form (CG-O), GMRES in the operator form (GMRES-O), and the conjugate gradient method after converting (3.3) into a linear system of equations (CG-L); for details, see [20].

Also, the complexity of computing A, B and C in step (1) of R-PSDTLS is 2mn^2. Thus, the total computational complexity of performing one iteration of R-PSDTLS is

 N_r = 2mn^2 + n^3 r^2 + n^2 (r^2 + r) + 2n^2 r.

As shown in Table 3.1, the computational complexity of CG-L is considerably greater than the ones due to CG-O and GMRES-O.

Computing Cost: Positive Semi-definite Total Least Squares Problem. The computational cost for solving the positive semi-definite total least squares problem is equal to the sum of the computational costs for solving all the rank r positive semi-definite total least squares problems. The GMRES algorithm for solving the operator equation (3.3) is terminated after at most nr iterations [20]; hence, the maximum computational cost would be

 N = 2mn^2 + Σ_{r=1}^n n (n^3 r^2 + n^2 (r^2 + r) + 2n^2 r) ≃ 2mn^2 + n^7/3 + n^6/3 + n^5 + n^4/2.

### 3.4 A Special Case

In some applications, such as rank one signal recovery, a positive semi-definite rank 1 least squares problem needs to be solved; see, e.g., [26]. The following lemma is concerned with the special case r = 1.

###### Lemma 3.5

If r = 1, then problem (2.9) can be converted to a quadratic eigenvalue problem.

Proof. Reformulating the Lagrangian function for the optimization problem (2.9), for the case r = 1, we have

 L(y, s, Ω) = s^2 y^T A y + (1/s^2) y^T B y − y^T C y − Ω(y^T y − 1).

Let u = s y and v = (1/s) y. Thus,

 L(u, v, Ω) = u^T A u + v^T B v − u^T C v − Ω(u^T v − 1).

The KKT necessary optimality conditions lead to

 2Au − Cv − Ωv = 0,    (3.5)
 2Bv − Cu − Ωu = 0,
 u^T v = 1.

If A and B have full rank, then it can be concluded from (3.5) that

 (2B − (1/2)(C + ΩI) A^{-1} (C + ΩI)) v = 0,    (3.6)
 u = (1/2) A^{-1} (C + ΩI) v.

Note that (3.6) is a quadratic eigenvalue problem, which may be solved by various methods [13, 28]. This approach will be referred to as R-PSDTLS.
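Under the stated full-rank assumption, (3.6) can be expanded in powers of Ω, giving (M0 + Ω M1 + Ω^2 M2) v = 0 with M0 = 2B − (1/2) C A^{-1} C, M1 = −(1/2)(C A^{-1} + A^{-1} C) and M2 = −(1/2) A^{-1}. The sketch below (our own construction, with random data) solves this quadratic eigenvalue problem by a standard companion linearization, one family of methods for such problems.

```python
import numpy as np
from scipy.linalg import eig

# QEP (M0 + Ω M1 + Ω^2 M2) v = 0 via companion linearization:
# [0 I; -M0 -M1] z = Ω [I 0; 0 M2] z, with z = [v; Ω v].
rng = np.random.default_rng(6)
n = 4
G = rng.standard_normal((n, n)); A = G @ G.T + n * np.eye(n)   # PD
H = rng.standard_normal((n, n)); B = H @ H.T + n * np.eye(n)   # PD
C0 = rng.standard_normal((n, n)); C = C0 + C0.T                # symmetric
Ai = np.linalg.inv(A)
M0 = 2 * B - 0.5 * C @ Ai @ C
M1 = -0.5 * (C @ Ai + Ai @ C)
M2 = -0.5 * Ai

Zero, I = np.zeros((n, n)), np.eye(n)
AA = np.block([[Zero, I], [-M0, -M1]])
BB = np.block([[I, Zero], [Zero, M2]])
evals, evecs = eig(AA, BB)      # generalized eigenproblem AA z = Ω BB z
```

Each finite eigenpair (Ω, z) yields v = z[:n] with near-zero QEP residual, from which u follows via the second equation of (3.6).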

Next, we point out two mathematical problems and describe how to solve them using R-PSDTLS.

## 4 Two Problems and Their Solutions

Two slightly different problems also arise in some contexts. Here, we describe these problems and show how to solve them making use of Algorithm 1.

(i) Positive semi-definite total minimum rank problem:

 min rank(X)  s.t.  tr(ΔD^T ΔT) ≤ ε.    (4.1)

This problem arises in different contexts such as system identification and control, Euclidean embedding and collaborative filtering [10]. Both ordinary and total least squares formulations have been considered for solving the problem [10, 16]. In an ordinary formulation, the minimum possible rank of a positive semi-definite matrix X needs to be computed so that the ordinary least squares error, ‖DX − T‖_F, is less than an error bound, ε. In a total formulation, however, the goal is to minimize rank(X), where X is a positive semi-definite matrix satisfying tr(ΔD^T ΔT) ≤ ε. To solve this problem using R-PSDTLS, the proposed total error needs to satisfy the constraint E ≤ ε. We start from the smallest possible rank, r = 1, and solve the corresponding positive semi-definite total least squares problem. If the inequality E ≤ ε holds for the computed X_r, then we stop; otherwise, we increase r by one and continue iteratively until the inequality E ≤ ε is satisfied. The iteration might be performed at most n times. If after n iterations none of the matrices X_r, r = 1, …, n, satisfies the inequality, then we consider the X_r with the smallest corresponding value of E as the solution of the minimum rank problem. We summarize the discussion above in the following algorithm.

Algorithm 2. Solving the minimum rank positive semi-definite total least squares (MRPSDTLS) problem.
(1) Input D, T and the error bound ε.
(2) Let r = 1.
(3) Apply R-PSDTLS with the input arguments D, T and r and compute X_r and E_r.
If E_r ≤ ε
then let X = X_r and stop
Else
Let r = r + 1 and go to (3).
EndIf

A significant characteristic of our proposed approach for solving the minimum rank problem is that for each rank r the minimum value of E is determined by applying Algorithm 1. Hence, to solve the minimum rank problem, it is sufficient to find the minimum value of r satisfying the inequality constraint.
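The stopping logic of Algorithm 2 can be sketched as follows; the inner solver and its error measure (here simply the ordinary residual norm) are stand-ins of ours for R-PSDTLS and the total error E.

```python
import numpy as np

# Algorithm 2 sketch: increase r until the rank-r error drops below eps.
def min_rank_psd(D, T, eps, solver):
    n = D.shape[1]
    best = None
    for r in range(1, n + 1):
        X, E = solver(D, T, r)
        if E <= eps:
            return X, r, E               # first rank meeting the bound
        if best is None or E < best[2]:
            best = (X, r, E)
    return best                          # fall back: smallest error seen

# Stand-in rank-r solver (not R-PSDTLS): rank-r truncation of the
# symmetrized LS solution, with the ordinary residual as its error.
def stub_solver(D, T, r):
    X0 = np.linalg.lstsq(D, T, rcond=None)[0]
    Xs = (X0 + X0.T) / 2
    lam, U = np.linalg.eigh(Xs)
    idx = np.argsort(lam)[::-1][:r]
    X = (U[:, idx] * np.clip(lam[idx], 0, None)) @ U[:, idx].T
    return X, np.linalg.norm(D @ X - T, 'fro') ** 2

rng = np.random.default_rng(7)
m, n = 12, 5
W = rng.standard_normal((n, 2)); X_true = W @ W.T   # true rank is 2
D = rng.standard_normal((m, n)); T = D @ X_true
X, r, E = min_rank_psd(D, T, 1e-8, stub_solver)
```

On this error-free rank-2 instance the loop stops at r = 2, the smallest rank meeting the bound.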
(ii) Computing the correlation matrix: This problem is a special case of the positive semi-definite total least squares problem. Computing a correlation matrix is equivalent to finding a positive semi-definite matrix X to satisfy

 X ≃ C,  PX ≃ Q.

The linear constraints X ≃ C and PX ≃ Q can be replaced by the overdetermined system of equations

 [I; P] X = [C; Q].    (4.2)

To solve the overdetermined system of equations (4.2), both ordinary and total formulations have been considered [9, 12, 22, 27]. We note that (4.2) is a special case of the positive semi-definite total least squares problem with data and target matrices [I; P] and [C; Q], where I is the identity matrix, and P and Q are arbitrary. Here, we make use of the PSDTLS approach described in Section 3.2 for solving (4.2).
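Assembling the stacked system (4.2) is straightforward; P, C and Q below are random stand-ins of ours, and the residual of the stacked system splits exactly into the residuals of the two original constraints.

```python
import numpy as np

# Build the stacked correlation-matrix system [I; P] X = [C; Q] of (4.2).
rng = np.random.default_rng(8)
n, p = 4, 3
C = np.eye(n)                       # target for X itself
P = rng.standard_normal((p, n))     # extra linear constraints PX ≈ Q
Q = rng.standard_normal((p, n))
D = np.vstack([np.eye(n), P])       # data matrix [I; P]
T = np.vstack([C, Q])               # target matrix [C; Q]
```

Any candidate X then has D X − T = [X − C; P X − Q], so minimizing the stacked residual treats both constraint blocks at once, which is exactly how PSDTLS is applied to this problem.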

Next, we report some numerical results. We provide comparison of our proposed algorithms and some existing methods on randomly generated test problems.

## 5 Numerical Results

We made use of MATLAB 2012b on a Windows 7 machine with a 3.2 GHz CPU and 4 GB of RAM to implement our proposed algorithms and the other methods. We generated test problems with random data and target matrices. These random matrices were produced using the rand command in MATLAB: rand(m,n) generates an m × n matrix A with uniformly distributed random entries in the interval (0, 1). Using the linear transformation B = a + (b − a)A, a matrix B with random entries in the interval (a, b) is generated.
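A NumPy analogue of this matrix generation recipe (variable names are ours):

```python
import numpy as np

# Uniform entries in (0, 1), rescaled to (a, b) by B = a + (b - a) * A,
# mirroring the MATLAB rand-based recipe described in the text.
rng = np.random.default_rng(9)
m, n, a, b = 50, 40, -1.0, 1.0
A = rng.random((m, n))              # uniform in [0, 1)
Bmat = a + (b - a) * A              # uniform in [a, b)
```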

The numerical results are presented in two parts. Section 5.1 presents the numerical results corresponding to the rank r positive semi-definite total least squares problem and the general positive semi-definite total least squares problem. The numerical results for the two problems mentioned in Section 4 are reported in Section 5.2. The abbreviated names of the problems and the corresponding inputs and outputs are listed in Tables 5.1 and 5.2.

For a given input size, ten random inputs are generated and the average values of the outputs over the ten problems are reported. In Section 5.1, the numerical results corresponding to R-PSDTLS and those due to solving the linear problem (3.3) by each of the three possible methods are presented. We refer to R-PSDTLS using the conjugate gradient method in the operator form to solve the linear problem as R-PSDTLS-CG-O, R-PSDTLS using the conjugate gradient method after converting the linear problem to a linear system of equations as R-PSDTLS-CG-L, and R-PSDTLS using GMRES in the operator form to solve the linear problem as R-PSDTLS-GMRES-O. Considering these numerical results, we can make the following observations:

(1) R-PSDTLS-CG-L generates solutions for which the orthogonality constraint, Y^T Y = I, is exactly satisfied; however, due to its high computing cost it is not practical for large problems.
(2) R-PSDTLS-GMRES-O computes the solution in less computing time than R-PSDTLS-CG-O and R-PSDTLS-CG-L.

Some numerical results are also reported in Section 5.1 to compare our proposed approach for solving the positive semi-definite least squares (PSDLS) problem with two existing methods, the interior point method (PSDLS-IntP in [18]) and the path following method described by Toh (PSDLS-PFToh in [30]).

Numerical results corresponding to the special problems mentioned in Section 4 are reported in Section 5.2. There, numerical results obtained by our proposed algorithm for solving the minimum rank (MR) problem (MR-PSDTLS) and two other methods are reported. These two methods are MR-Toh [31] and MR-Recht [10, 16]. Finally, the numerical results corresponding to our proposed method (CM-PSDTLS) and two other methods, CM-IntP and CM-Sun [30], for solving the correlation matrix (CM) problem are also reported in Section 5.2. In all the tables, the headings m and n correspond to the matrix sizes (the sizes of both data and target matrices), r gives the unknown matrix rank, and the columns with headings t and E respectively contain the average computing time and error value.

In summary, the numerical results confirm the effectiveness of PSDTLS in producing more accurate solutions with lower standard deviation values in less time for solving rank r positive semi-definite total least squares problems. The reported results show that our proposed methods solve the positive semi-definite total least squares problem and the minimum rank problem with a smaller value of error, while being more efficient. Also, the presented results confirm the efficiency of our proposed method in solving the correlation matrix problem.

### 5.1 Positive Semi-definite Total Least Squares Problem

Here, we report the numerical results for solving rank r positive semi-definite total least squares problems and the general positive semi-definite total least squares problem, respectively.

Rank r Positive Semi-definite Total Least Squares Problem. In Table 5.3, the average computing time, t, and the average error value,

 E = tr[(DX − T)^T (D − T X†) U_r U_r^T],

are reported for PSDTLS-CG-O. The fourth column gives the norm of the error corresponding to the orthogonality constraint, ‖Y^T Y − I‖_F.

In Tables 5.4 and 5.5, we report the results for PSDTLS-GMRES-O and PSDTLS-CG-L, respectively.

In Figure 5.1, the Dolan-Moré time profile is presented to compare the computing times of PSDTLS-CG-O, PSDTLS-GMRES-O and PSDTLS-CG-L for solving the PSDLS problems. Generating enough test problems is necessary for producing an illustrative performance profile. We generated 300 test problems (50 problems for each matrix size). The presented time profile shows that PSDTLS-GMRES-O needs much less computing time than the other methods.

Positive Semi-definite Total Least Squares Problem. Here, the numerical results for solving the general positive semi-definite total least squares problems are reported. In Tables 5.6, 5.7 and 5.8, the average computing times, t, and the average error values, E, for solving the PSDLS problem using PSDTLS-CG-O, PSDTLS-CG-L and PSDTLS-GMRES-O are respectively reported. The third column represents the average value of the computed rank.

Considering the reported results in Tables 5.6, 5.7 and 5.8, PSDTLS-CG-O and PSDTLS-GMRES-O perform approximately the same on small problems; however, for large problems, PSDTLS-GMRES-O outperforms PSDTLS-CG-O in almost all the test problems. Thus, we report the results obtained by PSDTLS-GMRES-O in comparisons with other methods.

The average computing times for solving the positive semi-definite total least squares problem using PSDTLS, PSDLS-IntP and PSDLS-PFToh are reported in Table 5.9. The third column presents the corresponding values for PSDLS-IntP and PSDLS-PFToh.

Similarly, the average error values, E, are given in Table 5.10 for PSDLS-IntP, PSDLS-PFToh and PSDTLS.

The corresponding Dolan-Moré time profile is shown in Figure 5.2 to compare the computing times for solving the PSDLS problem. We generated 500 test problems (100 for each matrix size) to provide an illustrative time profile. The results confirm that our proposed algorithm for solving the positive semi-definite least squares problem performs much faster than the other methods.

### 5.2 Special Problems

Here, we report numerical results corresponding to the special problems mentioned in Section 4. In Table 5.11, the average computing times for solving the minimum rank (MR) problem are reported. The third column gives the value of the error bound, ε, in the MR problem.

The average computing times needed by MR-PSDTLS, MR-Toh and MR-Recht and the average resulting rank for solving the MR problem are reported in Table LABEL:t13.

An illustrative comparison is also provided in Figure LABEL:f3, based on the corresponding Dolan-Moré time profile generated from 500 test problems. The presented profile shows that our proposed algorithm for solving the minimum rank problem needs less computing time than the other two methods.
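For intuition on the MR problem itself, its simplest instance (no linear constraints, just a symmetric target matrix and a Frobenius error budget) can be solved by eigenvalue truncation: zero the negative eigenvalues, then greedily drop the smallest positive ones while the budget allows. The helper below is our own sketch of that special case, not the MR-PSDTLS algorithm:

```python
import numpy as np

def min_rank_psd_approx(C, delta):
    """Small-rank PSD matrix X with ||X - C||_F <= delta,
    for a symmetric target C (a simplified MR instance)."""
    C = (C + C.T) / 2
    w, V = np.linalg.eigh(C)                  # eigenvalues, ascending
    # zeroing eigenvalue w_i costs w_i^2 in squared Frobenius error;
    # negative eigenvalues must go for positive semi-definiteness
    err2 = np.sum(np.minimum(w, 0.0) ** 2)
    keep = w > 0
    # drop the smallest positive eigenvalues while the budget allows
    for i in np.argsort(np.where(w > 0, w, np.inf)):
        if w[i] <= 0:
            continue
        if err2 + w[i] ** 2 <= delta ** 2:
            err2 += w[i] ** 2
            keep[i] = False
        else:
            break
    wk = np.where(keep, np.maximum(w, 0.0), 0.0)
    return (V * wk) @ V.T, int(np.count_nonzero(wk))
```

For example, `min_rank_psd_approx(np.diag([3.0, 1.0, 0.5, -0.2]), 1.2)` returns a rank-one matrix, since the eigenvalues -0.2, 0.5 and 1.0 can all be discarded within the budget while 3.0 cannot.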

In Table LABEL:t14, the average computing times are reported for computing the correlation matrices by our proposed method (CM-PSDTLS).

Similarly, in Table LABEL:t15, the average value of the standard deviation for computing the correlation matrix is reported.

The average computing times for CM-PSDTLS, CM-IntP and CM-Sun are reported in Table LABEL:t16.

In Figure LABEL:f4, the Dolan-Moré time profile is presented to compare the computing times needed by CM-PSDTLS, CM-IntP and CM-Sun to solve the correlation matrix problem. Here, we generated 250 test problems (50 for each matrix size). The presented time profile confirms that our proposed algorithm computes a correlation matrix much faster than the other methods.
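For reference, a standard baseline for the nearest correlation matrix problem is Higham's alternating projections method [27], which alternates projections onto the PSD cone and the unit-diagonal set with Dykstra's correction. A minimal sketch (not the CM-PSDTLS algorithm, and without the stopping test a production code would use):

```python
import numpy as np

def nearest_correlation(A, iters=200):
    """Nearest correlation matrix to symmetric A via Higham's
    alternating projections with Dykstra's correction."""
    Y = (A + A.T) / 2
    dS = np.zeros_like(Y)
    for _ in range(iters):
        R = Y - dS
        w, V = np.linalg.eigh(R)
        X = (V * np.maximum(w, 0.0)) @ V.T   # project onto PSD cone
        dS = X - R                           # Dykstra correction
        Y = X.copy()
        np.fill_diagonal(Y, 1.0)             # project onto unit diagonal
    return Y
```

On Higham's classic 3-by-3 example with off-diagonal pattern (1, 0, 1), the method converges to a correlation matrix with off-diagonal entries approximately 0.7607 and 0.1573.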

In Table LABEL:t17, the average values of the standard deviation for computing the correlation matrix are reported.

Considering the numerical results reported in this section, we summarize our observations:

(1) A newly defined problem, R-PSDTLS, was considered and an efficient algorithm was proposed for its solution.
(2) Although our proposed algorithm for solving the PSDLS problem, PSDTLS, applies R-PSDTLS in its search for the solution, it appears to be more efficient than PSDLS-IntP and PSDLS-PFToh.
(3) In contrast with other available methods, our use of the total formulation for solving the PSDLS problem, accounting for error in both data and target matrices, turns out to be practically effective, producing more meaningful results.
(4) The proposed method for solving the PSDLS problem, PSDTLS, is more efficient than the other methods.
(5) The proposed method for solving the minimum rank problem, MR-PSDTLS, is also more efficient than MR-Toh and MR-Recht.
(6) The proposed method for computing the correlation matrix, CM-PSDTLS, proves to be more efficient and robust, computing a correlation matrix with a lower standard deviation of error as compared to CM-IntP and CM-Sun.

## 6 Concluding Remarks

We proposed a new approach to solve positive semi-definite total least squares (PSDLS) problems. Consideration of our proposed error estimate for both data and target matrices admitted a more realistic problem formulation. We first considered a newly defined given rank positive semi-definite total least squares (R-PSDTLS) problem and presented an at least quadratically convergent algorithm for its solution. Numerical results confirmed the effectiveness of our approach in computing solutions of R-PSDTLS problems in less computing time than the interior point method and the path following algorithm. We then showed how to apply R-PSDTLS to solve the general PSDLS problem. Based on the reported numerical results, our method for solving the PSDLS problem also proved to be more efficient than the interior point method and the path following algorithm. A particularly effective approach was also described to solve the given rank positive semi-definite total least squares problem, R-PSDTLS. In addition, we noted that R-PSDTLS can be applied to other problems arising in control and financial modeling: the minimum rank (MR) problem and correlation matrix computation. Using Dolan-Moré performance profiles, we showed our proposed method for solving the MR problem to be more efficient than a path following algorithm and a semi-definite programming approach. Furthermore, in computing the correlation matrix, numerical results showed a lower standard deviation of error as compared to the interior point method and the semi-definite programming approach.

ACKNOWLEDGEMENT. The authors thank the Research Council of Sharif University of Technology for supporting this work.

## References

• [1] Edelman A., Arias T. A., Smith S. T.: The Geometry of Algorithms with Orthogonality Constraints, SIAM J. Matrix Anal. Appl., 20(2), 303-353 (1998)
• [2] Golub G. H., Van Loan C. F.: An analysis of the total least squares problem, SIAM J. Numer. Anal., 17, 883-893 (1980)
• [3] Hu H.: Positive definite constrained least-squares estimation of matrices, Linear Algebra Appl, 229, 167-174 (1995)
• [4] Hu H., Olkin I.: A numerical procedure for finding the positive definite matrix closest to a patterned matrix, Statistical and Probability Letters, 12, 511-515 (1991)
• [5] Krislock N. G., Lang J., Varah J., Pai D. K., Seidel H.: Local Compliance Estimation via Positive Semi-Definite Constrained Least Squares, IEEE Trans. Robotics and Automation, 20(6), 1007-1011 (2004)
• [6] Larson H. J.: Least squares estimation of the components of a symmetric matrix, Technometrics, 8(2), 360-362 (1966)
• [7] McInroy J., Hamann J. C.: Design and control of flexure jointed hexapods, IEEE Trans. Robotics and Automation, 16(4), 372-381 (2000)
• [8] Paige C. C., Strakoš Z.: Scaled total least squares fundamentals, Numer. Math., 91, 117-146 (2000)
• [9] Rebonato R., Jäckel P.: The most general methodology to create a valid correlation matrix for risk management and option pricing purposes, J. Risk, 2, 17-27 (1999)
• [10] Recht B., Fazel M., Parrilo P. A.: Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization, SIAM Review, 52(3), 471-501 (2010)
• [11] Smith S. T.: Optimization Techniques on Riemannian Manifolds, Fields Ins. Com., 3, 113-146 (1994)
• [12] Sun D., Gao Y.: Calibrating least squares covariance matrix problems with equality and inequality constraints, SIAM. J. Matrix Anal. and Appl., 31, 1432-1457 (2009)
• [13] Tisseur F.: The Quadratic Eigenvalue Problem, SIAM Review, 43(2), 235-286 (2001)
• [14] Toh K. C.: An inexact primal-dual path-following algorithm for convex quadratic SDP, Mathematical Programming, 112, 221-254 (2007)
• [15] Woodgate K. G.: Least-squares solution of F = PG over positive semidefinite symmetric P, Linear Algebra Appl., 245, 171-190 (1996)
• [16] Fazel M.: Rank Minimization with Applications, PhD Thesis, Stanford University (2002)
• [17] Huffel S. V., Vandewalle J.: The Total Least Squares Problem: Computational Aspects and Analysis, SIAM (1991)
• [18] Krislock N. G.: Numerical Solution of Semidefinite Constrained Least Squares Problems, M. Sc. Thesis, University of British Columbia (2003)
• [19] Nocedal J., Wright S. J.: Numerical Optimization, Springer, New York (1999)
• [20] Saad Y.: Iterative Methods for Sparse Linear Systems, SIAM, Philadelphia, Second Edition (2003)
• [21] Van Loan C. F., Golub G.: Matrix Computation, 4th edition, JHU Press (2012)
• [22] Werner R., Schöttle K.: Calibration of Correlation Matrices: SDP or not SDP, Technical report, Munich University of Technology, Munich (2007)
• [23] Poignet P., Gautier M.: Comparison of Weighted Least Squares and Extended Kalman Filtering Methods for Dynamic Identification of Robots, Proceedings of the IEEE Conference on Robotics and Automation, San Francisco, CA, USA, 3622-3627 (2000)
• [24] Bagherpour N., Mahdavi-Amiri N.: Direct Methods for Solving Positive Definite Total Least Squares Problems Using Orthogonal Matrix Decompositions, http://arxiv.org/, (2014)
• [25] Gauthier G., Goldberg L., Hayes M., Vannerem P.: Reverse Engineering for Extreme Scenario Stress Testing, MSCI research report, https://fp7.portals.mbs.ac.uk/Portals/59/docs/ (2010)
• [26] Hand P.: Conditions for Existence of Dual Certificates in Rank-One Semidefinite Problems, http://arxiv.org/ (2014)
• [27] Higham N. J.: Computing the nearest correlation matrix (A problem from finance), MIMS EPrint: 2006.70, http://eprints.ma.man.ac.uk/ (2006)
• [28] Liang X. ,Li R. C.: The Hyperbolic Quadratic Eigenvalue Problem, http://www.uta.edu/math/preprint/rep201401.pdf (2014)
• [29] Petersen K. B., Pedersen M. S.: The Matrix Cookbook, http://orion.uwaterloo.ca/ hwolkowi/ (2008)
• [30] Toh K. C.: QSDP version 0, beta – a MATLAB software for convex quadratic semidefinite programming, http://www.math.nus.edu.sg/ mattohkc/qsdp.html (2009)
• [31] Toh K. C., Yun S.: An Accelerated Proximal Gradient Algorithm for Nuclear Norm Regularized Least Squares Problems, http://www.optimization-online.org/DB-FILE/2009/03/2268.pdf (2009)