
# Factorization approach to structured low-rank approximation with applications

*The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007-2013) / ERC Grant agreement number 258581 “Structured low-rank approximation: Theory, algorithms, and applications”, the Research Foundation Flanders (FWO-Vlaanderen), the Flemish Government (Methusalem Fund, METH1), and the Belgian Federal Government (Interuniversity Attraction Poles programme VII, Dynamical Systems, Control, and Optimization). MI is an FWO Pegasus Marie Curie Fellow.*

Mariya Ishteva, Konstantin Usevich, Ivan Markovsky
Dept. ELEC, Vrije Universiteit Brussel, Building K, Pleinlaan 2, B-1050 Brussels, Belgium ({mariya.ishteva, konstantin.usevich, ivan.markovsky}@vub.ac.be).
###### Abstract

We consider the problem of approximating an affinely structured matrix, for example a Hankel matrix, by a low-rank matrix with the same structure. This problem occurs in system identification, signal processing and computer algebra, among others. We impose low rank by modeling the approximation as a product of two factors with reduced dimension. The structure of the low-rank model is enforced by introducing a penalty term in the objective function. The proposed local optimization algorithm is able to solve the weighted structured low-rank approximation problem, as well as to deal with the cases of missing or fixed elements. In contrast to approaches based on kernel representations (in the linear algebraic sense), the proposed algorithm is designed to address the case of small targeted rank. We compare it to existing approaches on numerical examples of system identification, the approximate greatest common divisor problem, and symmetric tensor decomposition and demonstrate its consistently good performance.

Key words. low-rank approximation, affine structure, penalty method, missing data, system identification, approximate greatest common divisor, symmetric tensor decomposition

AMS subject classifications. 15A23, 15A83, 65F99, 93B30, 37M10, 37N30, 11A05, 15A69

## 1 Introduction

Low-rank approximations are widely used in data mining, machine learning, and signal processing, as a tool for dimensionality reduction, feature extraction, and classification. In system identification, signal processing, and computer algebra, in addition to having low rank, the matrices are often structured, e.g., they have (block) Hankel, (block) Toeplitz, or (block) Sylvester structure. Motivated by this fact, in this paper, we consider the problem of approximating a given structured matrix by a matrix with the same structure and with a pre-specified reduced rank $r$.

### 1.1 Problem formulation

Formally, we consider the following problem

$$\min_{\hat D} \; \|D - \hat D\|_W^2 \quad \text{subject to} \quad \operatorname{rank}(\hat D) \le r \ \text{ and } \ \hat D \text{ is structured}, \qquad (1)$$

where $D, \hat D \in \mathbb{R}^{m \times n}$, $r < \min(m, n)$, and $\|\cdot\|_W$ is a semi-norm on the space of $m \times n$ matrices, induced by a positive semidefinite matrix $W \in \mathbb{R}^{mn \times mn}$ as

$$\|D\|_W^2 := (\operatorname{vec}(D))^\top W \operatorname{vec}(D).$$
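As a small illustration (a sketch, not the paper's code), the semi-norm can be evaluated directly from its definition; with $W$ equal to the identity it reduces to the squared Frobenius norm:

```python
import numpy as np

def weighted_norm_sq(D, W):
    """Squared weighted semi-norm ||D||_W^2 = vec(D)^T W vec(D).

    vec() stacks the columns of D (column-major order)."""
    d = D.flatten(order="F")
    return d @ W @ d

D = np.array([[1.0, 2.0], [3.0, 4.0]])
W = np.eye(4)  # identity weights recover the squared Frobenius norm
assert np.isclose(weighted_norm_sq(D, W), np.linalg.norm(D, "fro") ** 2)
```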

Being able to deal with weights in (1) has a number of advantages in practice. First, due to sensor failure, malfunctioning of a communication channel, or simply due to unseen events, real-world data can have unknown (missing) elements. If repeating the experiments until all data are collected is not an option, for example because of the high price of the experiments or the high computational time, the missing data have to be approximated as well. The problem of estimating missing data is also known as the matrix completion problem and is well-studied in the case of unstructured matrices. In the case of structured matrices, however, this problem has few solutions [21]. A natural way to deal with missing elements is to introduce zeros in the weight matrix at the positions corresponding to the missing elements.

In addition, if prior knowledge is available about the importance or the correctness of each (noisy) element, this knowledge can be encoded in the weight matrix. Note also that finding the closest structured matrix to a given structured matrix with respect to the Frobenius norm can be encoded with $W$ being the identity matrix.

The structures considered in (1) are affine structures. This class of structures includes many structures of interest and contains all linear structures. Moreover, it allows us to deal with fixed elements, i.e., to keep some elements of $D$ unchanged in the approximating matrix $\hat D$. Unlike other approaches in the literature, we do not use infinite weights, but rather incorporate the fixed values in a special matrix.

### 1.2 Solution approaches

Existing algorithms to solve problem (1) can be classified into three groups: i) based on local optimization, ii) using relaxations, or iii) using heuristics, such as the widely used Cadzow method [5, 34]. Relaxation methods include subspace-based methods [33, 16] and, more recently, nuclear norm based methods [18, 17, 10]. Local optimization algorithms use kernel or input/output (I/O) representations of the rank constraint (the latter also known as the structured total least squares (STLS) problem), as described in Table 1. Some of these algorithms cannot deal with fixed or missing elements or solve only special cases of problem (1), e.g., the case of the Frobenius norm.

In this paper, we study an underrepresented point of view, namely the image representation of the rank constraint (also known as matrix factorization approach):

$$\operatorname{rank}(\hat D) \le r \iff \hat D = PL \ \text{ for some } P \in \mathbb{R}^{m \times r},\ L \in \mathbb{R}^{r \times n}.$$

Although this view is widely represented in the literature on (unstructured) low-rank approximation, it is underrepresented in the case of structured low-rank approximation. Imposing both low rank and structure simultaneously with the image representation is a nontrivial problem [6]. The main difficulty comes from the fact that the structure has to be imposed on the approximation via the product $PL$ of its factors $P$ and $L$.
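The equivalence between bounded rank and the factored form can be illustrated numerically (a sketch with arbitrarily chosen dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 6, 5, 2

# Image (factorization) representation: any product P @ L with inner
# dimension r has rank at most r.
P = rng.standard_normal((m, r))
L = rng.standard_normal((r, n))
D_hat = P @ L
assert np.linalg.matrix_rank(D_hat) <= r

# The converse also holds: a matrix of rank <= r factors as P @ L,
# e.g. via its (truncated) SVD.
U, s, Vt = np.linalg.svd(D_hat)
P2, L2 = U[:, :r] * s[:r], Vt[:r, :]
assert np.allclose(P2 @ L2, D_hat)
```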

### 1.3 Our contribution

We propose to resolve the problem of imposing the structure via the factors $P$ and $L$ by using the penalty method [26, §17.1]. The structure can be imposed by introducing a penalty term in the cost function in (1), representing the distance between the current iterate and the space of structured matrices, i.e.,

$$\min_{P, L} \; \|D - PL\|_W^2 + \lambda \operatorname{dist}(PL,\ \text{its closest structured matrix}).$$

In this way the constrained optimization problem (1) becomes an unconstrained problem, which can be solved by an alternating projections (block coordinate descent) algorithm. To the best of our knowledge, this is the first detailed study of the matrix factorization view of structured low-rank approximation. We apply the proposed algorithm on practically relevant and nontrivial simulation examples from system identification, computer algebra (finding a common divisor of polynomials with noisy coefficients), and symmetric tensor decomposition, and demonstrate its consistently good performance. Large scale problems are not studied in the paper.

The main competitors of the proposed local optimization approach are the kernel-based algorithms, which aim at solving the same problem. In contrast to kernel-based approaches, which are meant for large rank $r$ (small rank reduction $m - r$), the proposed approach is more efficient for problems with small $r$. Moreover, for general affine structures, existing kernel approaches have restrictions on the possible values of the reduced rank $r$ [23]. With the new approach we can overcome this limitation.

Local optimization techniques need a good starting value in order to converge to an adequate local optimum. We solve a series of related subproblems with increasing penalty parameter $\lambda$. Due to the use of a small initial penalty parameter, the truncated SVD provides a good initial approximation for the initial subproblem. Each subsequent subproblem is initialized with the solution of the previous one, providing a good initialization for every subproblem.

Another known issue with the structured low-rank approximation problem is the possible non-existence of a solution with fixed rank [6]. The proposed approach avoids this issue by requiring that the rank of the approximation is bounded from above by $r$. This way the feasible set is closed. Then if the fixed elements are all zeros (which is the case for most common structures), the feasible set is nonempty and a solution always exists. We note, however, that in the non-generic case when the solution of (1) is of lower rank, the proposed algorithm has to be modified for the convergence results to hold.

Another advantage of the proposed algorithm is its simplicity. As we show in Section 4.1, the proposed algorithm reduces to solving a sequence of least squares problems with closed form solutions. This makes it more robust to implementation issues, unlike the algorithm proposed in [6].

Last but not least, we are able to solve the weighted structured low-rank approximation problem, as well as to deal with the cases of missing elements in the data matrix or fixed elements in the structure. These “features” have great impact on the applicability of the proposed approach, as demonstrated in the numerical section.

The rest of the paper is organized as follows. In Section 2, we discuss the structure specification and how to obtain the closest structured matrix to a given unstructured matrix (orthogonal projection on the space of structured matrices). In Section 2.3, we discuss the structure parameter point of view of the main optimization problem. Our reformulations using penalty terms are proposed in Section 3. The main algorithm and its properties are discussed in Section 4. In Section 5, it is compared with existing approaches on numerical examples. In Section 6, we draw our final conclusions.

## 2 Structure specification and its use

Commonly used structures include Hankel, block Hankel (system identification, speech encoding, filter design), Toeplitz, block Toeplitz (signal processing and image enhancement), Sylvester, extended Sylvester (computer algebra), and banded matrices with fixed bandwidth, see [20, Table 1.1], [6, §2] and the references therein. These matrices have a pattern for the position of their elements. For example, in an $m \times n$ Hankel matrix $H$, the elements along any anti-diagonal are the same, i.e.,

$$H_{ij} = p_{i+j-1}$$

for some vector $p \in \mathbb{R}^{n_p}$, $n_p = m + n - 1$, called the structure parameter vector for the matrix $H$. Note that any (unstructured) matrix can be considered as a structured matrix with $n_p = mn$ structure parameters.
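For concreteness, a minimal sketch (with a hypothetical helper name, not from the paper) that builds a Hankel matrix from its structure parameter vector:

```python
import numpy as np

def hankel_matrix(p, m):
    """m x (len(p) - m + 1) Hankel matrix from the parameter vector p:
    H[i, j] = p[i + j], so every anti-diagonal is constant."""
    n = len(p) - m + 1
    return np.array([[p[i + j] for j in range(n)] for i in range(m)])

p = np.arange(1.0, 8.0)   # n_p = 7 structure parameters
H = hankel_matrix(p, 3)   # 3 x 5 Hankel matrix
assert H.shape == (3, 5)
assert H[0, 2] == H[1, 1] == H[2, 0] == 3.0  # constant anti-diagonal
```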

In this section, we first formally introduce the affine structures and then discuss the orthogonal projection on the space of structured matrices.

### 2.1 Affine structures

Formally, affine matrix structures are defined as

$$\mathcal{S}(p) = S_0 + \sum_{k=1}^{n_p} S_k p_k, \qquad (2)$$

where $S_0, S_1, \ldots, S_{n_p} \in \mathbb{R}^{m \times n}$ and $n_p$ is the number of structure parameters. We require $n_p$ to be minimal in the sense that

$$\operatorname{image}(\mathcal{S}) := \{\mathcal{S}(p) \mid p \in \mathbb{R}^{n_p}\}$$

cannot be represented with fewer than $n_p$ parameters. It is convenient to define the following matrix

$$S = \begin{bmatrix} \operatorname{vec}(S_1) & \cdots & \operatorname{vec}(S_{n_p}) \end{bmatrix} \in \mathbb{R}^{mn \times n_p}, \qquad (3)$$

where $\operatorname{vec}(X)$ denotes the (column-wise) vectorization of the matrix $X$. The minimality of $n_p$ is equivalent to $S$ having full column rank. For simplicity, we assume that

• (A) the matrix $S$ consists of only zeros and ones, and there is at most one nonzero element in each row of $S$, i.e., every element of the structured matrix corresponds to (at most) one element of $p$ or is a fixed element.

This assumption is satisfied for the common structures mentioned earlier ((block) Hankel, (block) Toeplitz, etc.) and implies that $S^\top S$ is diagonal and $S^\top \operatorname{vec}(S_0) = 0$.

The matrix $S_0$ is introduced to handle fixed elements and is independent of the values of the structure parameters. The parameter vector $p$ is a vector of true parameters and does not include elements corresponding to fixed elements in the structure specification. In many cases (for example, the Sylvester matrix in Section 5.3), the fixed elements are zeros, and therefore $S_0 = 0$ (the structure is linear). However, we aim at dealing with the more general case of arbitrary fixed elements.

### 2.2 Orthogonal projection on image(S)

Next we discuss the orthogonal projection of a matrix $X \in \mathbb{R}^{m \times n}$ onto the space of structured matrices $\operatorname{image}(\mathcal{S})$. This projection is used in the optimization algorithm of Section 4.

###### Lemma 1.

For a structure satisfying assumption (A), the orthogonal projection of a matrix $X$ on $\operatorname{image}(\mathcal{S})$ is given by

$$P_{\mathcal{S}}(X) := \mathcal{S}(S^{\dagger}\operatorname{vec}(X)), \quad \text{where} \quad S^{\dagger} := (S^\top S)^{-1} S^\top. \qquad (4)$$

The proof is given in the appendix. With some modifications, this lemma also holds for any affine structure. For future reference, using (2), (3), and (4), we also have the following equality

$$\operatorname{vec}(P_{\mathcal{S}}(X)) = \operatorname{vec}(S_0) + \Pi_S \operatorname{vec}(X), \qquad (5)$$

where $\Pi_S := S S^{\dagger}$ is the orthogonal projector on the image of $S$.

The effect of applying $S^{\dagger}$ to a vectorized matrix $\operatorname{vec}(X)$ is to produce a structure parameter vector by averaging the elements of $X$ corresponding to the same parameter $p_k$. In particular, applying $S^{\dagger}$ to a (vectorized) structured matrix extracts its structure parameter vector. Further explanation is provided in the appendix.
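This averaging interpretation can be verified numerically for the Hankel structure; the matrix `S` below is the structure matrix of (3) under assumption (A) (a sketch, not the paper's code):

```python
import numpy as np

m, n = 3, 4
n_p = m + n - 1  # number of Hankel structure parameters

# Structure matrix S of (3): column k is vec(S_k), with ones on the
# k-th anti-diagonal (column-major vec convention).
S = np.zeros((m * n, n_p))
for i in range(m):
    for j in range(n):
        S[j * m + i, i + j] = 1.0

rng = np.random.default_rng(1)
X = rng.standard_normal((m, n))

# Structure parameters of the projection, via formula (4) ...
p_hat = np.linalg.solve(S.T @ S, S.T @ X.flatten(order="F"))

# ... coincide with averaging the entries of X on each anti-diagonal.
for k in range(n_p):
    antidiag = [X[i, k - i] for i in range(m) if 0 <= k - i < n]
    assert np.isclose(p_hat[k], np.mean(antidiag))
```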

### 2.3 Parameter view of weighted structured low-rank approximation

This section relates problem (1) to an approximation problem in the parameter norm

$$\|x\|_{\bar W}^2 := x^\top \bar W x,$$

where $\bar W \in \mathbb{R}^{n_p \times n_p}$ is a symmetric positive semidefinite matrix of weights. If $\bar W$ is the identity matrix, then $\|x\|_{\bar W} = \|x\|_2$.

###### Lemma 2.

Problem (1) is equivalent to the following problem

$$\min_{\hat p} \; \|p - \hat p\|_{\bar W}^2 \quad \text{subject to} \quad \operatorname{rank}(\mathcal{S}(\hat p)) \le r, \qquad (6)$$

with

$$\bar W = S^\top W S. \qquad (7)$$
###### Proof.

Indeed,

$$\|\mathcal{S}(p) - \mathcal{S}(p_0)\|_W^2 = \operatorname{vec}(\mathcal{S}(p) - \mathcal{S}(p_0))^\top W \operatorname{vec}(\mathcal{S}(p) - \mathcal{S}(p_0)) = (S(p - p_0))^\top W S(p - p_0) = (p - p_0)^\top \bar W (p - p_0) = \|p - p_0\|_{\bar W}^2.$$

Therefore, for $W$ and $\bar W$ related by (7), problems (6) and (1) are equivalent. ∎
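This equivalence is easy to check numerically for the Hankel structure matrix $S$, with a randomly generated positive semidefinite $W$ (a sketch, not the paper's code):

```python
import numpy as np

m, n = 3, 4
n_p = m + n - 1

# Hankel structure matrix S (column-major vec convention).
S = np.zeros((m * n, n_p))
for i in range(m):
    for j in range(n):
        S[j * m + i, i + j] = 1.0

rng = np.random.default_rng(2)
W = rng.standard_normal((m * n, m * n))
W = W.T @ W                   # a positive semidefinite weight matrix
W_bar = S.T @ W @ S           # induced parameter-space weights, cf. (7)

p, p0 = rng.standard_normal(n_p), rng.standard_normal(n_p)
lhs = (S @ (p - p0)) @ W @ (S @ (p - p0))   # ||S(p) - S(p0)||_W^2
rhs = (p - p0) @ W_bar @ (p - p0)           # ||p - p0||_{W_bar}^2
assert np.isclose(lhs, rhs)
```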

In the literature, problem (6) is sometimes referred to as the main problem from which (1) is derived.

## 3 Penalized structured low-rank approximation

Each of the constraints in (1) can easily be handled separately. Approximating a matrix by a structured matrix without imposing low rank can be performed by orthogonally projecting the matrix on the space of structured matrices (see Section 2.2). Unweighted low-rank approximation without imposing structure can be done using the truncated singular value decomposition (SVD) [12]. However, imposing both low rank and a fixed structure on the approximation is nontrivial even in the unweighted case ($W = I$). Likewise, due to the non-convexity of the rank constraint, the weighted low-rank approximation problem is difficult already in the unstructured case [30].

We approach the weighted structured low-rank approximation problem from a new point of view, namely by a penalty technique. We propose two novel reformulations and discuss their relation to the original problem.

The main idea is to solve a series of related simpler subproblems, the solution of each subsequent problem being forced closer to the feasible region of the main problem. One of the requirements (low-rank or structure) will always be imposed, while the other one will be satisfied only upon convergence. We have the following two choices (see Figure 1):

• penalize the structure deviation

$$\min_{P, L} \; \|D - PL\|_W^2 + \lambda \|PL - P_{\mathcal{S}}(PL)\|_F^2, \qquad (8)$$

where $\lambda > 0$ is a penalty parameter (discussed in Section 4.1.1), $\|\cdot\|_F$ stands for the Frobenius norm, and $P_{\mathcal{S}}$ is defined in (4), or

• penalize the low-rank deviation

$$\min_{P, L} \; \|D - P_{\mathcal{S}}(PL)\|_W^2 + \lambda \|PL - P_{\mathcal{S}}(PL)\|_F^2. \qquad (9)$$

The choice of $\lambda$ is discussed in Section 4.1.1. Note that for $\lambda = \infty$ the penalty term has to be zero and the three problems (1), (8) and (9) are equivalent. The interpretations of (8) and (9) are however different. In (8), $PL$ is a matrix of low rank but it only approximately obeys the given structure. Forcing the penalty term to zero forces $PL$ to have the required structure as well. In (9), the iterate can be considered to be $P_{\mathcal{S}}(PL)$, i.e., at each iteration the iterate is a structured matrix but the low-rank constraint is only approximately satisfied. Forcing the penalty term to zero forces the iterate to have low rank as well.

Since $S$ is of full column rank, given $\bar W$ in problem (6), there are many possibilities to choose $W$ satisfying (7), such that problems (1) and (6) are equivalent. However, the following holds true.

###### Remark 1.

If $W$ satisfies (7), problem (9) is independent of the particular choice of $W$ and can be formulated using $\bar W$ in the following way

$$\min_{P, L} \; \|p - S^{\dagger}\operatorname{vec}(PL)\|_{\bar W}^2 + \lambda \|PL - P_{\mathcal{S}}(PL)\|_F^2. \qquad (10)$$

For this reason, we will focus on problem formulation (9) and its equivalent representation (10). For a particular choice of $W$ (satisfying (7)), problem (8) is equivalent to (10).

###### Remark 2 (Existence of solution).

A known issue with the structured low-rank approximation problem is the possible non-existence of a solution with fixed rank [6]. The proposed approach avoids this issue by requiring that the rank of the approximation is bounded from above by $r$. This way the feasible set is closed. Then if $S_0 = 0$ (which is the case for most common structures), the feasible set is nonempty and a solution always exists. We note, however, that in the non-generic case when the solution of (1) is of lower rank, the proposed algorithm has to be modified for the convergence results to hold.

## 4 The proposed algorithm

In this section, we propose an algorithm in the framework of the penalty methods. We first discuss how the minimization problem (10) can be solved for a fixed value of the penalty parameter $\lambda$ and then present the algorithmic and computational details related to the proposed algorithm.

### 4.1 Description of the algorithm

The main idea is to solve the minimization problem (10) by alternatingly improving the approximations of $L$, for fixed $P$,

$$\min_{L} \; \|p - S^{\dagger}\operatorname{vec}(PL)\|_{\bar W}^2 + \lambda \|PL - P_{\mathcal{S}}(PL)\|_F^2 \qquad (11)$$

and of $P$, for fixed $L$,

$$\min_{P} \; \|p - S^{\dagger}\operatorname{vec}(PL)\|_{\bar W}^2 + \lambda \|PL - P_{\mathcal{S}}(PL)\|_F^2. \qquad (12)$$

Let $I_n$ be the $n \times n$ identity matrix and let ‘$\otimes$’ be the Kronecker product.

###### Lemma 3.

Problems (11) and (12) are equivalent to the following least squares problems

$$\min_{\operatorname{vec}(L)} \left\| \begin{bmatrix} M S^{\dagger} \\ \sqrt{\lambda}\, \Pi_{S^\perp} \end{bmatrix} (I_n \otimes P) \operatorname{vec}(L) - \begin{bmatrix} M p \\ \sqrt{\lambda}\, \Pi_{S^\perp} \operatorname{vec}(S_0) \end{bmatrix} \right\|_2^2 \ \text{and} \ \min_{\operatorname{vec}(P)} \left\| \begin{bmatrix} M S^{\dagger} \\ \sqrt{\lambda}\, \Pi_{S^\perp} \end{bmatrix} (L^\top \otimes I_m) \operatorname{vec}(P) - \begin{bmatrix} M p \\ \sqrt{\lambda}\, \Pi_{S^\perp} \operatorname{vec}(S_0) \end{bmatrix} \right\|_2^2 \qquad (13)$$

for any $M$ with $M^\top M = \bar W$, and with $\Pi_{S^\perp} := I_{mn} - \Pi_S$ being the orthogonal projector on the left kernel of $S$.

The proof is given in the appendix.

Both reformulations in Lemma 3 are least squares problems and can be solved in closed form. For fixed $\lambda$, we propose an algorithm in the framework of alternating least squares and block coordinate descent, namely we alternatingly improve the approximations of $P$ and of $L$ by solving the least squares problems in (13). We discuss the update strategy for $\lambda$ in Section 4.1.1. The choice of initial approximation for $P$ and $L$ and the stopping criteria are discussed in Section 4.1.2. The summary of the proposed algorithm is presented in Algorithm 1.
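The overall scheme can be sketched for the Hankel structure with $\bar W = I$ and $S_0 = 0$. This is a simplified illustration of the idea, not the paper's Algorithm 1: the $\lambda$-schedule and stopping tests are replaced by naive choices, and weights, missing data, and fixed elements are omitted:

```python
import numpy as np

def hankel_S(m, n):
    """Structure matrix S of the m x n Hankel structure (column-major vec)."""
    S = np.zeros((m * n, m + n - 1))
    for i in range(m):
        for j in range(n):
            S[j * m + i, i + j] = 1.0
    return S

def slra_penalized(p, m, r, lam0=0.1, lam_max=1e8, iters=400):
    """Penalty / alternating least squares sketch for Hankel structured
    low-rank approximation (W_bar = I, S0 = 0)."""
    n = len(p) - m + 1
    S = hankel_S(m, n)
    S_pinv = np.linalg.pinv(S)
    Pi_perp = np.eye(m * n) - S @ S_pinv  # projector on complement of image(S)

    # Initialization: truncated SVD of the structured matrix S(p).
    D = (S @ p).reshape((m, n), order="F")
    U, s, Vt = np.linalg.svd(D)
    P, L = U[:, :r] * s[:r], Vt[:r, :]

    b = np.concatenate([p, np.zeros(m * n)])
    lam = lam0
    for _ in range(iters):
        # Update L for fixed P: a linear least squares problem, cf. (13).
        A = np.kron(np.eye(n), P)
        A_ls = np.vstack([S_pinv @ A, np.sqrt(lam) * Pi_perp @ A])
        L = np.linalg.lstsq(A_ls, b, rcond=None)[0].reshape((r, n), order="F")
        # Update P for fixed L, by symmetry.
        A = np.kron(L.T, np.eye(m))
        A_ls = np.vstack([S_pinv @ A, np.sqrt(lam) * Pi_perp @ A])
        P = np.linalg.lstsq(A_ls, b, rcond=None)[0].reshape((m, r), order="F")
        lam = min(1.3 * lam, lam_max)  # gradually tighten the structure penalty
    return P, L

# Noisy parameters of a rank-1 Hankel matrix (a geometric sequence).
rng = np.random.default_rng(3)
p_true = 0.8 ** np.arange(7.0)
p = p_true + 0.01 * rng.standard_normal(7)
P, L = slra_penalized(p, m=3, r=1)

# Upon convergence, P @ L is (numerically) a Hankel matrix of rank <= 1.
v = (P @ L).flatten(order="F")
S = hankel_S(3, 5)
hankel_dev = np.linalg.norm(v - S @ np.linalg.pinv(S) @ v) / np.linalg.norm(v)
```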

Due to the simplicity of Algorithm 1, dealing with weights, missing elements and structures with fixed elements is straightforward. The weight matrix $\bar W$ and the matrix $S_0$ of fixed elements are readily introduced in (13) and thus also in Algorithm 1. Dealing with missing elements is realized by introducing zeros in the weight matrix at the positions corresponding to the missing elements. Numerical examples are presented in Section 5.

#### 4.1.1 Parameter λ

In theory, if we let $\lambda \to \infty$, then (10) is the exact structured low-rank approximation problem. In practice, we may fix $\lambda$ to a “large enough” value, in which case the solution is only approximately a structured matrix. The higher the value of $\lambda$, the better the structure constraint is satisfied; however, too large values may lead to numerical issues.

Alternatively, adaptive schemes for updating $\lambda$ are also possible. We can start from a small value and increase it with each iteration or set of iterations. This way we allow the algorithm to move to a “good region” first and then impose the structure constraint more strictly [26]. The following strategy has been proposed in [26, §17.1]: if solving the previous subproblem was expensive, increase $\lambda$ only modestly; if solving the previous subproblem was cheap, increase $\lambda$ more ambitiously.

#### 4.1.2 Initial guess and stopping criterion

Let $U \Sigma V^\top$ be the truncated SVD of the given matrix $D$. We initialize with $P = U\Sigma$ and $L = V^\top$. For small $\lambda$, the second term in the objective function is also small. In particular, for $\lambda = 0$, problem (8) is exactly the (unstructured) low-rank approximation problem, for which the truncated SVD provides a globally optimal solution (in the unweighted case). Problem (9) and thus Problem (10) are equivalent to Problem (8) for $\lambda = 0$ and are still closely related for small $\lambda$ as well. This is why the truncated SVD provides a good initial approximation for Problem (9) and thus also for Problem (10). We solve a series of related subproblems with increasing penalty parameter $\lambda$. Each subsequent subproblem is initialized with the solution of the previous one, providing a good initialization for every subproblem.

Consider the following stopping criteria. For $\lambda$ fixed to $\lambda_j$, stop when the derivatives of the objective function are smaller than a tolerance $\varepsilon_j$, with $\varepsilon_j \to 0$ as $j \to \infty$. In practice, if we do not want to compute derivatives, the algorithm is often stopped when there is little change in the column space of $P$, although further progress can potentially be achieved. Note that for small $\lambda$ we do not need to solve the problem exactly. Thus, we can stop earlier and avoid slow convergence. Only when $\lambda$ becomes large, a good approximation is required. We stop iterating when $\lambda$ is “large enough”.

We declare that $PL$ is a structured matrix if

$$\frac{\|PL - P_{\mathcal{S}}(PL)\|_F^2}{\|PL\|_F^2} < \varepsilon,$$

for a small threshold $\varepsilon$.

#### 4.1.3 Computational complexity

The main computational cost is due to solving the least squares problems in (13), which is equivalent to solving two systems of linear equations. The computational cost for constructing the matrices and the vectors of the systems is smaller than the cost for solving the systems. The sizes of the systems' matrices are $nr \times nr$ and $mr \times mr$, respectively. Suppose that $m \le n$ and recall that $n_p \le mn$. Then, the cost for one step of the proposed algorithm is

$$O((rn)^2 (n_p + mn)) = O(n^3 m r^2).$$

Note that this estimate does not take into account the available structure and sparsity of the matrices in (13). If the structure and sparsity are exploited, faster computation can be performed. Additionally, compared to the kernel approaches, which are fast for large values of $r$ (small rank reduction $m - r$), the proposed approach is designed for small values of $r$. If the structure is not taken into account, the kernel approaches also have computational cost that is cubic in $n$, and moreover their cost per iteration is higher for small $r$.

### 4.2 Convergence properties

Algorithm 1 falls into the setting of the penalty methods for constrained optimization [26, §17.1], whose convergence properties are well understood. For fixed $\lambda$, the proposed algorithm is an alternating least squares (or block coordinate descent) algorithm. Since we can solve the least squares problems in (13) in closed form, every limit point of the generated sequence is a stationary point [2, §1.8.1, §2.7][13]. The convergence rate of these methods is linear.

For the convergence properties of the algorithm as $\lambda \to \infty$, we have the following theorem borrowed from the theory of the quadratic penalty method. We first define $c(P, L)$ as the vector of penalties

$$c(P, L) := \operatorname{vec}(PL - P_{\mathcal{S}}(PL)). \qquad (14)$$
###### Theorem 1.

[26, Th. 17.2.] Let $\lambda_j \to \infty$ and $\varepsilon_j \to 0$. If a limit point $(P, L)_*$ of the sequence generated by Algorithm 1 is infeasible, it is a stationary point of the function $\|c(P, L)\|_2^2$. On the other hand, if a limit point $(P, L)_*$ is feasible and the constraint gradients are linearly independent, then $(P, L)_*$ is a KKT (Karush-Kuhn-Tucker) point for problem (1). For such points, we have for any infinite subsequence $\mathcal{K}$ such that $\lim_{j \in \mathcal{K}} (P, L)_j = (P, L)_*$ that

$$\lim_{j \in \mathcal{K}} -\lambda_j c_i((P, L)_j) = \nu_i^*, \quad \text{for all } i = 1, \ldots, mn,$$

where $\nu^*$ is the multiplier vector that satisfies the KKT conditions.

We next discuss the applicability of the theorem in our setting and some important special cases. We start with a discussion on the constraints.

#### 4.2.1 Constraints

There are $mn$ constraints in total, defined by

$$PL = P_{\mathcal{S}}(PL) \iff c(P, L) = 0 \iff \Pi_{S^\perp} \operatorname{vec}(PL) - \operatorname{vec}(S_0) = 0. \qquad (15)$$

Note, however, that by assumption (A), $S^\top \operatorname{vec}(S_0) = 0$, and thus

$$\Pi_{S^\perp} \operatorname{vec}(S_0) = (I_{mn} - \Pi_S)\operatorname{vec}(S_0) = (I_{mn} - S(S^\top S)^{-1} S^\top)\operatorname{vec}(S_0) = \operatorname{vec}(S_0).$$

Thus, (14) can be written as

$$c(P, L) = \Pi_{S^\perp}(\operatorname{vec}(PL) - \operatorname{vec}(S_0)) = \Pi_{S^\perp}((L^\top \otimes I_m)\operatorname{vec}(P) - \operatorname{vec}(S_0)) = \Pi_{S^\perp}((I_n \otimes P)\operatorname{vec}(L) - \operatorname{vec}(S_0)). \qquad (16)$$

The optimization variables are the $mr$ entries of $P$ and the $nr$ entries of $L$, and the above equivalences show that each constraint is linear in each optimization variable. It can be concluded that the matrix of constraint gradients (the Jacobian of the vector of constraints) is

$$J_c(P, L) = \Pi_{S^\perp} \begin{bmatrix} L^\top \otimes I_m & I_n \otimes P \end{bmatrix} \in \mathbb{R}^{mn \times (mr + nr)}, \qquad (17)$$

where the gradient of the $i$-th constraint is the $i$-th row of the matrix.

#### 4.2.2 $S_0 = 0$ and feasibility of the limit points

The class of structures with $S_0 = 0$ is particularly interesting because of the following reasons:

• Most common structures ((block-, mosaic-, quasi-) Hankel, (extended) Sylvester, etc.) are in this class.

• Problem (1) always has a solution since the feasible set is closed and nonempty (the zero matrix is in the feasible set).

• The limit points of the sequence generated by Algorithm 1 are feasible, as follows directly from the following proposition.

###### Proposition 1.

Let $S_0 = 0$. Then any stationary point of $\|c(P, L)\|_2^2$ is feasible.

The proof is given in the appendix.

#### 4.2.3 Equivalent set of constraints

Theorem 1 cannot be applied directly to the set of constraints $c(P, L) = 0$, because the $mn$ constraint gradients cannot be linearly independent ($\operatorname{rank}(J_c(P, L)) \le mn - n_p$). We will next transform the set of constraints into an equivalent set $\tilde c(P, L) = 0$, such that the conditions of Theorem 1 can be satisfied for $\tilde c$.

Let $S_\perp \in \mathbb{R}^{mn \times (mn - n_p)}$ be a matrix whose columns span the orthogonal complement of $\operatorname{image}(S)$, with $S_\perp^\top S_\perp = I_{mn - n_p}$. Then we also have

$$\Pi_{S^\perp} = S_\perp (S_\perp^\top S_\perp)^{-1} S_\perp^\top = S_\perp S_\perp^\top.$$

Let

$$\tilde c(P, L) = S_\perp^\top (\operatorname{vec}(PL) - \operatorname{vec}(S_0)).$$

Then

$$\|c(P, L)\|_2^2 = \|\Pi_{S^\perp}(\operatorname{vec}(PL) - \operatorname{vec}(S_0))\|_2^2 = \|S_\perp S_\perp^\top(\operatorname{vec}(PL) - \operatorname{vec}(S_0))\|_2^2 = \|S_\perp^\top(\operatorname{vec}(PL) - \operatorname{vec}(S_0))\|_2^2 = \|\tilde c(P, L)\|_2^2. \qquad (18)$$

Note that from (18), it follows that

$$c(P, L) = 0 \iff \tilde c(P, L) = 0.$$
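The identity (18) can be checked numerically for any full column rank $S$, taking $S_\perp$ from a full QR factorization (a sketch, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(4)
S = rng.standard_normal((12, 5))  # any full column rank S

# Orthonormal basis S_perp of the orthogonal complement of image(S),
# taken from the trailing columns of a full QR factorization of S.
Q, _ = np.linalg.qr(S, mode="complete")
S_perp = Q[:, 5:]
Pi_perp = np.eye(12) - S @ np.linalg.solve(S.T @ S, S.T)

v = rng.standard_normal(12)
# ||Pi_perp v||_2 = ||S_perp^T v||_2, as in (18).
assert np.isclose(np.linalg.norm(Pi_perp @ v), np.linalg.norm(S_perp.T @ v))
```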

Similarly to (17), the Jacobian of the vector of constraints is

$$J_{\tilde c}(P, L) = S_\perp^\top \begin{bmatrix} L^\top \otimes I_m & I_n \otimes P \end{bmatrix} \in \mathbb{R}^{(mn - n_p) \times (mr + nr)}. \qquad (19)$$

#### 4.2.4 Independence of the constraint gradients

We will need the following lemma.

###### Lemma 4.

Let $P \in \mathbb{R}^{m \times r}$ and $L \in \mathbb{R}^{r \times n}$, then

1. $\operatorname{rank}\left(\begin{bmatrix} L^\top \otimes I_m & I_n \otimes P \end{bmatrix}\right) \le mr + nr - r^2$.

2. The equality holds if and only if $\operatorname{rank}(P) = \operatorname{rank}(L) = r$.

The proof is given in the appendix.

###### Remark 3 (The condition rank(P)=rank(L)=r).

The condition that $P$ and $L$ have rank $r$ is generically satisfied. Nongeneric cases would appear when the exact rank-$r$ problem does not have a solution, i.e., when the solution of (1) is of rank lower than $r$. In this case, Algorithm 1 can be modified to detect rank deficiency in $P$ and in $L$ and reduce their corresponding dimensions, so that the reduced matrices have full column and full row rank, respectively.

From (19) and Lemma 4 we have that

$$\operatorname{rank}(J_{\tilde c}(P, L)) \le \min(mn - n_p,\ mr + nr - r^2) \qquad (20)$$

and thus the following remark holds true.

###### Remark 4 (Necessary and sufficient condition for independence of the constraint gradients).

An assumption of Theorem 1 is that the constraint gradients are linearly independent in the feasible set, i.e., that $J_{\tilde c}(P, L)$ has full row rank and thus

$$\operatorname{rank}(J_{\tilde c}(P, L)) = mn - n_p. \qquad (21)$$

From (20) it follows that a necessary condition for (21) to hold is

$$mn - n_p \le mr + nr - r^2. \qquad (22)$$

This condition is often satisfied in practice, for example, for all Hankel structured approximations with rank reduction by one.

Generically, the condition (22) is also a sufficient condition for (21) to hold.

Apart from the case described in Remark 4, another extreme case in (20) may happen, namely that $\operatorname{rank}(J_{\tilde c}(P, L)) = mr + nr - r^2 < mn - n_p$.

###### Lemma 5.

If a limit point $(P, L)_*$ of the sequence generated by Algorithm 1 is feasible, and $\operatorname{rank}(J_{\tilde c}((P, L)_*)) = mr + nr - r^2 < mn - n_p$, then

1. there exists an affine subspace of dimension $(mn - n_p) - (mr + nr - r^2)$ such that any $\nu^*$ in it is a multiplier that satisfies the KKT conditions.

2. $\operatorname{rank}(J_{\tilde c}(P, L)) = mr + nr - r^2$ in a neighborhood of $(P, L)_*$.

The proof is given in the appendix.

###### Proposition 2.

If a limit point $(P, L)_*$ of the sequence generated by Algorithm 1 is feasible, and $\operatorname{rank}(J_{\tilde c}((P, L)_*)) = mr + nr - r^2 < mn - n_p$, then the limit point satisfies first order necessary conditions for (1).

###### Proof.

Lemma 5 implies that if $\operatorname{rank}(J_{\tilde c}((P, L)_*)) = mr + nr - r^2$, the limit point satisfies the so-called Constant Rank Constraint Qualification [1]. In this case the KKT conditions are necessary for the point to be a local minimum of the constrained problem, see [1]. ∎

Finally, it may happen that $\operatorname{rank}(J_{\tilde c}((P, L)_*)) < \min(mn - n_p,\ mr + nr - r^2)$. In this case, we conjecture the following.

###### Conjecture 1.

A feasible limit point satisfies the first order necessary conditions if the rank of Jacobian is constant in the neighborhood of that point.

## 5 Numerical experiments

In this section we apply the proposed algorithm on three different problems, namely, system identification, finding a common divisor of polynomials (with noisy coefficients), and symmetric tensor decomposition. Although these problems arise in completely different fields, are essentially different, and require different features (weights, fixed elements, or missing elements), we demonstrate the consistently good performance of Algorithm 1.

### 5.1 Related algorithms

In our Matlab simulations, we compare Algorithm 1 with Cadzow’s algorithm [5] and the kernel-based algorithm slra [22], which also aim to solve problem (1). The former is popular in signal processing applications due to its simplicity and the latter has been recently extended to work with missing data. We next briefly summarize the main ideas behind these algorithms.

Cadzow’s algorithm [5] consists of repeating the following two main steps

• “project” the current structured approximation to a low-rank matrix, e.g., using truncated SVD,

• project the current low-rank matrix to the space of structured matrices.

As shown in [9], Cadzow’s algorithm converges to a structured matrix of rank $r$, but not necessarily to a stationary point of the optimization problem.

The slra algorithm [31] is based on the kernel representation of the rank constraint, i.e.,

$$\operatorname{rank}(\hat D) \le r \iff R \hat D = 0 \ \text{ for some full row rank matrix } R \in \mathbb{R}^{(m - r) \times m}$$

and uses the variable projection method [11], i.e., a reformulation of the problem as inner and outer optimization, where the inner minimization admits an analytic solution. The outer problem is a nonlinear least squares problem and is solved by standard local optimization methods, e.g., the Levenberg–Marquardt method [24]. Generalization to problems with fixed and missing data is presented in [21]. For general affine structures, slra has a restriction on the possible values of the reduced rank $r$. Algorithm 1 overcomes this limitation.
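The kernel representation itself is easy to illustrate numerically: the left null space of a matrix of rank at most $r$ provides a full row rank $R$ annihilating it (a sketch, not the slra code):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, r = 5, 7, 3
D_hat = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank <= r

# Kernel representation: an orthonormal basis of the left null space of
# D_hat gives a full row rank R with R @ D_hat = 0.
U = np.linalg.svd(D_hat)[0]
R = U[:, r:].T  # (m - r) x m
assert np.allclose(R @ D_hat, 0)
assert np.linalg.matrix_rank(R) == m - r
```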

### 5.2 Autonomous system identification

In this section, we will use the fact that the proposed algorithm can handle weighted norms and missing elements. The considered matrices have Hankel structure.

#### Background

In system theory, a discrete-time autonomous linear time-invariant dynamical system [20] can be defined by a difference equation

$$\theta_0 y(t) + \theta_1 y(t+1) + \cdots + \theta_\ell y(t+\ell) = 0, \quad \text{for } t = 1, \ldots, T - \ell, \qquad (23)$$

where $\ell$ is the order of the system and $\theta = \begin{bmatrix} \theta_0 & \cdots & \theta_\ell \end{bmatrix}^\top \ne 0$ is a non-zero vector of model parameters. The problem of system identification is: estimate $\theta$, given a response $y(1), \ldots, y(T)$ of the system.

The difference equation (23) can equivalently be represented as

$$\begin{bmatrix} \theta_0 & \theta_1 & \cdots & \theta_\ell \end{bmatrix} \underbrace{\begin{bmatrix} y(1) & y(2) & \cdots & y(T-\ell) \\ y(2) & y(3) & \cdots & y(T-\ell+1) \\ \vdots & \vdots & & \vdots \\ y(\ell+1) & y(\ell+2) & \cdots & y(T) \end{bmatrix}}_{\mathcal{H}_{\ell+1}(y)} = 0. \qquad (24)$$

It follows from (24) that the Hankel matrix $\mathcal{H}_{\ell+1}(y)$ is rank deficient, i.e.,

$$\operatorname{rank}(\mathcal{H}_{\ell+1}(y)) \le \ell.$$
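This rank deficiency is easy to verify numerically for a sample second-order response (an illustrative choice, not the paper's simulation example):

```python
import numpy as np

# A 2nd-order autonomous LTI response: y(t) = 0.9^t * cos(pi*t/5) has
# poles 0.9 * exp(+-i*pi/5), so it satisfies a difference equation with
# ell = 2 and the Hankel matrix H_3(y) has rank at most 2.
T, ell = 50, 2
t = np.arange(T)
y = 0.9 ** t * np.cos(np.pi * t / 5)

H = np.array([[y[i + j] for j in range(T - ell)] for i in range(ell + 1)])
assert np.linalg.matrix_rank(H, tol=1e-8) <= ell
```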

In the more realistic noisy case however, (23) and (24) hold only approximately. The problem of identifying the system then reduces to finding a rank-$\ell$ approximation of $\mathcal{H}_{\ell+1}(y)$. The parameter vector $\theta$ can then be computed from the null space of the obtained approximation.

If enough samples are provided, which is usually the case in engineering applications, another possible reformulation of (23) is the following

$$\begin{bmatrix} \theta_0 & \theta_1 & \cdots & \theta_\ell & 0 \\ 0 & \theta_0 & \theta_1 & \cdots & \theta_\ell \end{bmatrix} \mathcal{H}_{\ell+2}(y) = 0. \qquad (25)$$

Compared to (24), the matrix $\mathcal{H}_{\ell+2}(y)$ in (25) has more rows and fewer columns, but its rank is still at most $\ell$. The rank deficiency of $\mathcal{H}_{\ell+2}(y)$ is however at least two, since the coefficient matrix in (25) has two rows. (This can also be concluded from the fact that there are now two linearly independent vectors in the null space of the matrix.) We can continue reshaping by adding rows and removing columns, e.g., until we get a square matrix if $T$ is odd or an almost square matrix if $T$ is even. This could be useful since there are indications that the truncated SVD of an (almost) square Hankel matrix relates to a better (initial) noise reduction. Note that for a large number of rows $m$ ($m \gg \ell$), the matrix $\mathcal{H}_m(y)$ would have low rank $\ell$ compared to its dimensions, which is the case the proposed algorithm aims for.

#### Example: system identification in Frobenius norm

In this example, we will use the fact that the proposed algorithm can work with weighted norms. In particular, in order to approximate a structured matrix with a low-rank structured matrix in Frobenius norm, i.e.,

 min^p ∥S(p)−S(^p)∥2F  subject to  rank(S(^p)) ≤ r,

we need to take the weight matrix from (6) and (10) to be a diagonal matrix with weights equal to the number of occurrences of each structure parameter in the structured matrix. We consider the Frobenius norm as a distance measure between the data matrix and the approximation to facilitate the comparison with Cadzow’s algorithm. In the next example, we illustrate the performance of the proposed algorithm with respect to the 2-norm of the parameter correction, ∥p−^p∥2.
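The identity behind this choice of weights — the squared Frobenius distance between two Hankel matrices equals a weighted squared 2-norm of the parameter difference, with weights counting occurrences — can be verified with a short sketch (the helper and dimensions are illustrative):

```python
import numpy as np

def hankel_matrix(p, m):
    """m x (len(p)-m+1) Hankel matrix with (i, j) entry p[i + j]."""
    return np.array([[p[i + j] for j in range(len(p) - m + 1)] for i in range(m)])

m, T = 4, 10
rng = np.random.default_rng(0)
p, p_hat = rng.standard_normal(T), rng.standard_normal(T)

# w[k] = number of entries of the m x (T-m+1) Hankel matrix containing p[k]
n = T - m + 1
w = np.array([sum(1 for i in range(m) for j in range(n) if i + j == k)
              for k in range(T)])

lhs = np.linalg.norm(hankel_matrix(p, m) - hankel_matrix(p_hat, m), 'fro')**2
rhs = np.sum(w * (p - p_hat)**2)
print(lhs, rhs)   # the two values coincide
```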

The considered true (noiseless) signal is the sum of the following two exponentially modulated cosines

 y0(t) = y0,1(t) + y0,2(t),  y0,1(t) = 0.9^t cos(π/5 t),  y0,2(t) = (1/5) 1.05^t cos(π/12 t + π/4), (26)

shown in Figure 2.

The corresponding Hankel matrix has rank 4, since each exponentially modulated cosine contributes rank two. We added noise in the following way

 y(t) = y0(t) + 0.2 (e(t)/∥e∥2) ∥y0∥2, (27)

where the elements e(t) of the vector e were drawn independently from the normal distribution with zero mean and unit standard deviation. The added noise increases the rank of the Hankel matrix, so a rank-4 approximation has to be computed. We denote the approximations by ^y.
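A sketch of the noise model (26)–(27), assuming NumPy's default random generator (the length T = 50 is an assumption for illustration); the normalization by ∥e∥2 fixes the relative noise level at exactly 20%:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(1, 51)
y0 = 0.9**t * np.cos(np.pi / 5 * t) \
     + (1 / 5) * 1.05**t * np.cos(np.pi / 12 * t + np.pi / 4)

e = rng.standard_normal(len(y0))
y = y0 + 0.2 * e / np.linalg.norm(e) * np.linalg.norm(y0)

# the normalization fixes the relative error: ||y - y0|| / ||y0|| = 0.2
print(np.linalg.norm(y - y0) / np.linalg.norm(y0))
```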

We ran Algorithm 1, slra and Cadzow’s algorithm. To facilitate the comparison with slra, we first used the Hankel matrix with ℓ+1 = 5 rows, as in (24). The initial approximation was obtained using the truncated SVD. Since slra is sensitive to the initial approximation, in addition to its default initialization, we also ran slra starting from the solution obtained by Kung’s method [16] (row “Kung slra” in Table 2), which is a heuristic method for Hankel structured low-rank approximation.

Even after many iterations, Cadzow’s algorithm did not converge: the rank of its structured approximation remained above the target rank 4, as indicated by a non-negligible smallest singular value of the approximation. The solution of Algorithm 1 had the smallest approximation error, see Table 2.

The error of the initial approximation reported in Table 2 is computed by finding the closest structured matrix having the same kernel as the truncated SVD approximation. This is achieved by solving the inner minimization problem [22, eq. (f(R))]. The computed trajectories (for one run) are presented in Figure 3.

In a second experiment, we considered the almost square Hankel matrix and ran the algorithms again. The initial approximation was again obtained using the truncated SVD. Note that slra can only be run on the matrix from (24), but it can profit from the initial approximation computed on the almost square matrix, by running slra after Kung’s method. The result obtained by slra when initialized with Kung’s method and the result obtained by Algorithm 1 were similar to each other. Cadzow’s algorithm also performed well in this experiment. The numerical errors are presented in Table 3. The computed trajectories (for one run) are presented in Figure 4.

Different realizations of the example lead to slightly different numerical results. However, the main conclusions of this example stay the same.

#### Example: system identification with missing data

In this example, we illustrate the fact that the proposed algorithm can work with matrices having missing elements. We continue with the above example, but since Cadzow’s algorithm cannot be applied directly in the missing data case, this time the objective function is the more standard parameter norm ∥p−^p∥2.

Since the true Hankel matrix has rank 4, standard algorithms need a minimal number of consecutive data points for identification. In the example below, however, we removed data points at regular intervals, so these algorithms cannot be applied directly. Algorithm 1 and slra, however, can be used. The initial guess for each missing value was obtained by averaging its two neighboring data points, after which the initial approximation for the algorithms was obtained by the truncated SVD.
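The neighbor-averaging initialization can be sketched as follows (a hypothetical `fill_missing` helper; it assumes isolated interior gaps, as in the experiment where single samples are removed at intervals):

```python
import numpy as np

def fill_missing(y):
    """Initialize each missing sample (marked NaN) by the average of its two
    neighbors; assumes isolated, interior gaps."""
    y = np.array(y, dtype=float)
    for k in np.flatnonzero(np.isnan(y)):
        y[k] = 0.5 * (y[k - 1] + y[k + 1])
    return y

y = [1.0, 2.0, np.nan, 4.0, 5.0, np.nan, 7.0]
print(fill_missing(y))   # -> [1. 2. 3. 4. 5. 6. 7.]
```

This only provides a rough starting point; the missing values are subsequently refined by the optimization itself.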

We again ran two experiments, with the matrix from (24) and with the almost square matrix, respectively. The numerical errors are presented in Table 4 and Table 5, respectively. The trajectories obtained by Algorithm 1 and slra (initialized with the solution of Kung’s algorithm) are given in Figure 5 and Figure 6, respectively.

In the first experiment, Algorithm 1 had the smallest approximation error. In the second, the results of Algorithm 1 and slra (initialized with Kung’s method) were similar, although Algorithm 1 was still better.

Different realizations of the example lead to slightly different numerical results. We also observed that for smaller noise variance, slra and Algorithm 1 compute the same solution, but for higher noise levels Algorithm 1 is generally more robust.

### 5.3 Approximate common divisor

Another application of structured low-rank approximation, and thus of Algorithm 1, is finding approximate common divisors of a set of polynomials. Existence of a nontrivial common divisor is a nongeneric property: given a noisy observation of the polynomials’ coefficients (or due to round-off errors when storing the exact coefficients in finite precision arithmetic), the polynomials have a nontrivial common divisor with probability zero. Assuming that the noise-free polynomials have a common divisor of known degree, the aim of the approximate common divisor problem is to estimate that divisor from the noisy data. The problem can be formulated and solved as a Sylvester structured low-rank approximation problem, see [32]. Since the Sylvester matrix has fixed zero elements, this example uses the feature of Algorithm 1 to handle structured matrices with fixed elements.

#### Background

For a polynomial

 a(z)=a0+a1z+⋯+anzn,

define the multiplication matrix Sn(a), a banded Toeplitz matrix whose rows contain consecutively shifted copies of the coefficient vector (a0, a1, …, an).

We consider three polynomials a, b, and c, and for simplicity let them be of the same degree n. A basic result in computer algebra, see, e.g., [15], is that a, b, and c have a nontrivial common divisor if and only if the generalized Sylvester matrix

 ⎡⎢⎣Sn(b)Sn(c)Sn(a)00Sn(a)⎤⎥⎦ (28)

is rank deficient. Alternatively, one can consider another type of generalized Sylvester matrix [14]

 ⎡⎢⎣Sn(a)Sn(b)Sn(c)⎤⎥⎦, (29)

whose rank deficiency is equal to the degree of the greatest common divisor of a, b, and c. Formulation (29) is more compact than the one in (28), especially if the number of polynomials is large. These results generalize to an arbitrary number of polynomials of possibly different degrees and to an arbitrary degree of the required common divisor.

In the case of inexact coefficients, the generalized Sylvester matrices are generically of full rank. The problem of finding an approximate common divisor is then transformed into the problem of approximating the matrix in (28) or in (29) by a low-rank matrix with the same structure. This can be done with Algorithm 1 and with the alternative method slra. For simplicity, in our example we take three polynomials of degree two and a desired (greatest) common divisor of degree one (one common root).
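The rank statement behind (29) can be checked numerically. The sketch below uses one common convention for the multiplication matrix (shifted rows of coefficients; the exact dimensions used in the paper may differ) and three degree-2 polynomials with the single common root z = 1, so the stacked matrix should have rank deficiency one:

```python
import numpy as np

def mult_matrix(a, k):
    """k x (deg(a) + k) multiplication matrix: row i contains the coefficient
    vector of a(z) shifted i positions to the right."""
    n = len(a) - 1
    S = np.zeros((k, n + k))
    for i in range(k):
        S[i, i:i + n + 1] = a
    return S

# three degree-2 polynomials (coefficients in increasing powers of z)
a = [5, -6, 1]   # (1 - z)(5 - z)
b = [2, -3, 1]   # (1 - z)(2 - z)
c = [3, -4, 1]   # (1 - z)(3 - z)

# stacked generalized Sylvester matrix in the spirit of (29), here 6 x 4
S = np.vstack([mult_matrix(p, 2) for p in (a, b, c)])
print(S.shape, np.linalg.matrix_rank(S))   # rank 4 - 1 = 3: deficiency = GCD degree
```

The common root z = 1 produces the shared null vector (1, 1, 1, 1), which is exactly why the rank drops by the GCD degree.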

#### Example

Let

 a(z)=5−6z+z2=(1−z)(5