Polar n-Complex and n-Bicomplex Singular Value Decomposition and Principal Component Pursuit


Tak-Shing T. Chan and Yi-Hsuan Yang

Manuscript received August 26, 2015; revised May 26, 2016 and July 16, 2016; accepted September 3, 2016. This work was supported by a grant from the Ministry of Science and Technology under the contract MOST102-2221-E-001-004-MY3 and the Academia Sinica Career Development Program. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Masahiro Yukawa. The authors are with the Research Center for Information Technology Innovation, Academia Sinica, Taipei 11564, Taiwan (e-mail: takshingchan@citi.sinica.edu.tw; yang@citi.sinica.edu.tw). Digital Object Identifier 10.1109/TSP.2016.2612171
Abstract

Informed by recent work on tensor singular value decomposition and circulant algebra matrices, this paper presents a new theoretical bridge that unifies the hypercomplex and tensor-based approaches to singular value decomposition and robust principal component analysis. We begin our work by extending the principal component pursuit to Olariu's polar n-complex numbers as well as their bicomplex counterparts. In so doing, we have derived the polar n-complex and n-bicomplex proximity operators for both the $\ell_1$- and trace-norm regularizers, which can be used by proximal optimization methods such as the alternating direction method of multipliers. Experimental results on two sets of audio data show that our algebraically informed formulation outperforms tensor robust principal component analysis. We conclude with the message that an informed definition of the trace norm can bridge the gap between the hypercomplex and tensor-based approaches. Our approach can be seen as a general methodology for generating other principal component pursuit algorithms with proper algebraic structures.

Index Terms: Hypercomplex, tensors, singular value decomposition, principal component, pursuit algorithms.

I. Introduction

The robust principal component analysis (RPCA) [1] has received a lot of attention lately in many application areas of signal processing [2, 3, 4, 5]. The ideal form of RPCA decomposes the input $X$ into a low-rank matrix $L$ plus a sparse matrix $S$:

$\min_{L,\,S}\ \operatorname{rank}(L) + \lambda \|S\|_0 \quad \text{subject to} \quad L + S = X \qquad (1)$

where $\|\cdot\|_0$ returns the number of nonzero matrix elements. Owing to the NP-hardness of the above formulation, the principal component pursuit (PCP) [1] has been proposed to solve this relaxed problem instead [6]:

$\min_{L,\,S}\ \|L\|_* + \lambda \|S\|_1 \quad \text{subject to} \quad L + S = X \qquad (2)$

where $\|L\|_*$ is the trace norm (sum of the singular values), $\|S\|_1$ is the entrywise $\ell_1$-norm, and $\lambda$ can be set to $k/\sqrt{\max(m,n)}$ for an $m \times n$ input, where $k$ is a positive parameter [1, 2]. The trace norm and the $\ell_1$-norm are the tightest convex relaxations of the rank and the $\ell_0$-norm, respectively. Under somewhat general conditions [1], PCP with $k = 1$ has a high probability of exact recovery, though $k$ can be tuned if the conditions are not met.
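As a concrete reference point, the following minimal Python sketch (our own illustration, not the authors' implementation) shows the two proximity operators that drive most real-valued PCP solvers, soft-thresholding for the $\ell_1$ term and singular value thresholding for the trace norm, wired into a bare-bones ADMM loop with the default weight obtained from $k = 1$; the step-size heuristic, iteration count, and lack of a stopping rule are assumptions made to keep the sketch short.

```python
import numpy as np

def soft_threshold(X, tau):
    """Proximity operator of tau * ||.||_1 (entrywise shrinkage)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: proximity operator of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def pcp(M, k=1.0, n_iter=200):
    """Bare-bones ADMM sketch for min ||L||_* + lam*||S||_1  s.t.  L + S = M."""
    m, n = M.shape
    lam = k / np.sqrt(max(m, n))                    # default weight with k = 1
    mu = M.size / (4.0 * np.abs(M).sum() + 1e-12)   # assumed step-size heuristic
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                            # scaled dual variable
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = soft_threshold(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
    return L, S
```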

Despite its success, one glaring omission from the original PCP is the lack of complex (and hypercomplex) formulations. In numerous signal processing domains, the input phase has a significant meaning. For example, in parametric spatial audio, spectrograms have not only spectral phases but inter-channel phases as well. For that reason alone, we have recently extended the PCP to the complex and the quaternionic cases [7]. However, there exist inputs with dimensionality greater than four, such as microphone array data, surveillance video from multiple cameras, or electroencephalogram (EEG) signals, which exceed the capability of quaternions. These signals may instead be represented by n-dimensional hypercomplex numbers, defined as [8]

$a = a_0 + a_1 i_1 + a_2 i_2 + \cdots + a_{n-1} i_{n-1}, \qquad (3)$

where $a_0, a_1, \ldots, a_{n-1} \in \mathbb{R}$ and $i_1, i_2, \ldots, i_{n-1}$ are the imaginary units. Products of imaginary units are defined by an arbitrary multiplication table, and multiplication follows the distributive rule [8]. If we impose the multiplication rules

$i_p^2 = \pm 1, \qquad i_p i_q = -i_q i_p \quad (p \neq q), \qquad (4)$

and extend the algebra to include all combinations of imaginary units (formally known as multivectors):

$a = a_0 + \sum_{p} a_p\, i_p + \sum_{p<q} a_{pq}\, i_p i_q + \cdots + a_{12\cdots(n-1)}\, i_1 i_2 \cdots i_{n-1}, \qquad (5)$

then we have a Clifford algebra [9]. For example, the real, complex, and quaternion algebras are all Clifford algebras. Yet, previously, Alfsmann [10] suggested two families of $2^N$-dimensional hypercomplex numbers suitable for signal processing and argued for their superiority over Clifford algebras. One family starts from the two-dimensional hyperbolic numbers and the other one starts from the four-dimensional tessarines (footnote 1: hyperbolic numbers are represented by $x + y\,\mathrm{j}$, where $x, y \in \mathbb{R}$ and $\mathrm{j}^2 = 1$ [10]; tessarines are almost identical except that $x, y \in \mathbb{C}$ [10]), with dimensionality doubling up from there. Although initially attractive, the $2^N$-dimensional restriction (which also affects Clifford algebras) seems a bit limiting. For instance, if we have 100 channels to process, we are forced to use 128 dimensions (wasting 28). On the other hand, tensors can have arbitrary dimensions, but traditionally they do not possess rich algebraic structures. Fortunately, recent work on the tensor singular value decomposition (SVD) [11], which the authors call the t-SVD, has begun to impose more structure on tensors [12, 13, 14]. Furthermore, a tensor PCP formulation based on the t-SVD has also been proposed lately [15]. Most relevantly, Braman [12] has suggested investigating the relationship between the t-SVD and Olariu's [16] n-complex numbers (for arbitrary n). This is exactly what we need, yet the actual work is not forthcoming. So we have decided to begin our investigation with Olariu's polar n-complex numbers. Of special note is Gleich's work on the circulant algebra [17], which is isomorphic to Olariu's polar n-complex numbers. This observation simplifies our current work significantly. Nevertheless, the existing tensor PCP [15] employs an ad hoc tensor nuclear norm, which lacks algebraic validity. So, in this paper, we remedy this gap by formulating the first proper n-dimensional PCP algorithm using the polar n-complex algebra.

Our contributions in this paper are twofold. First, we have extended PCP to the polar n-complex algebra and the polar n-bicomplex algebra (defined in Section III), via: 1) properly exploiting the circulant isomorphism for the polar n-complex numbers; 2) extending the polar n-complex algebra to a new polar n-bicomplex algebra; and 3) deriving the proximal operators for both the polar n-complex and n-bicomplex matrices by leveraging the aforementioned isomorphism. Second, we have provided a novel hypercomplex framework for PCP where algebraic structures play a central role.

This paper is organized as follows. In Section II, we review polar n-complex matrices and their properties. We extend this to the polar n-bicomplex case in Section III. This leads to the polar n-complex and n-bicomplex PCP in Section IV. Experiments are conducted in Sections V and VI to justify our approach. We conclude by describing how our work provides a new direction for future work in Section VII.

II. The Polar n-Complex Numbers

In this section we introduce polar n-complex matrices and their isomorphisms. These will be required in Section IV for the formulation of polar n-complex PCP. Please note that the value of n here does not have to be a power of two.

II-A Background

Olariu's [16] polar n-complex numbers, which we denote by , are n-dimensional extensions of the complex algebra, defined as

$a = a_0 e_0 + a_1 e_1 + \cdots + a_{n-1} e_{n-1}, \qquad (6)$

where $a_0, a_1, \ldots, a_{n-1} \in \mathbb{R}$. The first imaginary unit is defined to be $e_0 = 1$, whereas $e_1, \ldots, e_{n-1}$ are defined by the multiplication table [16]

$e_p e_q = e_{(p+q) \bmod n}, \qquad 0 \le p, q \le n-1. \qquad (7)$

We call $a_0$ the real part of $a$ and $a_k$ the imaginary parts of $a$ for $0 \le k \le n-1$. We remark that our imaginary index starts with $0$, which includes the real part, to facilitate a shorter definition of equations such as (34) and (41). Multiplication follows the usual associative and commutative rules [16]. The inverse of $a$ is the number $a^{-1}$ such that $a a^{-1} = 1$ [16]. Olariu named it the polar n-complex algebra because it is motivated by the polar representation of a complex number [16], where a complex number is represented geometrically by its modulus and polar angle. Likewise, the polar n-complex number in (6) can be represented by its modulus

$|a| = \sqrt{a_0^2 + a_1^2 + \cdots + a_{n-1}^2}, \qquad (8)$

together with azimuthal angles, planar angles, and one polar angle (two if n is even), totaling $n-1$ angles [16]. To calculate these angles, let $\tilde{a}$ be the discrete Fourier transform (DFT) of the coefficient vector $(a_0, \ldots, a_{n-1})^T$, defined by

$\tilde{a} = F a, \qquad (9)$

where $\omega = e^{-2\pi \mathrm{i}/n}$ is a principal $n$th root of unity and

$F = \frac{1}{\sqrt{n}}\,\big[\omega^{jk}\big]_{j,k=0}^{n-1}, \qquad (10)$

which is unitary, i.e., . For , the azimuthal angles can be calculated from [16]

(11)

where . Note that we have reversed the sign of the angles as Olariu was a physicist so his DFT is our inverse DFT. Furthermore, for , the planar angles are defined by [16]

(12)

where . The polar angle is defined as [16]

(13)

where . Finally, for even n, there is an additional polar angle [16],

(14)

where . We can uniquely recover the polar n-complex number given its modulus and the angles defined above (footnote 2: exact formulas can be found in [16, pp. 212–216], especially (6.80), (6.81), (6.103), and (6.104)). We remark that Olariu's choice of reference for the planar and polar angles is convenient but somewhat arbitrary. More importantly, the polar n-complex numbers are ring-isomorphic (footnote 3: a ring isomorphism is a bijective map $f$ such that $f(a+b) = f(a) + f(b)$, $f(ab) = f(a)f(b)$, and $f(1) = 1$ for all $a$ and $b$) to the following matrix representation [16]:

$\begin{bmatrix} a_0 & a_{n-1} & \cdots & a_1 \\ a_1 & a_0 & \cdots & a_2 \\ \vdots & \vdots & \ddots & \vdots \\ a_{n-1} & a_{n-2} & \cdots & a_0 \end{bmatrix} \qquad (15)$

which is a circulant matrix (footnote 4: a circulant matrix is a matrix where each column is a cyclic shift of its previous column, so that it is diagonalizable by the DFT [18]). More concisely, we can write . This means that polar n-complex multiplication is equivalent to circular convolution. Due to the circular convolution theorem, it can be implemented efficiently in the Fourier domain [17]:

$\widetilde{a \circ b} = \sqrt{n}\,\big(\tilde{a} \odot \tilde{b}\big), \qquad (16)$

where $\circ$ denotes circular convolution and $\odot$ is the Hadamard product. The isomorphism in (15) implies [17]:

(17)
(18)
(19)
(20)

for . From these properties it becomes natural to define the polar n-complex conjugation by [17]

(21)

where denotes the conjugate transpose of . This allows us to propose a new scalar product inspired by its quaternionic counterpart [19],

(22)

which we will use later for the Frobenius norm of the polar n-complex numbers. Note that this differs from the usual definition [17] because we need the real restriction for the desirable property $\langle a, a \rangle = |a|^2$. To wit, observe that the real part of $\bar{a} \circ a$ equals $\sum_{k=0}^{n-1} a_k^2$ for arbitrary $a$; thus $\langle a, a \rangle$ is the standard inner product between the underlying elements. The same result can also be obtained from the matrix representation. In other words, given two polar n-complex numbers $a$ and $b$, we get

$\langle a, b \rangle = \sum_{k=0}^{n-1} a_k b_k. \qquad (23)$
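The following numpy sketch spells out these facts for coefficient vectors (the length-n storage layout, symbols, and FFT convention are our assumptions): multiplication is a circular convolution implementable with FFTs, it matches the circulant matrix representation of (15), and the real part of the conjugate product recovers the Euclidean inner product claimed above.

```python
import numpy as np
from scipy.linalg import circulant

def ncx_mul(a, b):
    """Polar n-complex product of coefficient vectors a and b, computed as a
    circular convolution via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ncx_conj(a):
    """Polar n-complex conjugate: conjugate the spectrum (i.e. circ(a)^H)."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a))))

rng = np.random.default_rng(0)
a, b = rng.standard_normal(6), rng.standard_normal(6)

# Multiplication agrees with the circulant matrix representation of (15).
print(np.allclose(ncx_mul(a, b), circulant(a) @ b))      # True

# The real part (zeroth coefficient) of conj(a) * b recovers the Euclidean
# inner product of the coefficient vectors, as claimed above.
print(np.isclose(ncx_mul(ncx_conj(a), b)[0], a @ b))     # True
```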

An alternative way of looking at the isomorphism in (15) is to consider the circulant matrix as a sum [20],

(24)

where

(25)

following the convention that . It is trivial to show that [20]. Hence the isomorphism is immediately obvious. Recall that a group of imaginary units is called cyclic if we can use a single basis element to generate the entire algebra, so the algebra in (24) is also known as a cyclic algebra [21].

The circulant isomorphism helps us to utilize recent literature on circulant algebra matrices [17], which simplifies our work in the next subsection. The circulant algebra in [17] breaks the modulus into $n$ pieces such that the original number can be uniquely recovered without the planar and polar angles. However, for the $\ell_1$-norm at least, we need a single number for minimization purposes. Moreover, although our goal is phase preservation, we do not need to calculate the angles explicitly for the PCP problem. Consequently, we will stick with the original definition in (8).

II-B Polar n-Complex Matrices and Their Isomorphisms

We denote the set of matrices with polar n-complex entries by . For a polar n-complex matrix , we define its adjoint matrix via [17]:

(26)
TABLE I: Step-by-step illustration of the cft; see (10), (26), and (27) for definitions. In general, due to the properties of the circulant blocks, the cft can block diagonalize the adjoint matrix of any polar n-complex matrix.

We will now show that the -linear map operates in an identical manner as the -linear map .

Theorem 1.

Let . Then the following holds:

  1. if ;

  2. if ;

  3. if ;

  4. ;

  5. if it exists.

Proof.

1, 3, and 4 can be verified by direct substitution. 5 can be derived from 1–2 via the equality . 2 can be proven using (15) and (18):

In other words, the adjoint matrix is an isomorphic representation of the polar n-complex matrix. ∎

The above isomorphism was originally established for circulant matrix-vector multiplication [17], which we have just extended to the case of matrix-matrix multiplication. This isomorphism simplifies our work both theoretically and experimentally by allowing us to switch to the adjoint matrix representation where it is more convenient.
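To make the adjoint construction concrete, here is a small numpy sketch (our own illustration; the m x p x n coefficient layout and the block ordering are assumptions, not the paper's notation) that builds a block-circulant adjoint of a polar n-complex matrix and checks that matrix-matrix products commute with the adjoint map, mirroring Theorem 1.

```python
import numpy as np
from scipy.linalg import circulant

def adjoint(A):
    """Block-circulant adjoint of an m x p polar n-complex matrix stored as
    an m x p x n array of real coefficients (assumed storage layout)."""
    m, p, n = A.shape
    out = np.zeros((m * n, p * n))
    for i in range(m):
        for j in range(p):
            out[i*n:(i+1)*n, j*n:(j+1)*n] = circulant(A[i, j, :])
    return out

def ncx_matmul(A, B):
    """Polar n-complex matrix product: blockwise products in the Fourier
    domain, i.e. tube-wise circular convolutions summed over the inner index."""
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum('ikt,kjt->ijt', Ah, Bh)
    return np.real(np.fft.ifft(Ch, axis=2))

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3, 4))
B = rng.standard_normal((3, 2, 4))
# The adjoint map is multiplicative, as in Theorem 1.
print(np.allclose(adjoint(ncx_matmul(A, B)), adjoint(A) @ adjoint(B)))  # True
```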

II-C Singular Value Decomposition

For the SVD of a polar n-complex matrix, we first define the stride-by-n permutation matrix [22] by:

(27)

for . This is equivalent to, but more succinct than, the standard definition in the literature [22]. The stride-by-n permutation greatly simplifies the definition of the two-dimensional shuffle in the following. We define the circulant Fourier transform (CFT) and its inverse (ICFT) in the same way as [17]:

(28)
(29)

where the shuffle rearranges a matrix containing diagonal blocks into a block diagonal matrix containing $n$ blocks. Please refer to Table I to see this shuffle in action. The purpose of the CFT is to block diagonalize the adjoint matrix into the following form [17]:

(30)

while the ICFT inverts this operation. Here, the blocks can be understood as the eigenvalues of the input, produced in Fourier-transform order, as noted in [17]. The SVD of a polar n-complex matrix can then be performed blockwise through the SVD of its block-diagonalized adjoint [11]:

(31)

then we can apply the ICFT to the blockwise factors to obtain the SVD of the original matrix, where the left and right factors are unitary [11, 17]. This is equivalent to the t-SVD in tensor signal processing (see Algorithm 1) [11], provided that we store the polar n-complex matrix in a real tensor (footnote 5: by convention, we denote tensors with calligraphic letters. For a three-dimensional tensor, a fiber is a one-dimensional subarray defined by fixing two of the indices, whereas a slice is a two-dimensional subarray defined by fixing one of the indices [23]. If we indicate all elements of a one-dimensional subarray using the Matlab colon notation, then the three kinds of fibers are called the column, row, and tube fibers, respectively [23]. Similarly, the three kinds of slices are called the horizontal, lateral, and frontal slices, respectively [23]. Notably, Kilmer, Martin, and Perrone [11] reinterpret a third-order tensor as a matrix of tubes; this is most relevant to our present work when polar n-complex numbers are seen as tubes); then the n-point DFT along all tubes is equivalent to the CFT. Matrix multiplication can also be done blockwise in the CFT domain, with the same scaling as before.

Input: A (an n1 × n2 × n real tensor)   // See footnote 5 for tensor notation.
Output: U, S, V
1: A_hat ← fft(A, [], 3)   // Applies the n-point DFT to each tube.
2: for k = 1, …, n do
3:    [U_hat(:,:,k), S_hat(:,:,k), V_hat(:,:,k)] ← svd(A_hat(:,:,k))   // SVD each frontal slice.
4: end for
5: U ← ifft(U_hat, [], 3), S ← ifft(S_hat, [], 3), V ← ifft(V_hat, [], 3)
Algorithm 1 t-SVD [11]
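For readers who prefer working code to pseudocode, the following numpy transcription is a minimal sketch of Algorithm 1 (the array layout, variable names, and the explicit conjugate-symmetry handling are our assumptions):

```python
import numpy as np

def t_svd(A):
    """Minimal t-SVD sketch mirroring Algorithm 1 (assumed layout: n1 x n2 x n,
    tubes along the last axis). Conjugate symmetry of the real input is
    enforced so that the returned factors are real up to rounding error."""
    n1, n2, n = A.shape
    Ahat = np.fft.fft(A, axis=2)
    Uhat = np.zeros((n1, n1, n), dtype=complex)
    Shat = np.zeros((n1, n2, n), dtype=complex)
    Vhat = np.zeros((n2, n2, n), dtype=complex)
    r = min(n1, n2)
    for k in range(n // 2 + 1):               # SVD each frontal slice
        M = Ahat[:, :, k]
        if k == 0 or 2 * k == n:              # self-conjugate slices are real
            M = M.real
        u, s, vh = np.linalg.svd(M)
        Uhat[:, :, k] = u
        Shat[np.arange(r), np.arange(r), k] = s
        Vhat[:, :, k] = vh.conj().T
        if 0 < k < (n + 1) // 2:              # remaining slices by conjugacy
            Uhat[:, :, n - k] = u.conj()
            Shat[np.arange(r), np.arange(r), n - k] = s
            Vhat[:, :, n - k] = vh.T
    back = lambda T: np.fft.ifft(T, axis=2).real
    return back(Uhat), back(Shat), back(Vhat)

# Blockwise sanity check in the Fourier domain.
A = np.random.default_rng(0).standard_normal((4, 3, 5))
U, S, V = t_svd(A)
Uh, Sh, Vh = (np.fft.fft(T, axis=2) for T in (U, S, V))
print(all(np.allclose(np.fft.fft(A, axis=2)[:, :, k],
                      Uh[:, :, k] @ Sh[:, :, k] @ Vh[:, :, k].conj().T)
          for k in range(5)))                 # True
```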

II-D Proposed Extensions

In order to study the phase angle between matrices, we define a new polar n-complex inner product as

(32)

and use it to induce the polar n-complex Frobenius norm:

(33)

We propose two further isomorphisms for polar n-complex matrices via and :

(34)
(35)

These are the polar n-complex matrix counterparts of the tensor unfolding operators (footnote 6: column unfolding reshapes the tensor into a matrix by mapping each tensor element into the corresponding matrix element [23]). We end this subsection by enumerating two elementary algebraic properties, which will come in handy when we investigate the trace norm later in Theorem 9. The proofs are given below for completeness.

Proposition 2.

If , then the following holds:

  1. ;

  2. .

where $\sigma_i$ are the singular values obtained after steps (30) and (31).

Proof.
  1. This is a direct consequence of (20) after observing that . From this we can say that our polar n-complex inner product is Euclidean. As a corollary we have .

  2. As the Frobenius norm is invariant under any unitary transformation [24], we can write . ∎

III. Extension to Polar n-Bicomplex Numbers

One problem with the real numbers is that equations such as $x^2 + 1 = 0$ have no real solutions; that is, they are not algebraically closed. This affects the polar n-complex numbers too, since their real and imaginary parts consist of real coefficients only. To impose algebraic closure for certain applications, we can go one step further and use complex coefficients instead. More specifically, we extend the polar n-complex algebra by allowing for complex coefficients in (6), such that

$a = a_0 e_0 + a_1 e_1 + \cdots + a_{n-1} e_{n-1}, \qquad (36)$

where $a_0, a_1, \ldots, a_{n-1} \in \mathbb{C}$. In other words, both the real and imaginary parts of the number now contain complex numbers (effectively doubling its dimensions). This constitutes our definition of the polar n-bicomplex numbers. The first imaginary unit is unchanged and satisfies the same multiplication table in (7). We can now write $\operatorname{Re}(a_0)$ for the real part (note the additional real-part operator) and $a_k$ for the imaginary parts for $0 \le k \le n-1$ (as before, the imaginary parts include the real part for notational convenience). The modulus then becomes

$|a| = \sqrt{|a_0|^2 + |a_1|^2 + \cdots + |a_{n-1}|^2}, \qquad (37)$

along with the same angles as in (11)–(14). For example, if , we have , , , , and . The polar n-bicomplex numbers are ring-isomorphic to the same matrix representation in (15), and have the same properties (17)–(20). The multiplication can still be done in the Fourier domain if desired. The polar n-bicomplex conjugation can be defined in the same manner as (21). Given our new definition of , the scalar product is:

(38)

Note that we still have $\langle a, a \rangle = |a|^2$, because the real part of $\bar{a} \circ a$ equals $\sum_{k=0}^{n-1} |a_k|^2$ for arbitrary $a$, which gives the Euclidean inner product (likewise for the matrix representation). So given two polar n-bicomplex numbers $a$ and $b$, we now have

$\langle a, b \rangle = \operatorname{Re} \sum_{k=0}^{n-1} \overline{a_k}\, b_k. \qquad (39)$

III-A Polar n-Bicomplex Matrices and Their Isomorphisms

Analogously, we denote the set of matrices with polar n-bicomplex entries by . The adjoint matrix of can be defined similarly via :

(40)

Next we are going to show that the -linear map operates in the same manner as the -linear map .

Theorem 3.

Let . Then we have:

  1. if ;

  2. if ;

  3. if ;

  4. ;

  5. if it exists.

Proof.

See Theorem 1. ∎

The polar n-bicomplex SVD, inner product, and Frobenius norm can be defined following (31), (32), and (33). The illustration in Table I still applies. The additional isomorphisms are defined via and :

(41)
(42)
Proposition 4.

If , then the following holds:

  1. ;

  2. ,

where are the singular values of .

Proof.

See Proposition 2. ∎

IV. Polar n-Complex and n-Bicomplex PCP

PCP algorithms [1, 2] are traditionally implemented by proximal optimization [25], which extends gradient projection to the nonsmooth case. Often, closed-form solutions for the proximity operators are available, like soft-thresholding [26] and singular value thresholding [27] in the real-valued case.

IV-A Equivalence to Real-Valued Proximal Methods

To fix our notation, recall that the proximity operator of a function is traditionally defined as [25]:

$\operatorname{prox}_f(v) = \operatorname*{arg\,min}_{x} \Big( f(x) + \tfrac{1}{2}\,\|x - v\|^2 \Big). \qquad (43)$

For polar n-complex or n-bicomplex matrices, we can use their real-isomorphic representations instead and adjust $f$ accordingly. As the quadratic term in (43) is invariant under this transformation, we can equivalently extend the domain of the proximity operator to polar n-complex or n-bicomplex matrices without adjusting it in the following. This equivalence establishes the validity of proximal minimization using polar n-complex and n-bicomplex matrices directly, without needing to convert to the real domain temporarily.
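As a toy illustration of this equivalence (our own example, for the ordinary complex case rather than the general algebras): applying the real-valued proximity operator of $\lambda\|\cdot\|$ to the (real, imaginary) pair of a complex scalar gives exactly the phase-preserving complex soft-thresholding.

```python
import numpy as np

lam = 0.3
z = 1.2 * np.exp(1j * 0.8)                       # a complex "entry" with phase 0.8
v = np.array([z.real, z.imag])                   # the same entry as a vector in R^2

prox_r2 = v * max(1.0 - lam / np.linalg.norm(v), 0.0)        # real-valued block prox
prox_c = max(abs(z) - lam, 0.0) * np.exp(1j * np.angle(z))   # complex soft-threshold
print(np.allclose(prox_r2, [prox_c.real, prox_c.imag]))      # True
```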

IV-B The Proximity Operator for the $\ell_1$ Norm

We deal with the $\ell_1$- and trace-norm regularizers in order.

Lemma 5 (Yuan and Lin [28]).

Let $\{\mathcal{G}_1, \ldots, \mathcal{G}_m\}$ be a partition of the index set $\{1, \ldots, N\}$. The proximity operator for the group lasso regularizer $f(x) = \lambda \sum_{i=1}^{m} \|x_{\mathcal{G}_i}\|$ is

$\big(\operatorname{prox}_f(v)\big)_{\mathcal{G}_i} = \max\!\Big(1 - \frac{\lambda}{\|v_{\mathcal{G}_i}\|},\, 0\Big)\, v_{\mathcal{G}_i}, \qquad i = 1, \ldots, m, \qquad (44)$

where $v_{\mathcal{G}_i}$ denotes the subvector of $v$ indexed by $\mathcal{G}_i$, $v \in \mathbb{R}^N$ is a real column vector, and $N$ is the sum of the sizes of the groups $\mathcal{G}_1, \ldots, \mathcal{G}_m$.

Proof.

This result is standard in sparse coding [28, 29]. ∎

The group lasso is a variant of sparse coding that promotes group sparsity, i.e., zeroing entire groups of variables at once or not at all. When we put the real and imaginary parts of a polar n-complex or n-bicomplex number in the same group, group sparsity makes sense, since a number cannot be zero unless all its constituent parts are zero, as in the next theorem.

Theorem 6.

The polar n-complex or n-bicomplex lasso

(45)

where is or , is equivalent to the group lasso

(46)

where is defined as .

Proof.

The proof is straightforward:

The first line invokes the definitions of the modulus in (8) and (37), while the second line is due to the proposed isomorphisms in (34) and (41). In other words, we have discovered a method to solve the novel polar n-complex or n-bicomplex lasso problem using real-valued group lasso solvers. By combining Lemma 5 and Theorem 6, we arrive at the main result of this subsection.

Corollary 7.

For the entrywise $\ell_1$-regularizer , where or , we may treat as a long hypercomplex vector of length without loss of generality. Simply assign each hypercomplex number to its own group , for all numbers, and we obtain the proximity operator for using (44):

(47)

where is or and . Here the modulus corresponds to the Euclidean norm in (44), and the grouping should follow the definition of the modulus for the respective algebra. Note how each entry corresponds to its real-isomorphic group here.
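A minimal sketch of this grouping in numpy (the storage layout, with one column of real coefficients per hypercomplex entry, is our assumption):

```python
import numpy as np

def group_soft_threshold(C, lam):
    """Proximity operator of lam * (sum of groupwise Euclidean norms), where
    each column of C holds the n real coefficients of one hypercomplex entry."""
    norms = np.linalg.norm(C, axis=0, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, np.finfo(float).tiny), 0.0)
    return C * scale

# Entries whose modulus is below lam are zeroed as a whole; larger entries
# are shrunk while their direction ("phase") is preserved.
C = np.array([[0.3, 2.0],
              [0.1, 1.0],
              [0.0, -2.0]])    # two hypercomplex entries with n = 3
print(group_soft_threshold(C, 1.0))
```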

IV-C The Proximity Operator for the Trace Norm

Next we will treat the trace-norm regularizer. We begin our proof by quoting a classic textbook inequality. In what follows, $\sigma_i(\cdot)$ denotes the singular values of its argument.

Lemma 8 (von Neumann [24]).

For any $A, B \in \mathbb{C}^{m \times p}$, the von Neumann trace inequality holds:

$\operatorname{Re}\operatorname{tr}\!\big(A^H B\big) \le \sum_{i} \sigma_i(A)\, \sigma_i(B). \qquad (48)$
Proof.

This is a standard textbook result [24]. ∎

Theorem 9.

For any two polar n-complex or n-bicomplex matrices, the following extension to the von Neumann inequality holds:

(49)
Proof.

This theorem embodies the key insight of this paper. Our novel discovery is that we can switch to the block-diagonalized CFT space to separate the sums and switch back:

where and , respectively. The second line is by Lemma 8, and the third line is due to the Cauchy-Schwarz inequality. The theorem then follows from Parseval's theorem. ∎

Theorem 10.

The proximity operator for the polar n-complex or n-bicomplex trace norm , assuming or , is:

(50)

where , is the SVD of with singular values , the absolute value of is computed entrywise, and is or .

Proof.

The proof follows [29] closely, except that Theorem 9 allows us to extend it to the polar n-complex and n-bicomplex cases. Starting from the Euclidean inner product identity , which is applicable because of Propositions 2 and 4, we have the following inequality: