# Fast, Parameter free Outlier Identification for Robust PCA

Vishnu Menon, Sheetal Kalyani
Department of Electrical Engineering, Indian Institute of Technology Madras
Chennai, India - 600036
Email: ee16s301@ee.iitm.ac.in, skalyani@ee.iitm.ac.in
###### Abstract

Robust PCA, the problem of PCA in the presence of outliers, has been extensively investigated in the last few years. Here we focus on robust PCA under the column-sparse outlier model. Existing methods for this model assume knowledge of either the dimension of the lower-dimensional subspace or the fraction of outliers in the system. However, in many applications this knowledge is not available. Motivated by this, we propose a parameter free outlier identification method for robust PCA which a) does not require knowledge of the outlier fraction, b) does not require knowledge of the dimension of the underlying subspace, and c) is computationally simple and fast. Further, analytical guarantees are derived for outlier identification, and the performance of the algorithm is compared with existing state-of-the-art methods.

## I Introduction

Principal Component Analysis (PCA) [1] is a very widely used technique in data analysis and dimensionality reduction. Classical PCA via the Singular Value Decomposition (SVD) of the data matrix M [2] is known to be very sensitive to extreme corruptions in the data [3], [4], [5], and hence robustifying the PCA process becomes a necessity. Robust PCA is typically an ill-posed problem, and it is of significant importance in a wide variety of fields such as computer vision, machine learning and survey data analysis. Recent survey papers [6], [7] outline the various existing techniques for robust subspace recovery and robust PCA. Of the numerous approaches to robust PCA over the years [8], [9], one way to model extreme corruptions in the given data matrix M is the following decomposition [10], [11], [12], [3]:

$$M = L + S, \qquad (1)$$

where S encapsulates all the corruptions and is assumed to be sparse, and L is low rank. Thus robust PCA becomes the process of decomposing the given matrix into a low-rank matrix plus a sparse matrix. The problem can be formulated as a convex program, using convex relaxation techniques inspired by compressed sensing [13], as [10], [3]

$$\underset{L,S}{\text{minimize}} \ \|L\|_* + \lambda\|S\|_1 \quad \text{s.t.} \quad M = L + S, \qquad (2)$$

where $\|L\|_*$ is the nuclear norm, computed as the sum of the singular values of a matrix, and $\|S\|_1$ is the $\ell_1$ norm of the vector formed by vectorizing the matrix. In [3], an optimal value for $\lambda$ was proposed, and theoretical guarantees for the exact recovery of the low-rank matrix were given under the popular uniform sparsity model. To solve (2), several algorithms were proposed, including [14], [15], [16], with the aim of reducing the complexity of the process and improving speed and performance. Non-convex algorithms, which are significantly faster than convex programs, have also been proposed for robust PCA [17], [18].
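As a concrete illustration (this is not the method proposed in this paper), the following is a minimal numpy sketch of the inexact augmented Lagrangian approach in the spirit of [14], [27] for solving (2); the stopping tolerance and the $\mu$-update rule are standard heuristics rather than values taken from the cited works.

```python
import numpy as np

def pcp_ialm(M, lam=None, tol=1e-7, max_iter=200):
    """Solve (2) by inexact ALM: alternate singular-value thresholding
    for the low-rank part L and entrywise soft thresholding for the
    sparse part S, with a growing penalty parameter mu."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))          # lambda suggested in [3]
    norm2 = np.linalg.svd(M, compute_uv=False)[0]
    Y = M / max(norm2, np.abs(M).max() / lam)   # common dual initialization
    mu, mu_bar, rho = 1.25 / norm2, 1.25e7 / norm2, 1.5
    S = np.zeros_like(M)
    normM = np.linalg.norm(M)
    for _ in range(max_iter):
        # L-update: singular value thresholding of (M - S + Y/mu)
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # S-update: entrywise soft thresholding
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        R = M - L - S                           # primal residual
        Y = Y + mu * R
        mu = min(rho * mu, mu_bar)
        if np.linalg.norm(R) / normM < tol:
            break
    return L, S
```

On a small synthetic instance within the exact-recovery regime (low rank, few large sparse corruptions), this sketch recovers L to high accuracy in a few tens of iterations.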

Another popular model, the one that we adopt in this paper, is the outlier model (throughout the paper, the term outlier model indicates the model where each column of M is either an inlier or an outlier). In this model each column of M is considered as a data point in $\mathbb{R}^n$. The points that lie in a lower-dimensional subspace of dimension $r$ are the inliers, and the others, which do not fit in this subspace, are the outliers. The model is still (1), but the matrix S is column sparse. Several methods have been developed over the years for the outlier model, such as methods based on influence functions [1], the re-weighted least squares method [19], methods based on random consensus (RANSAC) [20], and methods based on rotation-invariant norms [21]. In [4], a convex formulation of the process is given and iterative methods are proposed to solve it. The problem has also been extended to identifying outliers when the inlying points come from a union of subspaces, as in [22], [23]. Recent works have attempted to develop simple non-iterative algorithms for robust PCA in the outlier model [5]. Other methods that aim at solving robust PCA through this model include [24], [25], as well as thresholding-based works like [26]. Most of the proposed algorithms are iterative and complex, and/or require knowledge of either the outlier fraction or the dimension of the low-rank subspace, or have free parameters that need to be set according to the data statistics. What we aim to do in this paper is to propose an algorithm for the removal of outliers that is computationally simple, non-iterative and parameter free. Once the outliers are removed from the data in the outlier model, the classical methods for PCA may be applied for subspace recovery.

### I-A Related work

We briefly describe some of the key literature in the area of robust PCA and highlight how our proposed work differs from and/or is inspired by it. The popular work [3], assuming a uniform sparsity model on the corruptions, solves (2) using the Augmented Lagrange Multiplier (ALM) method [27], an iterative process that requires certain parameters to be set. Ours uses an outlier model, and hence we cannot compare our method with the work in [3]. In the outlier model, [4] proposes solving the following convex optimization problem for robust PCA:

$$\underset{L,S}{\text{minimize}} \ \|L\|_* + \lambda\|S\|_{1,2} \quad \text{s.t.} \quad M = L + S, \qquad (3)$$

where $\|S\|_{1,2}$ is the sum of the $\ell_2$ norms of the columns of the matrix. The paper also proposes a value for the parameter $\lambda$ that depends on the fraction of outliers in the system. While [4] assumes this fraction is known, in many cases it is unknown. Another recent work [26] that bases its algorithms on thresholding also requires knowledge of the target rank, i.e., the dimension of the subspace. The work in [22] analyzes the removal of outliers from a system where the inliers come from a union of subspaces and involves solving multiple optimization problems. While there exist many techniques and algorithms [28] for solving the optimization problem, most of them require certain parameters to be set and are iterative. After solving the optimization problem, a data point is classified as an inlier or outlier using thresholding in [22]. Although the proposed threshold in [22] is independent of the dimension of the subspace and the number of outliers, the underlying optimization problem is not parameter free, and since multiple optimization problems have to be solved, the procedure is also rather complex.

A fast algorithm for robust PCA was recently proposed in [5]; it examines the coherence of each data point with the other points and identifies outliers as those points that have low coherence with the rest. The authors give theoretical guarantees for the algorithm in the outlier model. In both of the methods proposed there for identifying the true subspace, knowledge of either the number of outliers or the dimension of the underlying subspace is required. The algorithm proposed for outlier removal in our work is, in spirit, a parameter free extension of the work in [5].

### I-B Motivation and proposed approach

The main motivation behind this work is to build parameter free algorithms for robust PCA. By parameter free we mean an algorithm that does not require knowledge of parameters such as the dimension of the true subspace or the number of outliers in the system, nor has a tuning parameter that must be adjusted according to the data. Tuning parameters present a challenge in any algorithm, as the user then has to decide how to set them, either through cross validation [29] or prior knowledge. Recently, there have been attempts to make algorithms parameter free in the paradigm of sparse signal recovery [30], [31], [32], [33], and these were shown to have results comparable to those obtained when the true parameters, such as the sparsity of the signal, were known. Motivated by this, in this paper we propose a parameter free algorithm for robust PCA. While there exists a vast literature on robust PCA algorithms, making parameter free variants of them would require novel modifications for each of them separately. In this work, we focus on obtaining a computationally efficient parameter free algorithm for outlier removal in robust PCA. The recent work in [5] is both simple and non-iterative, in the sense that it is a one-shot process that does not involve an iterative procedure to solve an optimization problem as in [22]. However, it is not parameter free. We propose an algorithm for robust PCA in the outlier model that does not require knowledge of the number of outliers or the dimension of the underlying subspace. Our key contributions are:

• We demonstrate, through theory and simulations, that the proposed parameter free algorithm identifies all the outliers correctly with high probability in the outlier model.

• The proposed method is threshold based, and we derive a threshold whose calculation requires knowledge of neither the dimension of the underlying subspace nor the number of outliers present. This threshold lower bounds the outlier scores with high probability, even in the presence of Gaussian noise, irrespective of the noise variance.

• Our algorithm guarantees outlier identification; however, it does not guarantee that all inliers are recovered. We derive theoretical conditions under which a good percentage of inliers are recovered, as well as conditions under which the algorithm fails to recover all the inlier points.

• We compare our results with existing algorithms for outlier removal and subspace recovery in terms of subspace recovery error and running time.

Our hope is that this algorithm will serve as a starting point for further progress in parameter free algorithms for robust PCA.

## II Problem setup and notations

We are given $N$ data points, each from an $n$-dimensional space $\mathbb{R}^n$, denoted by $m_1, m_2, \ldots, m_N$ and arranged in a data matrix $M \in \mathbb{R}^{n \times N}$. In this paper we work with normalized data points, namely $x_i = m_i/\|m_i\|_2$, where $\|\cdot\|_2$ denotes the $\ell_2$ norm. Let the normalized data matrix be denoted by $X$. Let $\mathbb{S}^{n-1}$ denote the unit hypersphere in $\mathbb{R}^n$, i.e., the boundary of the unit ball in $\mathbb{R}^n$; all the points satisfy $x_i \in \mathbb{S}^{n-1}$. We assume that out of the $N$ data points, $N_{in}$ of them lie in a low-dimensional subspace $\mathcal{U}$ of dimension $r$; these we refer to as inliers. The remaining $N_{out}$ points are randomly spread out in the $n$-dimensional space; these we call outliers. The parameters $\gamma = N_{out}/N$, the ratio of the number of outliers to the total number of data points, and $r$, the dimension of the true subspace, are unknown. Let $\mathcal{I}$ denote the index set of inliers and $\mathcal{O}$ denote the index set of outliers. Hence the matrix X can be segregated (up to a permutation of columns) as $X = [X_{in} \ X_{out}]$, where $X_{in}$ holds the inlier points and $X_{out}$ the outlier points. Note that $|\mathcal{I}| = N_{in}$ and $|\mathcal{O}| = N_{out}$, where $|\cdot|$ denotes the cardinality of a set. In this paper the following assumption is made on the data (same as Assumption 1 in [5]).

###### Assumption 1.

The subspace $\mathcal{U}$ is chosen uniformly at random from the set of all $r$-dimensional subspaces, and the inlier points are sampled uniformly at random from the intersection of $\mathcal{U}$ and $\mathbb{S}^{n-1}$. The outlier points are sampled uniformly at random from $\mathbb{S}^{n-1}$.
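For experiments, data satisfying Assumption 1 can be generated as in the sketch below (the function and variable names are our own). The uniformly random subspace is obtained by orthonormalizing a Gaussian matrix, and uniformity on the sphere follows from normalizing isotropic Gaussian vectors.

```python
import numpy as np

def sample_data(n, r, n_in, n_out, rng=None):
    """Generate a normalized data matrix X = [inliers | outliers]
    following Assumption 1: a uniformly random r-dimensional subspace,
    inliers uniform on its intersection with the unit sphere, and
    outliers uniform on the whole sphere S^{n-1}."""
    rng = rng or np.random.default_rng(0)
    # Uniformly random r-dim subspace: orthonormalize a Gaussian matrix
    U, _ = np.linalg.qr(rng.standard_normal((n, r)))
    inl = U @ rng.standard_normal((r, n_in))       # points inside the subspace
    out = rng.standard_normal((n, n_out))          # ambient Gaussian points
    X = np.concatenate([inl, out], axis=1)
    X /= np.linalg.norm(X, axis=0, keepdims=True)  # project onto S^{n-1}
    return X, U
```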

The problem we focus on is to remove the set of outliers from the matrix $X$, i.e., to find $\mathcal{O}$, without knowledge of either of the parameters $r$ and $\gamma$. We first list some essential definitions.

###### Definition 1.

Let $\theta_{ij}$ denote the principal angle between two data points $x_i$ and $x_j$, i.e.

$$\theta_{ij} = \cos^{-1}(x_i^T x_j), \quad \theta_{ij} \in [0, \pi]. \qquad (4)$$
###### Definition 2.

The acute angle between two points, denoted by $\phi_{ij}$, is defined as

$$\phi_{ij} = \cos^{-1}(|x_i^T x_j|) \qquad (5)$$
$$\phantom{\phi_{ij}} = \begin{cases} \theta_{ij} & \text{for } \theta_{ij} \le \pi/2 \\ \pi - \theta_{ij} & \text{for } \theta_{ij} > \pi/2 \end{cases} \qquad (6)$$

Clearly $\phi_{ij} \in [0, \pi/2]$. Also $\phi_{ij} = \phi_{ji}$.

###### Definition 3.

The minimum angle subtended by a point $x_i$, denoted as $q_i$, is given by

$$q_i = \min_{j = 1, 2, \ldots, N, \ j \ne i} \phi_{ij}. \qquad (7)$$

We also call $q_i$ the score of the point indexed by $i$ (in this paper, the "score" of a point is the minimum of the set of all acute angles that the point subtends with the other points). Let $q_{(1)} \ge q_{(2)} \ge \ldots \ge q_{(N)}$ represent the values of $q_i$ sorted in descending order, with the corresponding index set after sorting retained alongside. From here on, for convenience, we index the sorted scores by $(i)$, i.e., $q_{(i)}$ denotes the $i$-th largest score. We define:

###### Definition 4.

Denote the minimum outlier score as $q_O$; then $q_O = \min_{i \in \mathcal{O}} q_i$.

###### Definition 5.

Denote the maximum inlier score as $q_I$; then $q_I = \max_{i \in \mathcal{I}} q_i$.
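The quantities in Definitions 1-3 are straightforward to compute from the normalized data matrix; the following is a small numpy sketch (our own helper, not the paper's code) that returns all the scores $q_i$ at once.

```python
import numpy as np

def min_angle_scores(X):
    """Score q_i = min over j != i of phi_ij: the smallest acute angle
    that each (unit-norm) column of X subtends with the other columns."""
    G = np.clip(np.abs(X.T @ X), 0.0, 1.0)   # |x_i^T x_j|, clipped for safety
    np.fill_diagonal(G, 0.0)                 # exclude j = i
    # arccos is decreasing, so the minimum angle uses the maximum |cosine|
    return np.arccos(G.max(axis=1))
```

For example, for three unit vectors where the first two subtend an angle of 0.3 rad and the third is orthogonal to both, the scores are 0.3, 0.3 and $\pi/2$.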

Now we also define two properties that characterize an algorithm for outlier removal.

###### Definition 6 (Outlier Identification Property, OIP(α)).

An algorithm for outlier removal is said to have the Outlier Identification Property OIP($\alpha$) when the outlier index set estimate $\hat{\mathcal{O}}$ of the algorithm contains all the true outlier indices, i.e., $\mathcal{O} \subseteq \hat{\mathcal{O}}$, with probability at least $1 - \alpha$.

###### Definition 7 (Exact recovery Property, ERP(α)).

An algorithm for outlier removal is said to have the Exact Recovery Property ERP($\alpha$) when it recovers all the inlier points, i.e., $\hat{\mathcal{I}} = \mathcal{I}$, with probability at least $1 - \alpha$.

ERP($\alpha$) is a stronger condition than OIP($\alpha$). An algorithm which has ERP($\alpha$) also has OIP($\alpha$), since in that case $\mathcal{O} = \hat{\mathcal{O}}$ with probability at least $1 - \alpha$.

Other notations: $\Gamma(\cdot)$ denotes the gamma function. $\mathbb{E}[Y]$ denotes the expectation, $\mathrm{Var}(Y)$ the variance and $\sigma_Y$ the standard deviation of a random variable $Y$. $\mathcal{N}(\mu, \sigma^2)$ denotes a normal distribution with mean $\mu$ and variance $\sigma^2$. $\Phi(\cdot)$ denotes the standard normal cdf, $\Phi(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x} e^{-t^2/2}\,dt$.

## III Algorithm and features

We first discuss in brief the Coherence Pursuit (CoP) algorithm of [5], since our work can be regarded as a parameter free variant of CoP. The basic principle behind CoP [5] is that the inlier points are mutually coherent, while the outliers are not. Hence for each point, a metric is computed as the norm (either the $\ell_1$ or $\ell_2$ norm) of the vector whose components are the coherence values that the point has with all the other data points. The expectation is that once these metrics are sorted in descending order, the inliers come first, since the outlier metrics should be much smaller than the inlier metrics. The authors then propose two schemes to remove the outliers and recover the true underlying subspace. The first scheme removes the outliers and then performs PCA to obtain the true subspace; here the outlier removal process assumes knowledge of the maximum number of outliers in the system. The second scheme is an adaptive column sampling technique that generates an $r$-dimensional subspace from inlier points, under the assumption that the parameter $r$ is known.
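For concreteness, the CoP scoring step just described might be sketched as follows. This is our own minimal reading of [5] (the actual algorithm includes further subspace-recovery steps), with `norm_ord` selecting the $\ell_1$ or $\ell_2$ norm.

```python
import numpy as np

def cop_scores(X, norm_ord=2):
    """Coherence Pursuit scoring as we understand it from [5]: for each
    column of the normalized data matrix X, the norm of its vector of
    inner products (coherences) with all other columns.  Inliers, being
    mutually coherent, are expected to score high; outliers low."""
    G = X.T @ X                  # pairwise coherences x_i^T x_j
    np.fill_diagonal(G, 0.0)     # exclude self-coherence
    return np.linalg.norm(G, ord=norm_ord, axis=1)
```

On data from the outlier model, the inlier scores separate clearly above the outlier scores, which is what the sorting step of CoP relies on.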

The proposed scheme works with angles instead of coherence: the score we compute is the minimum acute angle subtended by a point, instead of the norm of coherences as in [5]. We develop a high-probability lower bound for the outlier scores that is independent of the unknown parameters, which makes our algorithm parameter free.

### III-A Basic principle and description

The folklore that "two high dimensional points are almost always orthogonal to each other" has been rigorously proved in [34], and this is what we exploit. The proposed algorithm works on the principle that outlier points subtend large acute angles (very close to $\pi/2$) with the rest of the points, whereas inlier points, since they lie in a lower-dimensional subspace, subtend smaller angles with other inlier points and hence have a much smaller score than an outlier. An example of the clear separation between the sorted score values $q_{(i)}$ can be seen in Fig 1 for various values of $n$, $r$, $N$ and $\gamma$, which shows that the property holds even at low inlier fractions. So the expectation is that when the scores are sorted in descending order, all the outlier indices appear first, and then, after a dip in $q_{(i)}$, the inlier indices appear. The proposed method exploits this property to obtain an outlier removal algorithm that is also parameter free. In the algorithm, we classify a point as an outlier whenever its score is greater than a threshold $\zeta$ given by

$$\zeta = \left[\frac{4\sqrt{\pi}\,\Gamma\!\left(\frac{n+1}{2}\right)\log\!\left(\frac{1}{1-\alpha/2}\right)}{N^2\,\Gamma\!\left(\frac{n}{2}\right)}\right]^{\frac{1}{n-1}}, \qquad (8)$$

which is such that $q_i > \zeta$, $\forall i \in \mathcal{O}$, with high probability, i.e., it acts as a lower bound on the outlier scores (see Theorem 1 for more details). The proposed algorithm is given in tabular form as Algorithm 1.

The key steps are,

i) First, the input data matrix is column normalized, and the acute angles subtended by each point with the other points, as in (5), are calculated for all data points.

ii) Then the score for each point is computed by taking the minimum of the angles subtended by that point, as in (7). The scores are then sorted in descending order to obtain $q_{(i)}$.

iii) The last index at which $q_{(i)}$ remains above the threshold $\zeta$ is noted as the boundary; all the sorted indices before and including this point are classified as outlier indices, and the sorted indices beyond this point are classified as inlier indices.

The algorithm, which we call Removal of Outliers using Minimum Angle (ROMA), is parameter free and requires only the data matrix as input.
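Putting the steps together, a compact sketch of ROMA might look as follows. This is our own implementation, not the authors' code; for simplicity it thresholds each score directly rather than locating the last boundary crossing of the sorted scores, which coincides with Algorithm 1 whenever the sorted scores cross $\zeta$ only once.

```python
import numpy as np
from math import lgamma, log, exp, pi

def roma(M, alpha=0.01):
    """Sketch of ROMA: normalize columns, compute minimum acute-angle
    scores, and flag as an outlier every column whose score exceeds
    the parameter-free threshold zeta of equation (8)."""
    n, N = M.shape
    X = M / np.linalg.norm(M, axis=0, keepdims=True)
    G = np.clip(np.abs(X.T @ X), 0.0, 1.0)
    np.fill_diagonal(G, 0.0)
    q = np.arccos(G.max(axis=1))               # scores q_i
    # zeta from (8), evaluated in log-space to avoid gamma overflow
    log_zeta = (log(4.0) + 0.5 * log(pi) + lgamma((n + 1) / 2.0)
                + log(log(1.0 / (1.0 - alpha / 2.0)))
                - 2.0 * log(N) - lgamma(n / 2.0)) / (n - 1.0)
    zeta = exp(log_zeta)
    outliers = np.where(q > zeta)[0]           # estimated outlier indices
    inliers = np.where(q <= zeta)[0]
    return inliers, outliers, zeta
```

On well-separated synthetic data from the outlier model, this sketch flags exactly the outlier columns.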

### III-B Feature - Parameter free

The main feature of the algorithm is that it has no dependence on the unknown parameters. As seen clearly, the threshold we propose requires only $N$ and $n$ for its computation and is also independent of the noise statistics. Once all the outliers have been identified and removed, the clean points can be used to obtain a low-rank representation using classical PCA by SVD. For the noiseless case, PCA also does not require knowledge of any parameter. In the presence of additive Gaussian noise in the data, there are several methods for selecting the number of principal components after SVD, such as BIC [35], the geometric AIC [36], and other recent methods proposed in [37], [38].
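The threshold in (8) involves ratios of gamma functions that overflow for even moderate $n$ if evaluated directly. A numerically safe sketch (the helper name is our own) works in log-space with the standard library's `math.lgamma`:

```python
from math import lgamma, log, exp, pi

def roma_threshold(n, N, alpha=0.01):
    """Threshold zeta of equation (8), computed in log-space so that
    large ambient dimensions n do not overflow the gamma function."""
    log_num = (log(4.0) + 0.5 * log(pi)
               + lgamma((n + 1) / 2.0)
               + log(log(1.0 / (1.0 - alpha / 2.0))))
    log_den = 2.0 * log(N) + lgamma(n / 2.0)
    return exp((log_num - log_den) / (n - 1.0))
```

As expected from the analysis, the value grows with the ambient dimension $n$ and shrinks as the number of points $N$ grows.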

### III-C Feature - Simplicity

ROMA is simple to implement, and the main complexity lies in computing all the angles subtended. This requires the computation of $N(N-1)/2$ angles, and hence the complexity is $O(N^2 n)$. It is computationally similar to the outlier identification step of CoP [5] and hence has the same complexity. The second step of robust PCA would be an SVD on the inlier points recovered by the algorithm, which, if implemented without truncation, has a time complexity of $O(\min(N^2 n, N n^2))$ [39]. The algorithm does not involve solving a complex optimization problem and is not iterative. Running time comparisons are given in Section V.

## IV Theoretical Analysis of ROMA

In this section we address the following points:

• Give the reasoning behind choosing the minimum acute angle as the metric to identify outliers.

• Derive the high-probability lower bound on the outlier scores, which is parameter free.

• Show that although the lower bound is derived for a noise-free scenario, it can handle Gaussian noise as well.

• Derive the conditions under which the algorithm satisfies OIP($\alpha$) and those under which it fails ERP($\alpha$), as given in Definitions 6 and 7.

As a starting point, we state two lemmas that describe the distribution of the principal angles $\theta_{ij}$ made by the points. This involves a slight modification of Lemma 12 in [34], to distinguish the angles formed by an inlier and by an outlier.

###### Lemma 1.

When $i \in \mathcal{O}$, the angles $\theta_{ij}$, $j \ne i$, are identically distributed, and their probability density function is given by

$$h(\theta) = \frac{1}{\sqrt{\pi}}\frac{\Gamma\!\left(\frac{n}{2}\right)}{\Gamma\!\left(\frac{n-1}{2}\right)}(\sin\theta)^{n-2}, \quad \theta \in [0, \pi]. \qquad (9)$$

It has an expected value of $\pi/2$. Also, $h(\theta)$ is well approximated by a Gaussian pdf with mean $\pi/2$ and variance $1/n$.

###### Lemma 2.

When $i, j \in \mathcal{I}$, the angles $\theta_{ij}$ are identically distributed, and their probability density function is given by

$$h(\theta) = \frac{1}{\sqrt{\pi}}\frac{\Gamma\!\left(\frac{r}{2}\right)}{\Gamma\!\left(\frac{r-1}{2}\right)}(\sin\theta)^{r-2}, \quad \theta \in [0, \pi]. \qquad (10)$$

It has an expected value of $\pi/2$. Also, $h(\theta)$ is well approximated by a Gaussian pdf with mean $\pi/2$ and variance $1/r$ whenever $r$ is sufficiently large.

###### Proof.

For the algorithm we use the acute angles $\phi_{ij}$ instead of $\theta_{ij}$. Next we discuss the mathematical reason for this.
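Lemmas 1 and 2 are easy to check empirically. The snippet below samples independent pairs of uniform points on $\mathbb{S}^{n-1}$ and verifies that the principal angle concentrates at $\pi/2$ with standard deviation close to $1/\sqrt{n}$, the Gaussian approximation stated above (the parameter choices are our own).

```python
import numpy as np

# Empirical check of Lemma 1: principal angles between independent
# uniform points on S^{n-1} have mean pi/2 and are approximately
# Gaussian with variance 1/n.
rng = np.random.default_rng(0)
n, pairs = 400, 20000
A = rng.standard_normal((pairs, n))
A /= np.linalg.norm(A, axis=1, keepdims=True)   # uniform on the sphere
B = rng.standard_normal((pairs, n))
B /= np.linalg.norm(B, axis=1, keepdims=True)
theta = np.arccos(np.clip((A * B).sum(axis=1), -1.0, 1.0))
print(theta.mean(), theta.std())   # close to pi/2 and 1/sqrt(400) = 0.05
```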

### IV-A Why use ϕij instead of θij and lower bound the outlier score?

From Lemmas 1 and 2, the principal angle that an outlier point makes with any other point, be it an inlier or an outlier, is typically concentrated around $\pi/2$, especially at large $n$. On the other hand, because the dimension $r$ of the subspace is much smaller than $n$, the angle that an inlier makes with another inlier is more spread out around $\pi/2$. If one wanted to classify a point indexed by $i$ as an inlier or outlier based on the minimum principal angle it subtends, the following strategy could be employed: classify the point as an outlier when this angle lies inside an interval $(t_1, t_2)$, with $t_1$ and $t_2$ very close to $\pi/2$, and as an inlier when the angle falls below $t_1$ or above $t_2$. This would require looking into different classification regions and two thresholds, which can be avoided by using the acute angle $\phi_{ij}$. In that case the classification requires just one threshold. When using $\phi_{ij}$, whose properties are characterized in Appendix B, the minimum acute angle that an outlier makes becomes very close to $\pi/2$, and a point may be classified as an outlier when its score exceeds the threshold. For an inlier, the score will be much smaller than $\pi/2$, and a point may be classified as an inlier when its score is below the threshold. Hence the problem of outlier identification reduces to finding one appropriate threshold to apply to the scores $q_i$ defined in (7). If we can derive a high-probability lower bound $\zeta$ on $q_i$, $i \in \mathcal{O}$, then we can use it as the threshold to classify a point as an inlier or outlier. Further, for the algorithm to be parameter free, we derive $\zeta$ such that it depends only on the number of data points $N$ and the ambient dimension $n$, which are of course always known. The next subsection gives the derivation of the bound $\zeta$.

### IV-B Parameter free threshold lower bounding qi, i∈O

The theorem below gives the lower bound on the outlier scores:

###### Theorem 1.

Under Assumption 1, the minimum acute angle subtended by an outlier point is greater than $\zeta$, with high probability of at least $1 - \alpha$, i.e.

$$q_i > \left[\frac{4\sqrt{\pi}\,\Gamma\!\left(\frac{n+1}{2}\right)\log\!\left(\frac{1}{1-\alpha/2}\right)}{N^2\,\Gamma\!\left(\frac{n}{2}\right)}\right]^{\frac{1}{n-1}} \quad \forall i \in \mathcal{O}, \quad \text{w.p.} \ge 1 - \alpha. \qquad (11)$$
###### Proof.

We use Definition 4 for the minimum outlier score $q_O$. If $q_O > \zeta$, then $q_i > \zeta$, $\forall i \in \mathcal{O}$. Hence we work with $q_O$. Define

$$\theta_{O\min} = \min_{i \in \mathcal{O}} \ \min_{j = 1, 2, \ldots, N, \, j \ne i} \theta_{ij}, \qquad (12)$$

which is the minimum principal angle that an outlier point makes. Further define,

$$\theta_{O\max} = \max_{i \in \mathcal{O}} \ \max_{j = 1, 2, \ldots, N, \, j \ne i} \theta_{ij}, \qquad (13)$$

which is the maximum principal angle that an outlier point makes. Note that $q_O$ is computed from the acute angles formed by a point; we will see how, using principal angles, we arrive at a bound for $q_O$. For this, we use the result given in Lemma 7 in Appendix B, which relates the acute-angle minimum $q_O$ to $\theta_{O\min}$ and $\theta_{O\max}$. From equation (25), part (c), in Lemma 7,

$$P(q_O > x) \ge P(\theta_{O\min} > x) + P(\pi - \theta_{O\max} > x) - 1. \qquad (14)$$

Therefore, to obtain a lower bound on $P(q_O > x)$, one proceeds by evaluating $P(\theta_{O\min} > x)$ and $P(\pi - \theta_{O\max} > x)$. Note that [40] shows that the angles formed by uniformly chosen random points are only pairwise independent and not mutually independent, and hence characterizing the exact distribution of $\theta_{O\min}$ is very difficult. Hence we attempt to bound $P(\theta_{O\min} \le x)$ and $P(\pi - \theta_{O\max} \le x)$.

For a moment, let us deviate from our problem setting and look at a setting where there are $p$ points chosen uniformly at random from $\mathbb{S}^{n-1}$. Let us denote this virtual setting as $\mathcal{V}_p$. Let $\theta_{p\min}$ denote the minimum pairwise principal angle among them, i.e., $\theta_{p\min} = \min_{i \ne j} \theta_{ij}$, and let $\theta_{p\max} = \max_{i \ne j} \theta_{ij}$. The asymptotic distributions of $\theta_{p\min}$ and $\theta_{p\max}$ are given by Lemma 6 in Appendix A. Applying this lemma,

$$P(\theta_{p\min} \le x) = P(\pi - \theta_{p\max} \le x) = 1 - e^{-K p^2 x^{n-1}} \quad \text{for } 0 \le x \le \pi, \text{ as } p \to \infty, \qquad (15)$$

where $K = \frac{\Gamma\left(\frac{n}{2}\right)}{4\sqrt{\pi}\,\Gamma\left(\frac{n+1}{2}\right)}$. We can state the following remark about the case when $p$ is finite.

###### Remark 1.

For $p$ points chosen uniformly at random from $\mathbb{S}^{n-1}$, and for $\theta_{p\min}$ and $\theta_{p\max}$ as defined before, for $0 \le x \le \pi$,

$$P(\theta_{p\min} \le x) \le 1 - e^{-K p^2 x^{n-1}}, \qquad P(\pi - \theta_{p\max} \le x) \le 1 - e^{-K p^2 x^{n-1}}. \qquad (16)$$

This can be argued as follows. Let us start with $p$ points chosen uniformly at random from $\mathbb{S}^{n-1}$, and let the index set be denoted as $J_p$. Now sample more and more points so that the number of points grows to infinity, and denote the index set in that situation as $J_\infty$. Clearly $J_p \subseteq J_\infty$. In other words, whenever $\min_{i,j \in J_p, i \ne j} \theta_{ij} \le x$, then $\min_{i,j \in J_\infty, i \ne j} \theta_{ij} \le x$, while the reverse is not true. Hence,

$$P\Big(\min_{i,j \in J_p, i \ne j} \theta_{ij} \le x\Big) \le P\Big(\min_{i,j \in J_\infty, i \ne j} \theta_{ij} \le x\Big).$$

The LHS above is $P(\theta_{p\min} \le x)$, and the RHS is nothing but the limiting probability as $p \to \infty$. Hence, using (15), for $0 \le x \le \pi$,

$$P(\theta_{p\min} \le x) \le 1 - e^{-K p^2 x^{n-1}} \quad \text{for finite } p.$$

The same argument can be extended to $\theta_{p\max}$, to obtain

$$P(\pi - \theta_{p\max} \le x) \le 1 - e^{-K p^2 x^{n-1}} \quad \text{for finite } p.$$

Now let $\mathcal{O}_p \subseteq \{1, 2, \ldots, p\}$ be an index set, and let

$$\theta_{\mathcal{O}_p\min} = \min_{i \in \mathcal{O}_p} \ \min_{j \in \{1,2,\ldots,p\}, \, j \ne i} \theta_{ij}, \qquad \theta_{\mathcal{O}_p\max} = \max_{i \in \mathcal{O}_p} \ \max_{j \in \{1,2,\ldots,p\}, \, j \ne i} \theta_{ij}. \qquad (17)$$

We know that $\theta_{p\min}$ may be written as

$$\theta_{p\min} = \min_{i \in \{1,2,\ldots,p\}} \ \min_{j \in \{1,2,\ldots,p\}, \, j \ne i} \theta_{ij}.$$

Since $\mathcal{O}_p \subseteq \{1, 2, \ldots, p\}$, we have $\theta_{p\min} \le \theta_{\mathcal{O}_p\min}$. In other words, $\theta_{\mathcal{O}_p\min} \le x$ implies $\theta_{p\min} \le x$, while the reverse is not true. This is logical, as the minimum over a set is never greater than the minimum over any of its subsets. Hence

$$P(\theta_{\mathcal{O}_p\min} \le x) \le P(\theta_{p\min} \le x), \quad 0 \le x \le \pi \ \Rightarrow \ P(\theta_{\mathcal{O}_p\min} \le x) \le 1 - e^{-K p^2 x^{n-1}} \quad \text{from (16)}.$$

The same arguments can be extended to $\theta_{\mathcal{O}_p\max}$ as well: in this case $\theta_{\mathcal{O}_p\max} \le \theta_{p\max}$, hence $\pi - \theta_{p\max} \le \pi - \theta_{\mathcal{O}_p\max}$, and the rest follows. Hence, for a virtual setting $\mathcal{V}_p$ with any $p$, any index set $\mathcal{O}_p$, and $\theta_{\mathcal{O}_p\min}$ and $\theta_{\mathcal{O}_p\max}$ defined as in (17),

$$P(\theta_{\mathcal{O}_p\min} \le x) \le 1 - e^{-K p^2 x^{n-1}}, \qquad P(\pi - \theta_{\mathcal{O}_p\max} \le x) \le 1 - e^{-K p^2 x^{n-1}}, \quad 0 \le x \le \pi. \qquad (18)$$

Now let us return to our problem setting, where the outlier points are sampled uniformly at random from $\mathbb{S}^{n-1}$, while the other portion of points, the inliers, are sampled uniformly at random from the intersection of a lower-dimensional subspace and $\mathbb{S}^{n-1}$. This makes it different from a virtual setting with $N$ points in $\mathbb{S}^{n-1}$, i.e., $\mathcal{V}_N$ (here the number of points in the virtual setting is taken to be $p = N$). We state the following remark.

###### Remark 2.

Under Assumption 1, the angles that an outlier point makes with any other point have the same statistical properties in the inlier-outlier setting and in $\mathcal{V}_N$.

This can be argued as follows. By Assumption 1, the inlier points come from a randomly chosen subspace and are sampled uniformly at random within it, so the angle that an outlier point makes with an inlier point has the same statistical behavior as the angle made with a point sampled uniformly at random from $\mathbb{S}^{n-1}$. The angle that an outlier point makes with another outlier already conforms to $\mathcal{V}_N$. Hence the remark is true. Note that this is not true for the angles formed by two inlier points.

Therefore, result (18) can be applied directly to $\theta_{O\min}$ and $\theta_{O\max}$, i.e.

$$P(\theta_{O\min} \le x) \le 1 - e^{-K N^2 x^{n-1}} \ \Rightarrow \ P(\theta_{O\min} > x) \ge e^{-K N^2 x^{n-1}} \ \text{ and } \ P(\pi - \theta_{O\max} > x) \ge e^{-K N^2 x^{n-1}}. \qquad (19)$$

Applying result (19) in (14), we obtain

$$P(q_O > x) \ge 2 e^{-K N^2 x^{n-1}} - 1.$$

We want a lower bound $\zeta$ such that $q_O > \zeta$ with probability at least $1 - \alpha$. From the above equation, setting $x = \zeta$ and equating the right-hand side to $1 - \alpha$,

$$2 e^{-K N^2 \zeta^{n-1}} - 1 = 1 - \alpha \ \Rightarrow \ e^{-K N^2 \zeta^{n-1}} = 1 - \frac{\alpha}{2} \ \Rightarrow \ K N^2 \zeta^{n-1} = \log\left(\frac{1}{1 - \alpha/2}\right).$$

Using the value for $K$ from (24), we arrive at the threshold as in equation (8):

$$\zeta = \left[\frac{4\sqrt{\pi}\,\Gamma\!\left(\frac{n+1}{2}\right)\log\!\left(\frac{1}{1-\alpha/2}\right)}{N^2\,\Gamma\!\left(\frac{n}{2}\right)}\right]^{\frac{1}{n-1}}.$$

Hence $q_O > \zeta$ with probability at least $1 - \alpha$. ∎

Summarizing, we have derived a high-probability lower bound $\zeta$ for $q_O$ which does not depend on the unknown parameters $r$ and $\gamma$. We have already seen how $\zeta$ is used in Algorithm 1.

###### Corollary 1.

The outlier index set estimate $\hat{\mathcal{O}}$ output by ROMA contains $\mathcal{O}$ with probability at least $1 - \alpha$, i.e.

$$\mathcal{O} \subseteq \hat{\mathcal{O}} \quad \text{w.p.} \ge 1 - \alpha. \qquad (20)$$
###### Proof.

The proof is a simple application of Theorem 1. From the ordered scores $q_{(i)}$, the algorithm chooses the boundary as the last position at which the sorted scores remain above $\zeta$. By Theorem 1, every outlier score exceeds $\zeta$ with probability at least $1 - \alpha$, so with that probability the sorted indices beyond the boundary contain no outlier points. Hence the set of indices classified as inliers excludes all outliers with probability at least $1 - \alpha$, or, taking complements, $\mathcal{O} \subseteq \hat{\mathcal{O}}$ with the same probability. ∎

We have guaranteed that $\mathcal{O} \subseteq \hat{\mathcal{O}}$ with high probability. The next point of interest is when the identification is exact, i.e., $\hat{\mathcal{O}} = \mathcal{O}$ and $\hat{\mathcal{I}} = \mathcal{I}$.

### IV-C Algorithm properties and guarantees for exact recovery

In this section, we look at the properties of the algorithm for exactly recovering the true inlier set. We state two remarks on ROMA and the properties given in Definitions 6 and 7.

###### Remark 3.

When $\zeta$ is as given by (8), ROMA has OIP($\alpha$), regardless of the number of outliers in the system or the dimension of the underlying subspace.

###### Remark 4.

The algorithm ROMA has ERP($\alpha$) whenever the maximum inlier score satisfies $q_I \le \zeta$ with probability at least $1 - \alpha$, and it does not have ERP($\alpha$) whenever $q_I > \zeta$ with probability at least $1 - \alpha$.

When $q_I \le \zeta$ with probability at least $1 - \alpha$, every inlier score falls below the threshold, so $\hat{\mathcal{I}} = \mathcal{I}$ with probability at least $1 - \alpha$ and the algorithm has ERP($\alpha$). Next, we provide conditions that ensure that, on average, the inlier and outlier scores are well separated. When these conditions are satisfied, the algorithm recovers a good percentage of inliers, so that the subspace can be recovered efficiently. The smaller the rank of the true subspace, the better the results in terms of inlier recovery.

###### Lemma 3.

The expected scores $\mathbb{E}[q_i]$, $i \in \mathcal{I}$, and $\mathbb{E}[q_j]$, $j \in \mathcal{O}$, are well separated when $r$ is small relative to $n$ and the number of inliers is sufficiently large.

###### Proof.

In the next theorem we derive the theoretical condition under which ROMA is guaranteed not to have ERP($\alpha$).

###### Theorem 2.

The algorithm ROMA is guaranteed not to have ERP($\alpha$) when $q_I > \zeta$ holds with probability at least $1 - \alpha$, a condition that can be expressed through the cdf of $q_i$ when $i \in \mathcal{I}$ and the cdf of $q_i$ when $i \in \mathcal{O}$.

###### Proof.

This theorem gives us conditions under which the algorithm is guaranteed not to have ERP($\alpha$). In this case the outlier index estimate $\hat{\mathcal{O}}$, the set of points classified as outliers, is sure to contain inliers as well, i.e., $\mathcal{O} \subset \hat{\mathcal{O}}$. This usually occurs when the outlier fraction is high. As an example, for a fixed $\alpha$, ambient dimension and total number of samples, the theorem yields a number of inlier samples below which the algorithm is guaranteed not to have ERP($\alpha$); this critical number depends on the subspace dimension $r$. In all these cases the algorithm still has OIP($\alpha$). Prior art also derives similar conditions on performance guarantees: for instance, coherence pursuit [5] guarantees subspace recovery when the inlier density is sufficiently larger than the outlier density, while outlier pursuit [4] gives conditions on the outlier fraction and rank for successful subspace recovery. The theorem does not state conditions under which the algorithm is guaranteed to have ERP($\alpha$); it merely gives extreme cases where it does not. The following lemma gives an idea about the ERP($\alpha$) of ROMA.

###### Lemma 4.

Let $N_{in}$ denote the number of inliers and let $q_I$ be given as in Definition 5. Then, whenever $N_{in}$ and $r$ are such that $P(q_I \le \zeta) \ge 1 - \alpha$, the algorithm has ERP($\alpha$).

###### Proof.

To proceed further, it is required to characterize the complementary cdf (ccdf) of $q_i$ for $i \in \mathcal{I}$. Since the angles $\phi_{ij}$ are only pairwise independent and not mutually independent, as noted in [34] and [40], finding this ccdf analytically is mathematically very difficult. However, one can find it empirically through simulations to obtain more insight into the inlier recovery properties of ROMA.

### IV-D Impact of noise on the algorithm

###### Remark 5.

The algorithm ROMA retains OIP($\alpha$) even in the presence of Gaussian noise, irrespective of the noise variance.

When Gaussian noise is added to an outlier data point and the noisy outlier is normalized, the result is still a point selected uniformly at random from the $n$-dimensional unit hypersphere. Hence the theory and bounds on the outlier score do not change. Noise will, however, affect the inlier identification of the algorithm, as noise is bound to increase the inlier scores. This means that more inlier scores will exceed the threshold and hence be classified as outliers. So, in the presence of noise, for the algorithm to have ERP($\alpha$), more samples are needed than in the noiseless case, as expected. Mathematically quantifying the impact of noise on subspace recovery for this algorithm is left for future research.

### IV-E Parameter free - What about α?

The algorithm ROMA is parameter free in that it does not require any knowledge of $r$ or $\gamma$ for its working. Note that $\alpha$ is merely a term signifying the probability of success of outlier identification. When $\alpha$ is made smaller, OIP is satisfied by the algorithm with increasing probability, but at the same time the value of the threshold decreases. This in turn means more inliers could be classified as outliers, which impacts the inlier recovery property of the algorithm. Note that $\alpha$ neither depends on the data statistics nor needs to be set by cross validation. If one knows that an application is very sensitive to outliers, $\alpha$ can be set to a very low value. Setting $\alpha$ to a small value ensures a high probability of outlier identification while retaining a reasonable range on the number of inliers over which the algorithm has ERP($\alpha$). Unless otherwise specified, we have used a fixed small value of $\alpha$ in all our simulations.

## V Numerical Simulations

In this section, we present simulation results for the proposed method. We demonstrate the properties of the proposed algorithm in terms of inlier identification and subspace recovery on synthetic and real data. We also compare our method with some of the existing algorithms for robust PCA, in terms of running time and the log recovery error (LRE) of the estimated subspace, which is defined as in [5], i.e.,

 LRE=\log_{10}\left(\frac{\|\mathbf{U}-\hat{\mathbf{U}}\hat{\mathbf{U}}^{T}\mathbf{U}\|_{F}}{\|\mathbf{U}\|_{F}}\right), (21)

where U is a basis of the true inlier subspace and Û is the estimated basis from the algorithms. All our experiments on synthetic data assume the data model described in this paper under Assumption 1. First, we provide simulation results to validate the bound proposed in the algorithm.
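The LRE in (21) is straightforward to compute; below is a minimal sketch (the function and variable names are ours, not the paper's):

```python
import numpy as np

def log_recovery_error(U, U_hat):
    """LRE = log10(||U - U_hat U_hat^T U||_F / ||U||_F) for an orthonormal
    estimate U_hat of the subspace spanned by U, as in (21)."""
    residual = U - U_hat @ (U_hat.T @ U)
    return np.log10(np.linalg.norm(residual) / np.linalg.norm(U))

# A rotated basis of the same subspace gives an LRE at machine precision.
rng = np.random.default_rng(1)
n, r = 50, 5
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
Q, _ = np.linalg.qr(rng.standard_normal((r, r)))   # random rotation
print(log_recovery_error(U, U @ Q))  # very negative (near machine precision)
```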

### V-A Validation of bounds

Fig. 2 compares the outlier scores with the proposed lower bound over a range of outlier fractions, computed over many trials. As seen in the figure, the lower bound holds in all the trials, which means the algorithm always has OIP regardless of the other parameters. The bound, as expected, increases with the ambient dimension, because as the dimension increases, the outlier acute angles concentrate more towards π/2. It decreases with the number of points, which is also logical, because as the number of points increases, the minimum acute angle that an outlier point makes comes down. A special case where Gaussian noise is also added to the system is shown in Fig. 1(d); each data vector was corrupted with noise before normalization. Here too the bound holds in all the trials. Also plotted in the graphs is the expected value of the maximum of the inlier scores, calculated over these trials, which gives an indication of ERP. When this value crosses the threshold, fewer inliers are recovered. As seen, when the ratio of the subspace dimension to the ambient dimension is small, the algorithm has ERP even at very low inlier fractions (Fig. 1(c)). Also, when the number of samples increases, for the same values of the remaining parameters, the inlier recovery performance improves across a wider range of outlier fractions, as seen by comparing Figs. 1(b) and 1(a). The effect of noise in the system can be observed by comparing Figs. 1(c) and 1(d): the maximum inlier score curve is higher in Fig. 1(d), indicating that fewer inliers are recovered in the noisy case than in the noiseless case.
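A hedged sketch of such a validation run under Assumption 1 is given below. The score used here, the minimum acute angle a point makes with every other point, is our reading of the statistic the appendix analyzes, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, r, N, n_out = 100, 5, 500, 100       # illustrative sizes

# Data model of Assumption 1: inliers in a random r-dimensional subspace
# of R^n, outliers uniform on the unit hypersphere, all columns normalized.
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
inliers = U @ rng.standard_normal((r, N - n_out))
M = np.hstack([inliers, rng.standard_normal((n, n_out))])
M /= np.linalg.norm(M, axis=0)

# Score each point by the minimum acute angle it makes with the others.
G = np.clip(np.abs(M.T @ M), 0.0, 1.0)
np.fill_diagonal(G, 0.0)
scores = np.arccos(G.max(axis=1))

# Inliers crowd a low-dimensional subspace, so their minimum acute
# angles fall well below those of the isolated outliers.
print(scores[: N - n_out].mean() < scores[N - n_out:].mean())  # True
```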

### V-B Phase transitions

In this section, we look at the properties of the proposed algorithm in terms of the percentage of inliers recovered and the error in subspace recovery. First we look at the percentage of inliers recovered by ROMA against a varying subspace dimension and number of inliers, as these two are the critical parameters that determine the inlier recovery property of the algorithm. Fig. 3 shows the phase transition on inlier recovery: white indicates inlier recovery, and as the squares become darker, the inlier recovery becomes poorer. For this experiment we fixed the ambient dimension and varied the subspace dimension, along with varying the number of inliers while keeping the total number of points fixed. As can be interpreted from the figure, a very high percentage of inliers is recovered, even with very few inliers, when the subspace dimension is sufficiently low. As the subspace dimension increases, the percentage of inliers recovered decreases. This simulation result agrees with Lemma 3: whenever the requirement in Lemma 3 is satisfied, a large portion of the inliers is recovered even at very low inlier fractions, as seen in Fig. 3.

In the next experiment we examined the subspace recovery property of the algorithm, considering the noiseless case. After removing outliers through ROMA, the subspace is recovered by performing an SVD on the remaining points and choosing the left singular vectors corresponding to the non-zero singular values as the recovered subspace basis. A subspace is said to be recovered when the LRE of the estimated subspace is sufficiently small. We performed multiple trials over a range of values of the number of inliers and the number of outliers. Fig. 4 plots the percentage of trials in which the true subspace was recovered against the number of inliers and the number of outliers, with white indicating success. It is evident from Fig. 4 that whenever there is a good enough number of inliers, the subspace is recovered with minimal error no matter what the number of outliers is. The subspace recovery suffers when the number of inliers is low, as seen in the last row of Fig. 4. This can be compared with a similar plot in [5]: even in the worst case where [5] fails, the proposed algorithm has better inlier recovery over a wider range of the number of outliers.
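The recovery step described above can be sketched as follows. This is a minimal illustration; the rank threshold `tol` is our assumption for deciding which singular values count as non-zero:

```python
import numpy as np

def recover_subspace(X, tol=1e-8):
    """SVD-based recovery: keep the left singular vectors whose singular
    values are (numerically) non-zero relative to the largest one."""
    U_full, s, _ = np.linalg.svd(X, full_matrices=False)
    rank = int((s > tol * s[0]).sum())
    return U_full[:, :rank]

# Noiseless inliers only (as if all outliers were already removed).
rng = np.random.default_rng(3)
n, r = 50, 4
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
X = U @ rng.standard_normal((r, 200))
U_hat = recover_subspace(X)
# The estimate spans the true subspace: the projection residual is ~ 0.
print(U_hat.shape[1], np.linalg.norm(U - U_hat @ (U_hat.T @ U)))
```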

### V-C Advantage of being parameter free

Here, we highlight through a small experiment why assuming knowledge of parameters becomes tricky in certain scenarios and how ROMA avoids this pitfall. In Coherence Pursuit (CoP) [5], after arranging the points in order of decreasing scores, the algorithm picks the first few points to recover the subspace through PCA. Choosing how many points to pick requires knowledge of an upper bound on the number of outliers, since the chosen number must not exceed the number of inliers, while it should also be reasonably larger than the subspace dimension for successful subspace recovery.

Here we run two experiments. In the first, we vary the number of inliers while the number of points chosen to form the subspace in CoP [5] is held fixed; the results are shown in Fig. 4(a). As seen in the figure, the performance of CoP deteriorates heavily in terms of LRE when the number of inliers drops below this fixed value, i.e., when the assumption about the outlier fraction is wrong. This is highlighted further in the second experiment, where we show how the choice of this parameter affects performance when the number of inliers is fixed. When the parameter exceeds the number of inliers, outlier points are also chosen to form the subspace, which corrupts the estimated subspace badly, as seen in Fig. 4(b). In all cases the performance of ROMA is unaffected since it is parameter free; the remaining data parameters were held fixed for these experiments. This exercise highlights the importance of an algorithm being parameter free and the effects of an incorrect parameter estimate on the performance of a parameter dependent algorithm such as CoP.

### V-D Comparison with other state of the art algorithms

Here we compare the proposed algorithm with the existing techniques CoP [5], Fast Median Subspace (FMS) [24], Geometric Median Subspace (GMS) [25] and the outlier removal algorithm for subspace clustering in [22] (denoted ORSC for convenience), in terms of the log recovery error and running time. For FMS, we used Algorithm 1 in [24] with the default parameter settings and a maximum of 100 iterations. For GMS as well, we used the default parameter settings and chose the appropriate trailing columns of the output matrix as the basis of the estimated subspace. For CoP, we implemented the first method proposed, with the number of data points chosen for subspace recovery set to a value always less than the number of inliers in our experimental settings, so that it works well. For ORSC, we used the primal-dual interior point method from the ℓ1-magic code repository [41] for solving the underlying optimization problem; its parameters were changed from the default settings to improve the convergence rate without degrading performance. Table I summarizes each algorithm in terms of its log recovery error at various outlier fractions, its running time, and the parameters it uses; the last column also indicates other free parameters an algorithm requires, such as regularization or convergence parameters. It is observed that ROMA performs on par with the existing methods in terms of LRE without requiring knowledge of the subspace dimension or the outlier fraction, and is nearly as fast as CoP. ORSC, which also does not use parameter knowledge, attains similar LRE values but is much slower than ROMA and also requires multiple parameters, such as a convergence criterion for the underlying optimization problem.

Fig. 6 shows the LRE of the algorithms against the number of outliers in the system when the number of inliers is fixed; the data dimensions were also held fixed for this graph. It is seen that ROMA performs on par with all the other state of the art methods in this scenario as well.

### V-E Performance on Real data

Here we perform a small experiment on real data. We choose a random subset of the well-known MNIST handwritten digit data set and corrupt a fraction of the images by adding heavy Gaussian noise. Samples of corrupted and uncorrupted data are given in Fig. 7. The corrupted points are considered outliers and the uncorrupted data points are considered inliers; the aim of the experiment is to recover all inliers and remove the outliers. For processing, each image was converted to a vector and the pixel values were recentred by subtracting a constant offset, giving the data matrix M. We then perform outlier removal on M using both ROMA and CoP and compare the results. For CoP we experimented with three choices of its parameter (the number of points picked to form the subspace) given as input to the algorithm, and we varied the outlier fraction over a range of values. Note that this experimental setting does not conform to Assumption 1, since neither do the inliers come from a very low rank subspace, nor do the outliers after normalization follow a uniform distribution on the hypersphere; hence the theoretical guarantees need not hold. We performed the experiment over multiple trials with a different data set in each trial and averaged the results. ROMA recovered all the inliers in every case, and CoP did so when its parameter was set appropriately relative to the number of inliers. We also compared the percentage of outliers in the inlier estimate for both algorithms in Fig. 8. ROMA, in the worst case at the highest outlier fraction, admits only a small percentage of outliers into its inlier estimate, and when the outlier fraction is low this value is very close to zero. CoP behaves similarly when its parameter does not exceed the number of inliers, but once it does, the outlier content in its inlier estimate rises sharply. Considering that the number of inliers is unknown and that, in an unsupervised scenario, this parameter cannot be set by cross validation, this experiment again highlights the importance of the parameter free nature of ROMA.
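The preprocessing for this kind of experiment can be sketched as below. This is a hypothetical stand-in: random arrays replace the actual MNIST images, and the sizes, noise level and centring offset are illustrative rather than the paper's exact values:

```python
import numpy as np

rng = np.random.default_rng(4)
n_images, h, w = 400, 28, 28            # illustrative counts and image size
images = rng.integers(0, 256, size=(n_images, h * w)).astype(float)

# Corrupt a fraction of the images with heavy Gaussian noise; the
# corrupted images play the role of outliers, the rest are inliers.
out_frac = 0.25
n_out = int(out_frac * n_images)
noisy_idx = rng.choice(n_images, size=n_out, replace=False)
images[noisy_idx] += 200.0 * rng.standard_normal((n_out, h * w))

# Center the pixel range (offset here is the midpoint of [0, 255], an
# illustrative choice) and stack the vectors as columns of M.
M = (images - 127.5).T
print(M.shape)  # (784, 400)
```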

## VI Conclusion and future research

In this paper a simple, fast, parameter free algorithm for robust PCA was proposed, which performs outlier removal without assuming knowledge of the dimension of the underlying subspace or the number of outliers in the system. The performance was analyzed both theoretically and numerically. The importance of this work lies in the parameter free nature of the proposed algorithm, since estimating unknown parameters or tuning free parameters in an algorithm is a cumbersome task. Here the popular outlier model was considered, with the outliers assumed to be sampled uniformly from a high dimensional hypersphere and the inliers lying in a single low dimensional subspace. This may be considered a starting point for developing parameter free algorithms for more complicated data models, such as ones with structured outliers or inliers sampled from a union of subspaces.

## Appendix A Some useful Lemmas, proofs of Lemmas 1 and 2

###### Lemma 5 (Lemma 12 from [34]).

Let the points be chosen independently and uniformly at random on the unit hypersphere, and let the pairwise angles be defined as in equation (4). Then the probability density function of each such angle is given by

 h(\theta)=\frac{1}{\sqrt{\pi}}\,\frac{\Gamma(\frac{n}{2})}{\Gamma(\frac{n-1}{2})}\,(\sin\theta)^{n-2},\quad\theta\in[0,\pi] (22)

Lemma 5 implies that the angles are identically distributed with the pdf h(θ). The expectation of this distribution is π/2, and the angles concentrate around π/2 as n grows. Using the results in [34], we state the following:

###### Remark 6.

The pdf h(θ) can be approximated by the pdf of a Gaussian distribution with mean π/2 and variance 1/n in higher dimensions. In fact, the suitably centred and scaled angle converges weakly in distribution to a standard Gaussian as n → ∞.
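Lemma 5 and Remark 6 are easy to check by Monte Carlo; the sketch below uses arbitrary dimensions and sample counts:

```python
import numpy as np

# Empirical check of Lemma 5 / Remark 6: pairwise angles between points
# chosen uniformly on the unit hypersphere have mean pi/2 and, for large
# n, are approximately Gaussian with variance 1/n.
rng = np.random.default_rng(5)
n, N = 200, 1000
P = rng.standard_normal((n, N))
P /= np.linalg.norm(P, axis=0)          # uniform points on the hypersphere

cos = (P.T @ P)[np.triu_indices(N, k=1)]
theta = np.arccos(np.clip(cos, -1.0, 1.0))
print(abs(theta.mean() - np.pi / 2))    # mean close to pi/2
print(abs(theta.var() - 1.0 / n))       # variance close to 1/n
```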

The remark has been validated in [34]. Using the above results now we can prove Lemmas 1 and 2.

###### Proof of Lemma 1.

In our problem setting, by Assumption 1 the outliers are chosen uniformly at random from the unit hypersphere and the subspace is also chosen uniformly at random. Hence, to an outlier, all the other points are just part of a set of uniformly chosen independent points on the hypersphere. The results in the lemma then follow directly from Lemma 5, Remark 6 and Assumption 1. ∎

###### Proof of Lemma 2.

Assuming that the dimension of the subspace in which the inliers lie is also large enough, the points within the subspace are uniformly chosen points on the corresponding lower-dimensional unit hypersphere, and their angles are distributed as in equation (22) with the ambient dimension replaced by the subspace dimension. Hence, from Lemma 5, Remark 6 and Assumption 1, the lemma is proved. ∎

Finally in this section, we also state an important result from [34] used in the proof of Theorem 1.

###### Lemma 6 (Theorem 2 from [34]).

Suppose there are N vectors selected uniformly at random from the unit hypersphere. Let θ_min denote the minimum pairwise angle amongst them, and let the corresponding minimum over the angles defined in (4) be formed analogously. Then both, suitably scaled, converge weakly as N → ∞ to the distribution given by:

 F(x)=\begin{cases}1-e^{-Kx^{n-1}}&\text{for }x\geq 0\\ 0&\text{for }x<0\end{cases} (23)

where

 K=\frac{1}{\sqrt{4\pi}}\,\frac{\Gamma(\frac{n}{2})}{\Gamma(\frac{n+1}{2})} (24)

This lemma gives us the asymptotic distribution of the minimum pairwise angle amongst all the angles formed between any two points from a set of N points chosen uniformly at random on the hypersphere. Hence, as N → ∞, the minimum pairwise angle converges to 0 in probability. Equivalently, . Use and get