
Sampling of Planar Curves: Theory and Fast Algorithms

Sunrita Poddar, Qing Zou, and Mathews Jacob. Sunrita Poddar and Mathews Jacob are with the Department of Electrical and Computer Engineering, and Qing Zou is with Applied Mathematics & Computational Sciences, at the University of Iowa, Iowa City, IA 52242, USA (e-mail: sunrita-poddar@uiowa.edu; zou-qing@uiowa.edu; mathews-jacob@uiowa.edu). This work is supported by grants NIH 1R01EB019961-01A1 and R01 EB019961-02S1.
Abstract

We introduce a continuous domain framework for the recovery of a planar curve from a few samples. We model the curve as the zero level set of a trigonometric polynomial. We show that the exponential feature maps of the points on the curve lie on a low-dimensional subspace. We show that the null-space vector of the feature matrix can be used to uniquely identify the curve, given a sufficient number of samples. The theoretical guarantees show that the number of samples required for unique recovery depends on the bandwidth of the underlying trigonometric polynomial, which is a measure of the complexity of the curve. We introduce an iterative algorithm that relies on the low-rank property of the feature maps to recover the curves when the samples are noisy or when the true bandwidth of the curve is unknown. We also demonstrate the preliminary utility of the proposed curve representation in the context of image segmentation.

Index Terms: curve recovery, band-limited function, level set, kernels, nuclear norm, denoising.

I Introduction

The recovery of a curve from a finite number of unorganized and noisy points is an important problem, with applications to computer vision [1, 2] and image processing [3, 4, 5]. Since one can find an infinite number of curves that pass through the points, this problem is inherently ill-posed. In addition, the recovery is challenging because the topology of the curve is unknown and may change with noise. Popular approaches for the representation of shapes with arbitrary topology include (a) explicit representations using a mesh or graphs [1, 6], and (b) implicit level-set representations [7, 8, 9]. In the first scheme, the shape is constructed from the noisy points as a graph, where the nodes corresponding to adjacent data points are connected. By contrast, level set functions are often constructed from the points as distance functions, using the information of normals or by solving a linear system of equations involving radial basis functions [10]. Several methods exist to address the issue of noisy data. For instance, smoothing based on spectral graph theory and Laplacian/curvature flow [11, 12] is popular in the context of mesh/graph based representations. Similar iterative strategies also exist in the context of level-set schemes. All of these methods suffer from an inherent parametrization of the curve, which often depends on the sampling density.

The main focus of this paper is to introduce a unified continuous domain perspective for the recovery of points on a curve in a plane. We assume that the points live on a smooth curve, which is the zero level set of a bandlimited function. We note that the bandwidth/support is a measure of the complexity of the curves that can be represented as the zero level set. The support of the function may be large in one dimension and much smaller in the other, which allows for the adaptation of the representation to the shape of the curve. We show that the non-linear features of any arbitrary point on such a curve, obtained by the lifting of the point using an exponential map, can be annihilated by the inner product with the coefficients of the level set function. We term this property the annihilation relation. The dimension of the feature maps is dependent on the bandwidth, and hence the complexity of the curve. When multiple points on the curve are available, the coefficients of the level set will annihilate the feature vectors of all of the points. Thus, the coefficient vector is a null-space vector of the feature matrix, whose columns correspond to the exponential features of the points. We use this property to determine sampling conditions that guarantee the recovery of the curve from a finite number of points. Our results show that the bandlimited function, and hence the curve, can be recovered uniquely when the number of points exceeds a threshold that depends on the complexity of the curve.

We also introduce efficient strategies for the setting where the bandwidth of the curve is unknown and the samples are noisy. In practical applications, the precise bandwidth/support of the level set function is usually unknown. We show that when the support is overestimated, there exist multiple linearly independent filters that annihilate the exponential maps; the common zeros, or equivalently the zero level set of the greatest common divisor of the filters, uniquely specifies the curve in this case. This also implies that the feature matrix has multiple linearly independent null-space vectors and hence is low-rank. In other words, the exponential features of the points on the curve live in a finite-dimensional space when the bandwidth is overestimated. Note that the Gram matrix of the exponential features corresponds to a kernel matrix, which connects the bandlimited curve model with widely used non-linear low-rank kernel methods [13]. When the curve samples are noisy, we rely on a nuclear norm minimization formulation to denoise the points. Specifically, we seek the denoised curve samples such that their feature vectors form a low-rank matrix. We use an iteratively reweighted algorithm to solve the above optimization problem, which alternates between the estimation of a weight matrix that approximates the null space and a quadratic sub-problem to recover the data. We note that the iterative algorithm bears strong similarity to the Laplacian/curvature flows used in graph denoising, which provides the connection between implicit level-set and explicit graph-based curve representations. One can also derive a graph Laplacian matrix from the weight matrix, which facilitates the smoothing of signals that live on the nodes of the graph. This graph can be viewed as a discrete mesh approximation to the points that live on the curve. Our experiments show that the Laplacian matrix obtained by solving the proposed optimization algorithm is more representative of the graph structure than the ones obtained with classical methods [14], especially when it is estimated from noisy data. This framework reveals links between recent advances in super-resolution theory [15, 16, 17], manifold smoothness based regularization, and graph signal processing [18].

This work has connections with Logan’s results [19] for the recovery of 1-D bandlimited functions from their zero crossings, as well as their extensions to 2-D [20]. The main challenge in these works is the extreme sensitivity of the bandlimited function to the location of the zero crossings when no amplitude information of the signal is used [20]; this has prompted the use of additional information, including multi-level contours [20] and multi-scale edges [21]. By contrast, we focus on the recovery of the curve itself, rather than the bandlimited function, which is considerably simpler. Specifically, we propose to recover the curve as the zero level set of the sum of squares of all band-limited functions that satisfy the sampling conditions. In addition, unlike [20], our results are also valid for unions of irreducible curves. The proposed work is built upon our prior work on annihilation based super-resolution image recovery [22, 16, 23, 24, 25, 26], which has similarities to algebraic shape recovery [27] and the recent work by Ongie et al., which considered polynomial kernels [25]. Our main focus is to generalize [25] to shift invariant kernels, which are more widely used in applications. This approach can also be viewed as the non-linear generalization of the finite rate of innovation theory [28, 29, 30, 31, 32]. We also introduce sampling conditions and algorithms to determine the curve, when the dimension is low. In addition, the iterative algorithm using the kernel trick shows the connections with graph Laplacian based methods used in graph signal processing. The conference version of this work has been published in [33, 34]; this paper provides the proofs of these results and a more elaborate description of the details.

Fig. 1: Illustration of the annihilation relations in 2-D. We assume that the curve is the zero level set of a bandlimited function , shown in the top left. The Fourier coefficients of , denoted by , are support limited to the set , denoted by the red square on the figure in the bottom right. Each point on the curve satisfies . Using the representation of the curve specified by (3), we thus have . Note that is the exponential feature map of the point , whose dimension is specified by the cardinality of the set . Note that the feature maps of all the points satisfy these annihilation relations, suggesting that the feature maps lie on a subspace, whose normal vector is specified by .

II Background

II-A Parametric level set representation of curves

We model the curve in as the zero level set

(1)

of . We denote the curve specified by the zero level set of by . Note that the level set representation of curves is widely used in segmentation and shape representation [7]. Several authors have proposed to represent as a linear combination of basis functions [8, 9]:

(2)

A popular choice is the shift invariant representation [35] using compactly supported basis functions such as B-splines [9, 8] or radial basis functions [10]. In this case, we have , where is a B-spline or Gaussian function. Here, denotes the number of basis functions, which is equivalent to the number of grid points in the context of shift-invariant representations.

II-B Bandlimited level set representation

We now consider the special case of parametric level set curve representation, where the basis functions are chosen as . Then

(3)

Here, the Fourier coefficients are supported on a rectangular grid . See Fig. 1 for an illustration. Note that for each index , the basis functions satisfy and hence are not compactly supported as in [8, 9]. The above bandlimited representation has a one-to-one correspondence with the curve representation using complex polynomials, as shown in Section II-C. We hence also refer to the bandlimited function in (3) as a bandlimited polynomial.
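
To make the representation (3) concrete, the following sketch (an illustration we added; the function name and the example coefficients are our own) evaluates a small trigonometric polynomial on a grid and marks the points near its zero level set:

```python
import numpy as np

def levelset(c, K1, K2, grid_size=256):
    """Evaluate psi(x, y) = sum over k in Lambda of c[k] exp(j 2 pi (k1 x + k2 y)).

    c is a (2*K1+1) x (2*K2+1) array of Fourier coefficients, with the
    support Lambda taken as the centered rectangle [-K1, K1] x [-K2, K2].
    """
    x = np.linspace(0, 1, grid_size, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")
    psi = np.zeros_like(X, dtype=complex)
    for m, k1 in enumerate(range(-K1, K1 + 1)):
        for n, k2 in enumerate(range(-K2, K2 + 1)):
            psi += c[m, n] * np.exp(2j * np.pi * (k1 * X + k2 * Y))
    return psi.real  # real-valued for conjugate-symmetric coefficients

# Coefficients of cos(2 pi x) + cos(2 pi y) - 0.5, whose zero level set
# is a closed curve in [0, 1)^2.
c = np.zeros((3, 3), dtype=complex)
c[0, 1] = c[2, 1] = 0.5   # k = (-1, 0) and (1, 0)
c[1, 0] = c[1, 2] = 0.5   # k = (0, -1) and (0, 1)
c[1, 1] = -0.5            # k = (0, 0)
curve_mask = np.abs(levelset(c, 1, 1)) < 1e-2  # points near the zero level set
```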

We note that the bandlimited level set model can represent closed curves with arbitrary complexity, as illustrated in Fig. 2. The representation cannot represent open curves in the strict sense. However, we note that such an open curve can be approximated with arbitrary accuracy by a bounding closed curve [36]. As can be appreciated from the figure, the complexity of the curve is dependent on the cardinality of the set , which we denote by . This is the number of free parameters in the curve representation. We also assume that is the smallest set of coefficients (minimal set) that satisfies the above relation. We note that in the 1-D setting (), there is a one-to-one correspondence between the number of points satisfying (1) and the bandwidth [16, 27]; this relation enabled the use of in [16, 26] as a surrogate for sparsity in signal recovery. In higher dimensions, can still serve as a complexity measure when the recovery of isolated Diracs is considered. However, we emphasize that (3) can provide a significantly richer representation, even when the signal is not a collection of isolated Diracs but consists of points on a curve.

Fig. 2: Illustration of the type of curves that can be represented as the level set of bandlimited functions. We note that a wide variety of closed curves can be represented, which demonstrates that the representation is not restrictive.

II-C Relation between bandlimited functions & polynomials

We now briefly introduce the relation between bandlimited functions and complex polynomials, which we will use in the rest of the paper. We define a one-to-one mapping from trigonometric polynomials to complex polynomials, using the change of variables: . This maps each to the unit circle in the complex plane . This mapping allows us to rewrite (3) as a complex polynomial

We note that is a one-to-one mapping from to . Accordingly, we can say that the two sets

and

are isomorphic. Thus, we can analyze the properties of (including the number of zeros) to obtain the corresponding properties of the trigonometric polynomial (3).
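
As a quick numerical check of this isomorphism (an illustration we added; the random coefficients are arbitrary), the snippet below evaluates the same function in its trigonometric form and in its complex-polynomial form, and confirms that they agree on the torus:

```python
import numpy as np

rng = np.random.default_rng(0)
K1 = K2 = 2
c = rng.standard_normal((2 * K1 + 1, 2 * K2 + 1)) \
    + 1j * rng.standard_normal((2 * K1 + 1, 2 * K2 + 1))

x, y = 0.3, 0.7
z1, z2 = np.exp(2j * np.pi * x), np.exp(2j * np.pi * y)

# Trigonometric form: sum over k in Lambda of c[k] exp(j 2 pi (k1 x + k2 y))
trig = sum(c[m, n] * np.exp(2j * np.pi * ((m - K1) * x + (n - K2) * y))
           for m in range(2 * K1 + 1) for n in range(2 * K2 + 1))

# Complex-polynomial form: the same function equals a polynomial in
# (z1, z2), up to the nonvanishing monomial factor z1^K1 * z2^K2.
poly = z1 ** (-K1) * z2 ** (-K2) * sum(c[m, n] * z1 ** m * z2 ** n
                                       for m in range(2 * K1 + 1)
                                       for n in range(2 * K2 + 1))

assert np.allclose(trig, poly)  # identical zeros on |z1| = |z2| = 1
```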

Definition 1 (Irreducible polynomials).

A trigonometric polynomial is termed irreducible if the polynomial specified by is irreducible. A polynomial is irreducible over the field of complex numbers if it cannot be expressed as the product of two non-constant polynomials with complex coefficients.

Definition 2 (Irreducible curve).

A curve is termed irreducible if it is the zero level set of an irreducible polynomial.

III Sampling of bandlimited curves

III-A Annihilation relations for points on the curve

Consider an arbitrary point on the curve specified by (1) and (2). By definition, we have , which translates to:

(4)

Note that is a non-linear mapping or lifting of a point to a high dimensional space, whose dimension is given by the cardinality of the set , denoted by . Note that this non-linear lifting strategy is similar to feature maps used in kernel methods. We hence term as the feature map of the point . Note that every point on the curve satisfies (4), which we term as the annihilation relation.

Let us now consider a set of points on the curve, denoted by . Note that the feature maps of each one of the points satisfy the above annihilation relations, which can be compactly represented as:

(5)

Here, is the feature matrix of the points and .

When is rank-deficient by one, the coefficient vector can be identified as the unique null-space vector of . This implies that the features lie in an dimensional subspace, whose normal is specified by . This annihilation relation is illustrated in Fig. 1, in the context of the bandlimited curves considered in the next subsection. We will show that there exists a unique null-space vector when the complex exponential basis functions are chosen as in Section II-B.

In practice, the points are often corrupted by noise. In the presence of noise, the null-space conditions are often not satisfied exactly. In this case, we can pose the least-squares estimation of the coefficients from the noisy data points as the minimization of the criterion:

(6)

where . To eliminate the trivial solution , we pose the recovery as the constrained optimization scheme:

(7)

The solution is the eigenvector corresponding to the smallest eigenvalue of .
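
A minimal sketch of this estimation step (our illustration; the helper names are ours), using the exponential feature maps that will be defined in (8). When sufficiently many samples are used, the recovered coefficients should match the true ones up to a scale factor:

```python
import numpy as np

def feature_map(points, K1, K2):
    """Exponential feature maps: one column per curve sample.

    points: (N, 2) array of samples; each point is lifted to the
    |Lambda| = (2*K1+1)(2*K2+1) dimensional feature space.
    """
    k1 = np.arange(-K1, K1 + 1)
    k2 = np.arange(-K2, K2 + 1)
    KK1, KK2 = np.meshgrid(k1, k2, indexing="ij")
    k = np.stack([KK1.ravel(), KK2.ravel()], axis=1)   # (|Lambda|, 2)
    return np.exp(2j * np.pi * (k @ points.T))         # (|Lambda|, N)

def estimate_coefficients(points, K1, K2):
    """Solve (7): the minimizer is the eigenvector of Phi Phi^H associated
    with the smallest eigenvalue, approximating the null-space vector in (5)."""
    Phi = feature_map(points, K1, K2)
    w, V = np.linalg.eigh(Phi @ Phi.conj().T)          # ascending eigenvalues
    return V[:, 0].reshape(2 * K1 + 1, 2 * K2 + 1)
```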

III-B Sampling of an irreducible bandlimited curve

We now focus on the problem of the recovery of the curve (1), given a few points on the curve. For the rest of the section, we assume the bandlimited curve representation (3), and determine the sampling conditions for the perfect recovery of the curve using (7). In this case, the annihilation relation (4) is satisfied with the feature maps defined as

(8)

We also assume that is a rectangular neighborhood in of size . We first review some results from algebraic geometry.

There is a one-to-one correspondence between trigonometric polynomials and complex polynomials, as shown in Section II-C. We use the extension of Bézout’s inequality to trigonometric polynomials, which bounds the number of common zeros of two polynomials that have no common factor.

Lemma 3 (Bézout’s inequality for bandlimited polynomials).

Let and be two bandlimited polynomials, whose Fourier coefficients are support limited to and , respectively. If and have no common factor, then the system of equations

(9)

has a maximum of solutions in .

The proof of Lemma 3 is given in Appendix VI-A. We use this property to derive our main results. We first focus on the case where is an irreducible bandlimited function.

Proposition 4.

Let be distinct points on the zero level set of an irreducible bandlimited function , whose Fourier coefficients are restricted to a rectangular region with size . Then the curve can be uniquely recovered by (5), when:

(10)

The proof is provided in Appendix VI-B.

Fig. 3: Illustration of Proposition 5. Panels: (a) the support of the Fourier coefficients; (b)-(c) the level set function and the curve; (d)-(g) recovery from 5, 10, 25, and 50 points. We consider a curve, shown on the top right, whose level set function is support limited to the region shown on the top left. We consider the recovery from different numbers of randomly chosen samples of ; the sampling locations are marked by red crosses. Note that the theory guarantees recovery when the number of samples exceeds around samples. We observe good recovery of the curve around 50 samples; note that our results are worst-case guarantees, and in practice fewer samples are sufficient for good recovery.

Note that the above sampling condition does not specify any constraint on the distribution of points on the curve; any set of points is sufficient for the recovery of the curve. This property is similar to well-known results in the non-uniform sampling of bandlimited signals [37], where the recovery is guaranteed under weak conditions on the nonuniform grid, provided the average sampling rate exceeds the Nyquist rate.

We compare this setting with the sampling conditions for the recovery of a piecewise constant image, whose gradients vanish on the zero level set of a bandlimited function [16]. The minimum number of Fourier measurements required to recover the function there is . When , complex Fourier samples are required, which is far more than the real samples required for the recovery of the curve in our setting. Note that the constant values within the regions bounded by the curves also need to be recovered in [16], which explains the higher sampling requirement.

III-C Sampling theorem for union of irreducible curves

We now generalize the previous result to the setting where the composite curve is a union of multiple irreducible curves. Equivalently, the level set function is the product of multiple irreducible bandlimited functions. We have the following result for this general case:

Proposition 5.

Let be points on the zero level set of a band-limited function , where the bandwidth of is specified by . Assume that has irreducible factors (i.e., ), where the bandwidth of the factor is given by . The curve can be uniquely recovered by (5), when each of the irreducible curves is sampled with

(11)

The total number of samples needed for unique recovery is specified by

(12)

which is upper bounded by .

We note that the upper bound can be approximated as for small values of , which is the upper bound in Proposition 4. The above result is proved in Appendix VI-C. Note that unlike the case considered in Section III-A, an arbitrary set of samples cannot guarantee perfect recovery. Each of the irreducible curves needs to be sampled in proportion to its complexity, specified by , to guarantee perfect recovery.

We verify the above proposition in Fig. 3. We consider a curve , where is support limited to a region. We consider the recovery from different numbers of randomly chosen samples of in the middle row. The random strategy ensures that the samples are distributed among the factors, roughly satisfying the conditions in Proposition 5. Note that the theory guarantees recovery when the number of samples exceeds around samples. We observe good recovery of the curve around 50 samples; note that our results are worst-case guarantees, and in practice fewer samples are sufficient for good recovery of most curves.

Fig. 4: Effect of the number of sampled points on the reconstruction error. We randomly generated several curves with different bandwidths and numbers of sampled points, and tried to recover the curves from these samples. The reconstruction errors of the curves, averaged over several trials, are shown in the above phase transition plot as a function of the bandwidth and the number of samples. It is seen that perfect recovery occurs whenever we have samples.

We further studied the above proposition in Fig. 4. We considered several random curves, each with a different bandwidth, and considered their recovery from different numbers of samples. The sampling locations were picked at random. The colors indicate the average reconstruction error between the actual curve and the reconstructed curves. This reconstruction error is computed as the sum of the distances between each point on one curve and the closest point to it on the other curve. We have also plotted the upper bound in red, while the number of unknowns in the curve representation is plotted in blue. We note that the curve can be recovered accurately when the number of samples exceeds the upper bound. We also note that in general, good recovery can be obtained for most curves when the number of samples exceeds .

III-D Curve recovery with unknown Fourier support

Propositions 4 and 5 assume that the true support of the Fourier coefficients of , specified by is known, in addition to the points . However, typically only the points will be known and the filter support will be unknown. We now consider the case where the filter support is over-estimated as . We focus on the recovery of the coefficients from the annihilation relation

(13)

The following result shows that the above matrix will have multiple linearly independent null-space vectors. However, if the curves are sampled as described below, the corresponding bandlimited functions satisfy some desirable properties that facilitate the recovery of the curves.

Proposition 6.

Consider the zero level set of the bandlimited polynomial with irreducible components, as described in Proposition 5. Let the assumed bandwidth of the curve be with and . Then, there exist multiple functions that satisfy . If the irreducible curves of the zero level set of are sampled with

(14)

all of the above functions, or equivalently the right null-space vectors of , will be of the form:

(15)

where is an arbitrary function such that .

Fig. 5: Illustration of Propositions 6 & 7. Panels (a)-(c): three null-space functions of ; panel (d): the sum-of-squares polynomial. We considered the recovery of the same curve as in Fig. 3, assuming unknown bandwidth. We over-estimated the support as 11x11, while the original support of is . The curve was sampled at 100 random locations, denoted by the red crosses. We note that , as predicted by Proposition 7. We show three null-space functions of in the first three columns. As can be seen from the figures, all of these functions are zero on the curve, in addition to possessing several other zeros. The sum-of-squares function, specified by (19) and shown in the right column, captures the common zeros, which specify the curve .

Note that the minimal function is a special case of (15), with . The above result is proved in Appendix VI-D. Since is the common factor of all the annihilating functions, all of them will satisfy for any point on the original curve, as well as at the sampling locations. This also implies that is a common divisor of the above functions . In fact, is the greatest common divisor, as we will show in the next paragraph. We now characterize the number of linearly independent annihilating functions, or equivalently the size of the right null space of .

Fig. 6: The over-estimated filter support (blue) is illustrated along with the minimal filter support (red). The set (green) contains all indices at which can be centered, while remaining inside .
Proposition 7.

We consider the trigonometric polynomial described in Proposition 6 and . Then:

(16)

with equality if the sampling conditions of Proposition 6 are satisfied.

Here,

(17)

This set is illustrated in Fig. 6, along with and . The above results provide a means to compute the original curve, even when the original bandwidth/support is unknown. Specifically, Proposition 7 shows that has null-space vectors, each of which satisfies (15). Moreover, from the proof of Proposition 7, we can see that any polynomial of the form

(18)

is a null-space vector of . Note that the exponentials are linearly independent, and hence the set spans the null space of . Since does not vanish in the domain, the only common zeros of will be the zeros of the minimal polynomial , meaning that is the greatest common divisor of the functions that span the null space of . Therefore, the common zeros of these functions, or equivalently the zeros of the greatest common divisor, specify the curve. A cheaper alternative to evaluating the greatest common divisor is to evaluate the sum-of-squares polynomial, specified by:

(19)

which will vanish only on points satisfying . Here, is the dimension of the right null space of . Since is a valid right null-space vector of that vanishes only on the true curve, the sum-of-squares function specified in (19) will vanish only on the true curve. Thus, if the total number of sampled points is , distributed as in (14), then the curve can be uniquely recovered.
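
The following sketch (ours) evaluates the sum-of-squares function in (19) numerically; it reuses the feature_map helper from the sketch in Section III-A and extracts the null space from an SVD of the feature matrix:

```python
import numpy as np

def sos_polynomial(points, K1, K2, grid_size=128, tol=1e-8):
    """Evaluate gamma(x) = sum_i |mu_i(x)|^2 over a grid, where the mu_i are
    the bandlimited functions whose coefficient vectors span the null space
    of the feature matrix; gamma vanishes only on their common zeros, i.e.
    on the curve.
    """
    Phi = feature_map(points, K1, K2)       # helper from the Section III-A sketch
    U, s, Vh = np.linalg.svd(Phi)
    r = int(np.sum(s > tol * s[0]))         # numerical rank of Phi
    null_vectors = U[:, r:]                 # coefficients of the mu_i

    x = np.linspace(0, 1, grid_size, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")
    grid = np.stack([X.ravel(), Y.ravel()], axis=1)
    mu = null_vectors.conj().T @ feature_map(grid, K1, K2)
    return (np.abs(mu) ** 2).sum(axis=0).reshape(grid_size, grid_size)
```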

We demonstrate the above result in Fig. 5. We considered the sampling of the same curve illustrated in Fig. 3, with the exception that we over-estimated the support to be , as opposed to the true support of . We considered 100 random samples, which satisfies the sampling conditions in Proposition 6. We obtain a rank of , as predicted by Proposition 7. We show three of the annihilating functions in the first three columns of Fig. 5. We note that all of these functions are valid annihilating functions, but possess additional zeros. By contrast, the sum-of-squares polynomial shown on the right uniquely specifies the curve.

Fig. 7: Illustration of the bandlimited level set curve representation. Panels (a)-(c): three example segmentations; panels (d)-(f): the corresponding sum-of-squares (SOS) polynomials. The curve is specified by the points selected by the user, denoted by the green circles. The SOS polynomial specifying the null space is evaluated from the points as shown in (19), where correspond to the inverse Fourier transform of . The zero sets of the SOS polynomials are indicated by the red curves in the images. Note that the representation can capture curves with arbitrary topology. Also note that the points need not be ordered in any fashion to specify the curve, unlike the parametric curves used in snakes [3].

We illustrate the utility of the above results in specifying an arbitrary curve from a few unorganized points in Fig. 7. In each of the three examples, the user clicked a few points, which are shown by the green circles. The rank of the feature matrix is determined, followed by the evaluation of the sum-of-squares function as described in (19). The curve specified by the zero set is illustrated by the red curve in these examples.

III-E Application of the curve representation in segmentation

The Mumford-Shah functional is a popular formulation for segmenting objects into piecewise constant regions. It approximates an image by a piecewise constant function

(20)

in the sense, where are the regions and are the constants. The bounding curve is denoted by . Different penalties, including the length of or its smoothness, are imposed to regularize the optimization problem. We propose to represent as the zero level set of a bandlimited function specified by (3), as in [16]. In this case, the piecewise constant function satisfies , which can be expressed in matrix form as

(21)

where is a block Toeplitz 2-D convolution matrix and is the vectorized version of . When the bandwidth is over-estimated, has multiple linearly independent null-space vectors and hence the matrix is low-rank. Note that the rank of the matrix can be considered a surrogate for the complexity of the curve . We hence formulate the segmentation task as a low-rank optimization problem analogous to [38]:

(22)

where is the original image. Note that as , approaches a rank matrix. Once is obtained, the sum-of-squares function of the null space of specifies the curve , and is the piecewise constant approximation. We use an alternating minimization strategy, as reported in [38], to solve the above optimization scheme.
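
To make the lifting in (21) concrete, the sketch below (our illustration; the helper name is hypothetical) forms the block Toeplitz matrix whose product with a vectorized filter implements the valid 2-D convolution of the image's Fourier coefficients with the filter. The rank of this matrix is the surrogate for curve complexity that (22) penalizes:

```python
import numpy as np

def lifted_toeplitz(fhat, filt_shape):
    """Block Toeplitz 2-D convolution matrix: each row holds a (flipped)
    window of Fourier coefficients, so that
    lifted_toeplitz(fhat, (m, n)) @ c.ravel() equals the valid part of the
    2-D convolution of fhat with the m x n filter c.
    """
    M, N = fhat.shape
    m, n = filt_shape
    rows = [fhat[i:i + m, j:j + n].ravel()[::-1]   # flip window for convolution
            for i in range(M - m + 1)
            for j in range(N - n + 1)]
    return np.array(rows)   # shape ((M-m+1)*(N-n+1), m*n)
```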

We demonstrate the preliminary utility of this scheme in Fig. 8, where the alternating algorithm is initialized with and iterated until convergence. The parameter is set to a high value (e.g., in our experiments) to enforce the rank constraint. The rows correspond to the results with different values of the rank. The variation of the shape with rank in Fig. 8 shows that the rank of serves as a good surrogate for the complexity of the curve. We note that these experiments are meant to demonstrate the preliminary utility of the curve representation in segmentation. Several image energies, including region-based formulations, are available, which offer significantly better performance than the simple edge-based formulation considered in this preliminary work. We note that most of these energies can be incorporated easily into this formulation. However, developing this framework into a state-of-the-art segmentation scheme is beyond the scope of this paper and will be dealt with elsewhere.

IV Recovery from noisy samples

We now consider the case where we have noisy measurements of points lying on the curve , where is represented as a linear combination of basis functions as in (4). When the measurements are noisy, we propose to denoise them using the low-rank property of the features discussed in Proposition 7.

Fig. 8: Illustration of edge-based segmentation using the bandlimited curve model in (22). Panels (a)-(i): the recovered curves, the SOS function, and the final piecewise constant image, for ranks 400, 500, and 650 (one row per rank). The image was initialized as , followed by structured low-rank optimization as described in (22). We note that the optimization scheme is capable of identifying the cells, even though no curve initialization was provided. The rows correspond to the results with different choices of the rank, specified by the parameter in (22); the columns correspond to the final segmentation, the bandlimited level set function, and the final constant image, respectively. We assumed a bandwidth that is five-fold smaller than the size of the image () to construct the lifted Toeplitz matrix ; the number of columns in the matrix is . We note that the rank of the matrix is a good surrogate for the complexity of the curve. Specifically, the curves are simpler when the rank of the matrix is 400, resulting in two regions merging together. By contrast, when the rank is 650, the curves are significantly more complex, resulting in over-segmentation. A rank of 500 seems to be a good tradeoff. Note that this is a simple example to illustrate the utility of the representation; the framework can be enhanced using more powerful region- and edge-based energies available in the literature [3, 39, 40].

IV-A Relation to kernel methods

Proposition 7 indicates that we can solve inverse problems by enforcing a low-rank constraint on the feature matrix. However, the size of the feature matrix grows with the dimensionality of the ambient space as well as the size of the over-estimated filter support . Thus, in practice, it might be infeasible to form the feature matrix. However, the Gram matrix given by

(23)

is of size , where is the number of points. Note that the complexity of an algorithm that depends on the Gram matrix is independent of the dimension of the ambient space and the chosen filter support . We note that this approach is similar to the “kernel-trick” used in various machine learning applications. Under our assumed model, the entries of this Gram matrix are given by

(24)

When is a centered cube in , is a Dirichlet function. Note that the entries of can be evaluated without explicitly evaluating the feature matrix. The width of the Dirichlet function is dependent on the Fourier support . The kernel matrix satisfies , where is given by (16). The relationship (23) implies that the kernel matrix is low-rank, which is a property that is widely used in kernel low-rank methods.
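
A sketch (ours) of the kernel computation in (24) for a centered rectangular support: the Gram entries reduce to a separable Dirichlet kernel of the pairwise differences, so the feature matrix never has to be formed. Its numerical rank agrees with that of the feature matrix, as implied by (23):

```python
import numpy as np

def dirichlet_gram(points, K1, K2):
    """Gram matrix K_ij = <phi(x_i), phi(x_j)> = sum over k in Lambda of
    exp(j 2 pi k . (x_i - x_j)): a product of 1-D Dirichlet kernels."""
    def dirichlet_1d(t, K):
        # sum_{k=-K}^{K} exp(j 2 pi k t) = sin((2K+1) pi t) / sin(pi t),
        # with the limiting value 2K+1 at t = 0
        den = np.sin(np.pi * t)
        safe = np.where(np.abs(den) < 1e-12, 1.0, den)
        val = np.sin((2 * K + 1) * np.pi * t) / safe
        return np.where(np.abs(den) < 1e-12, 2 * K + 1, val)
    d = points[:, None, :] - points[None, :, :]      # pairwise differences
    return dirichlet_1d(d[..., 0], K1) * dirichlet_1d(d[..., 1], K2)
```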

A popular practical choice of kernel is the Gaussian function, or equivalently the periodized Gaussian function when the domain is restricted to . Note that the Gaussian kernel is qualitatively similar to the Dirichlet kernel considered above, where the width of the Gaussian is a parameter similar to the size of the support . We note that the Gaussian kernel function is less oscillatory and is isotropic, which makes it more attractive than the Dirichlet kernel in applications. The Gaussian kernel corresponds to feature maps of the form

(25)

as . Note that this is the Fourier transform of a shifted Gaussian; this setting corresponds to the case where the level set function is being expressed as a shift invariant linear combination of Gaussian functions on a very fine grid. Since the Gaussian kernel matrix is theoretically full rank, the theoretical analysis in the previous section is not directly applicable to Gaussian kernels in the strict sense. However, we observe that the Fourier series coefficients of a Gaussian function may be approximated to be zero outside , which translates to . This implies that the Gaussian kernel matrix can be safely approximated to be low-rank, when is sufficiently high; this corresponds to a more localized kernel in space.
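
The snippet below (our illustration; the circle and the kernel widths are arbitrary) demonstrates this numerically: for points on a curve, a wider Gaussian has fewer significant Fourier coefficients, so the kernel matrix has a lower effective rank:

```python
import numpy as np

# Points on a circle; the Gaussian kernel matrix is numerically low-rank,
# with the effective rank controlled by the kernel width sigma.
t = np.linspace(0, 1, 200, endpoint=False)
pts = 0.5 + 0.3 * np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)], axis=1)
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
for sigma in [0.05, 0.1, 0.2]:
    K = np.exp(-d2 / (2 * sigma ** 2))
    evals = np.linalg.eigvalsh(K)                   # ascending eigenvalues
    rank = int((evals > 1e-6 * evals[-1]).sum())    # numerical rank
    print(f"sigma = {sigma}: effective rank = {rank}")
```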

Fig. 9: Illustration of the denoising of 2-D points on a curve from 821 samples using (26). Panels (a)-(d) and (e)-(h) (top rows): the original samples, the noisy samples, the first iteration of (26), and the 50th iterate of (26), respectively. Panels (i)-(l), (m)-(p), and (q)-(t) (bottom three rows): the eigenvectors corresponding to the 2nd, 22nd, and 32nd eigenvalues, sorted according to increasing magnitude; the columns correspond to the eigenvectors estimated from the original data using the proposed approach, from the noisy data using the exponential approach [14], using the proposed approach with one iteration, and using the proposed approach with 50 iterations. Note that the first iteration of the algorithm is equivalent to conventional Laplacian smoothing, where the Laplacian is estimated from the noisy data. The kernel low-rank algorithm provides good recovery of the points with 50 iterations, and also provides a robust approach to estimating the Laplacian from noisy data. Note that the eigenvectors are smooth functions on the curve, analogous to Fourier exponentials, with the magnitude of the eigenvalues characterizing the smoothness. The eigenvectors estimated using the proposed approach vary considerably more smoothly on the curve than the ones estimated using the exponential approach or the first iteration, and are similar to the ones estimated from the original data; note that we do not expect the eigenvectors to match exactly. By contrast, the 22nd eigenvector estimated using the exponential approach is highly localized and not smooth (see the red/blue isolated points). Likewise, the 32nd eigenvectors estimated using the exponential approach and the proposed approach with one iteration appear more localized than the one obtained with 50 iterations.

In the next subsection, we describe how to use Proposition 7 without explicitly forming the feature matrix , computing only its Gram matrix instead.

IV-B Denoising of point clouds using nuclear norm minimization

With the addition of noise, the points deviate from the zero level set of . A high bandwidth potential function is needed to represent the noisy curve. Let the noisy measurements of the matrix be given by . We propose to use the nuclear norm of the feature matrix as a regularizer in the recovery of the points from noisy measurements:

(26)

We use an iteratively reweighted least-squares (IRLS) approach, where the nuclear norm is approximated as:

(27)

where and is the Gaussian kernel. Here, is a small constant added to ensure that the inverse is well-defined.

The IRLS algorithm alternates between the following two steps:

(28)

where

(29)

Here, , and is a constant.

Note from (28) that updating involves the solution of a non-linear system of equations. We propose to linearize the gradient of the cost function in (28) with respect to .

(30)

Linearizing the gradient with respect to , we obtain:

(31)

where is the entry of a matrix

(32)

In matrix form, the gradient can be rewritten as , where is computed from the weight matrix as

(33)

Here, is a diagonal matrix with elements defined as .

This results in the following equivalent optimization problem for the estimation of at the iteration, which can be solved analytically:

(34)

Thus, we alternate between the estimation of and until convergence to solve (26). We note that this optimization algorithm is non-convex and does not come with any convergence guarantees. However, with a reasonable initialization (e.g., ), the algorithm yielded good results in practice.
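
A minimal sketch of the resulting IRLS loop (ours). We assume the Gaussian kernel, the IRLS weight P = (K + eps I)^(-1/2) from (27), elementwise graph weights given by the product of P and K as one reading of (32), and the linearized quadratic step (34); the exact weight matrix used in the paper may differ. Note that the first pass reduces to conventional Laplacian smoothing with a kernel estimated from the noisy data, consistent with the discussion of Fig. 9:

```python
import numpy as np

def denoise_points(Y, sigma=0.1, lam=1.0, eps=1e-6, n_iter=50):
    """Kernel low-rank denoising of points, a sketch of (26)-(34).

    Y: (2, N) noisy points. Each iteration re-estimates the Gaussian kernel
    of the current iterate, forms the IRLS weights, and solves the quadratic
    surrogate min_X ||X - Y||^2 + lam * tr(X L X^T) analytically.
    """
    X = Y.copy()
    N = Y.shape[1]
    for _ in range(n_iter):
        d2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
        K = np.exp(-d2 / (2 * sigma ** 2))
        w, V = np.linalg.eigh(K + eps * np.eye(N))
        P = (V * w ** -0.5) @ V.T            # (K + eps I)^(-1/2), cf. (27)
        W = P * K                            # elementwise weights, cf. (32)
        L = np.diag(W.sum(axis=1)) - W       # graph Laplacian, cf. (33)
        X = np.linalg.solve(np.eye(N) + lam * L, Y.T).T   # quadratic step (34)
    return X
```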

We illustrate the utility of the kernel low-rank formulation in denoising 2-D points in Fig. 9. The first column corresponds to the noiseless samples, drawn from a TigerHawk logo image. We consider the recovery of this shape from its noisy samples, shown in the second column. Specifically, we added random Gaussian noise to each of the samples. The recovery of the points using the conventional Laplacian flow is shown in the third column. Here, the kernel entries are evaluated using the exponential kernel as in [14], which corresponds to the first iteration of our algorithm. Iterating the optimization process yields improved results, as shown in the last column. We also show the ability of the algorithm to yield improved Laplacian matrices, as seen from the eigenvectors of the Laplacian matrices in rows 3-5. Note that the eigenvectors of graph Laplacian matrices are smooth functions on the shapes, analogous to Fourier exponentials on the real plane; this property is exploited extensively in spectral graph theory [41, 42], and the spectral features are often used as fingerprints for shape [43, 44]. We note that the second eigenvectors of the Laplacian matrices estimated from the original data, the noisy data, the data smoothed using exponential kernels (first iteration of our algorithm), and the 50th iteration of our algorithm using (33) agree well. By contrast, the 22nd eigenvector estimated from the noisy data differs significantly from the original one; it shows a bright spot, indicating a highly localized function on the shape. We note that the 32nd eigenvector of the proposed scheme is still comparable in smoothness to the corresponding original eigenvector, while both of the others are more localized. This shows that the Laplacian features estimated using the proposed scheme are more reliable predictors of the original shape.

V Conclusion

We introduced a continuous domain framework for the recovery of points on a bandlimited curve. We relied on annihilation relations between the complex exponential features of the points on the curve and the coefficient vector of the bandlimited representation to recover the curve. We introduced sampling conditions that guarantee the recovery of the curve from a finite number of samples. We also introduced efficient algorithms for the recovery of the points when they are corrupted by noise. We observe that the Laplacian matrix estimated during the denoising process is a more accurate representation of the shape geometry than the ones estimated from noisy data using conventional methods. The preliminary utility of the parametric level set representation is demonstrated in the context of image segmentation.

VI Appendix

VI-A Proof of Lemma 3

We first state the well-known result for complex polynomials, which we extend to the bandlimited setting.

Lemma 8.

[45] Let and be two nonconstant polynomials in of degrees and respectively. If and have no common component, then the system of equations

(35)

has at most solutions.

Lemma 3 can be proved by simply substituting and in Lemma 8. Specifically, the degrees of and are and , respectively. Hence, the maximum number of solutions to (9) is given by .

VI-B Proof of Proposition 4

Proof.

The Fourier coefficients of are support limited within , which is the minimal support. Let be another bandlimited polynomial whose Fourier coefficients are support limited within and which satisfies for . When the number of samples satisfies (10), this is only possible if is a factor of , according to Bézout’s inequality. Thus, must be a factor of . Since is irreducible, this implies that it is the unique bandlimited irreducible polynomial satisfying . ∎

VI-C Proof of Proposition 5

Proof.

The polynomial is represented in terms of its irreducible factors as:

(36)

where the bandwidth of is .

Let be another polynomial with bandwidth satisfying for . Consider one of the irreducible sub-curves , which is sampled at points satisfying (11). According to Lemma 3, both and can be simultaneously zero at these sampling locations only if and have a common factor. Since is irreducible, this implies that is a factor of . Repeating this line of reasoning for all factors , we conclude that divides . Since both and have the same bandwidth, the only possibility is that is a scalar multiple of . This implies that the curve can be uniquely recovered if (11) is satisfied.

The total number of points to be sampled is .

The support of the Fourier coefficients of can be expressed in terms of the supports of . Using convolution properties, we get: and . Thus, and it can be concluded that . ∎

VI-D Proof of Proposition 6

Proof.

Following the steps of the proof for Proposition 5, we can conclude that is a factor of . Since , it follows that , where is some arbitrary function such that is bandlimited to . ∎

VI-E Proof of Proposition 7

Proof.

Let be the minimal filter of bandwidth , associated with the polynomial . We define the following filters supported in for all .

(37)

are the Fourier coefficients of , and are all null-space vectors of the feature matrix . The number of such filters is . Hence, we get the rank bound: .

If the sampling conditions of Proposition 6 are satisfied, then all the polynomials corresponding to null-space vectors of are of the form: . Alternatively, in the Fourier domain, the filters are of the form:

(38)

where are the Fourier coefficients of the arbitrary polynomial . Thus, all the null-space filters can be represented in terms of the basis set . This leads to the relation: . ∎

References

  • [1] Mario Botsch, Mark Pauly, Leif Kobbelt, Pierre Alliez, Bruno Lévy, Stephan Bischoff, and Christian Rössl, “Geometric modeling based on polygonal meshes,” ACM SIGGRAPH 2007 courses - SIGGRAPH ’07, p. 1, 2007.
  • [2] Ramesh Jain, Rangachar Kasturi, and Brian G. Schunck, “Machine Vision Chapter 13 Curves and Surfaces,” Mach. Vis., pp. 365–405, 1995.
  • [3] Mathews Jacob, Thierry Blu, and Michael Unser, “Efficient energies and algorithms for parametric snakes,” IEEE Trans. Image Process., vol. 13, no. 9, pp. 1231–1244, 2004.
  • [4] Virginie Uhlmann, Julien Fageot, and Michael Unser, “Hermite Snakes With Control of Tangents,” IEEE Trans. Image Process., vol. 25, no. 6, pp. 2803–2816, 2016.
  • [5] Ricard Delgado-gonzalo, Virginie Uhlmann, Daniel Schmitter, and Michael Unser, “Snakes on a Plane,” IEEE Signal Process. Mag., vol. 32, no. January, pp. 41–48, 2015.
  • [6] Anais Badoual, Daniel Schmitter, Virginie Uhlmann, and Michael Unser, “Multiresolution Subdivision Snakes,” IEEE Trans. Image Process., vol. 26, no. 3, pp. 1188–1201, 2017.
  • [7] Martin Burger and Stanley J Osher, “A survey on level set methods for inverse problems and optimal design,” Eur. J. Appl. Math., vol. 16, no. 02, pp. 263–301, 2005.
  • [8] Alireza Aghasi, Misha Kilmer, and Eric L. Miller, “Parametric Level Set Methods for Inverse Problems,” SIAM J. Imaging Sci., vol. 4, no. 2, pp. 618–650, 2011.
  • [9] Olivier Bernard, Denis Friboulet, Philippe Thévenaz, and Michael Unser, “Variational B-spline level-set: a linear filtering approach for fast deformable model evolution.,” IEEE Trans. Image Process., vol. 18, no. 6, pp. 1179–1191, 2009.
  • [10] Greg Turk and James F O’Brien, “Shape transformation using variational implicit functions,” Proc. 26th Annu. Conf. Comput. Graph. Interact. Tech. SIGGRAPH 99, vol. 33, no. Annual Conference Series, pp. 335–342, 1999.
  • [11] Keenan Crane, Ulrich Pinkall, and Peter Schröder, “Robust fairing via conformal curvature flow,” ACM Trans. Graph., vol. 32, no. 4, pp. 1, 2013.
  • [12] Andrew Nealen, Takeo Igarashi, Olga Sorkine, and Marc Alexa, “Laplacian mesh optimization,” Siggraph, p. 381, 2006.
  • [13] Bernhard Schölkopf and Alexander J Smola, Learning with kernels: support vector machines, regularization, optimization, and beyond, MIT press, 2002.
  • [14] Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani, “Manifold regularization: A geometric framework for learning from labeled and unlabeled examples,” J. Mach. Learn. Res., vol. 7, no. 2006, pp. 2399–2434, 2006.
  • [15] Emmanuel J. Candès and Carlos Fernandez-Granda, “Towards a Mathematical Theory of Super-resolution,” Commun. Pure Appl. Math., vol. 67, no. 6, pp. 906–956, 2014.
  • [16] Greg Ongie and Mathews Jacob, “Off-the-Grid Recovery of Piecewise Constant Images from Few Fourier Samples,” SIAM J. Imaging Sci., 2016.
  • [17] Geoffrey Schiebinger, Elina Robeva, and Benjamin Recht, “Superresolution without separation,” in Comput. Adv. Multi-Sensor Adapt. Process. (CAMSAP), 2015 IEEE 6th Int. Work. IEEE, 2015, pp. 45–48.
  • [18] David I. Shuman, Sunil K. Narang, Pascal Frossard, Antonio Ortega, and Pierre Vandergheynst, “The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains,” IEEE Signal Process. Mag., vol. 30, no. 3, pp. 83–98, 2013.
  • [19] Benjamin F. Logan, “Information in the zero crossings of bandpass signals,” Bell Syst. Tech. J., vol. 56, no. 4, pp. 487–510, 1977.
  • [20] Avideh Zakhor and Alan V. Oppenheim, “Reconstruction of Two-Dimensional Signals from Level Crossings,” Proc. IEEE, vol. 78, no. 1, pp. 31–55, 1990.
  • [21] Stephane Mallat and Sifen Zhong, “Characterization of Signals From Multiscale Edges,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 14, no. 7, pp. 710–732, 1992.
  • [22] Gregory Ongie and Mathews Jacob, “A Fast Algorithm for Convolutional Structured Low-Rank Matrix Recovery,” IEEE Trans. Comput. Imaging, vol. 3, no. 4, pp. 535–550, dec 2017.
  • [23] G. Ongie and M. Jacob, “Recovery of piecewise smooth images from few Fourier samples,” in Proc. Sampling Theory and Applications (SampTA), 2015.
  • [24] Y.Q. Mohsin, G. Ongie, and M. Jacob, “Iterative Shrinkage Algorithm for Patch-Smoothness Regularized Medical Image Recovery,” IEEE Trans. Med. Imaging, vol. 34, no. 12, 2015.
  • [25] Greg Ongie, Rebecca Willett, Robert D Nowak, and Laura Balzano, “Algebraic Variety Models for High-Rank Matrix Completion,” in Proc. 34th Int. Conf. Mach. Learn., 2017, pp. 2691–2700.
  • [26] Greg Ongie, Sampurna Biswas, and Mathews Jacob, “Convex Recovery of Continuous Domain Piecewise Constant Images From Nonuniform Fourier Samples,” IEEE Trans. Signal Process., vol. 66, no. 1, pp. 236–250, 2017.
  • [27] Mitra Fatemi, Arash Amini, and Martin Vetterli, “Sampling and Reconstruction of Shapes with Algebraic Boundaries,” IEEE Trans. Signal Process., vol. 64, no. 22, pp. 5807–5818, 2016.
  • [28] Martin Vetterli, Pina Marziliano, and Thierry Blu, “Sampling signals with finite rate of innovation,” Signal Process. IEEE Trans., vol. 50, no. 6, pp. 1417–1428, 2002.
  • [29] Thierry Blu, Pier-Luigi Dragotti, Martin Vetterli, Pina Marziliano, and Lionel Coulot, “Sparse sampling of signal innovations,” Signal Process. Mag. IEEE, vol. 25, no. 2, pp. 31–40, 2008.
  • [30] Pier Luigi Dragotti, Martin Vetterli, and Thierry Blu, “Sampling moments and reconstructing signals of finite rate of innovation: Shannon meets Strang–Fix,” Signal Process. IEEE Trans., vol. 55, no. 5, pp. 1741–1757, 2007.
  • [31] Hanjie Pan, Thierry Blu, and Pier Luigi Dragotti, “Sampling curves with finite rate of innovation,” IEEE Transactions on Signal Processing, vol. 62, no. 2, pp. 458–471, 2014.
  • [32] Pancham Shukla and Pier Luigi Dragotti, “Sampling schemes for multidimensional signals with finite rate of innovation,” IEEE Transactions on Signal Processing, vol. 55, no. 7, pp. 3670–3686, 2007.
  • [33] Sunrita Poddar and Mathews Jacob, “Recovery of noisy points on band-limited surfaces: Kernel methods re-explained,” arXiv preprint arXiv:1801.00890, 2018.
  • [34] Sunrita Poddar and Mathews Jacob, “Recovery of point clouds on surfaces: Application to image reconstruction,” in Biomedical Imaging (ISBI 2018), 2018 IEEE 15th International Symposium on. IEEE, 2018, pp. 1272–1275.
  • [35] Michael Unser, “Sampling—50 years after Shannon,” Proceedings of the IEEE, vol. 88, no. 4, pp. 569–587, 2000.
  • [36] Catherine Kublik and Richard Tsai, “Integration over curves and surfaces defined by the closest point mapping,” Res. Math. Sci., vol. 3, no. 1, 2016.
  • [37] Shay Maymon and Alan V. Oppenheim, “Sinc interpolation of nonuniform samples,” IEEE Trans. Signal Process., vol. 59, no. 10, pp. 4745–4758, 2011.
  • [38] J. P. Haldar, “Low-Rank Modeling of Local k-Space Neighborhoods (LORAKS) for Constrained MRI,” IEEE Trans. Med. Imaging, vol. 33, no. 3, pp. 668–681, Mar. 2014.
  • [39] D. Schmitter, R. Delgado-Gonzalo, and M. Unser, “A family of smooth and interpolatory basis functions for parametric curve and surface representation,” Appl. Math. Comput., vol. 272, pp. 53–63, 2016.
  • [40] Anaïs Badoual, Daniel Schmitter, and Michael Unser, “An Inner-Product Calculus for Periodic Functions and Curves,” IEEE Signal Process. Lett., vol. 23, no. 6, pp. 878–882, 2016.
  • [41] Antonio Ortega, Pascal Frossard, Jelena Kovacevic, Jose M. F. Moura, and Pierre Vandergheynst, “Graph Signal Processing: Overview, Challenges, and Applications,” Proc. IEEE, vol. 106, no. 5, pp. 808–828, 2018.
  • [42] David K. Hammond, Pierre Vandergheynst, and Rémi Gribonval, “Wavelets on graphs via spectral graph theory,” Appl. Comput. Harmon. Anal., vol. 30, no. 2, pp. 129–150, 2011.
  • [43] Martin Reuter, Franz-Erich Wolter, and Niklas Peinecke, “Laplace-spectra as fingerprints for shape matching,” SPM ’05 Proc. 2005 ACM Symp. Solid Phys. Model., vol. 1, no. 212, pp. 101–106, 2005.
  • [44] Martin Reuter, Franz Erich Wolter, and Niklas Peinecke, “Laplace-Beltrami spectra as ’Shape-DNA’ of surfaces and solids,” CAD Comput. Aided Des., vol. 38, no. 4, pp. 342–366, 2006.
  • [45] I. R. Shafarevich, Basic Algebraic Geometry 1: Varieties in Projective Space (Third edition), Springer-Verlag, 2013.