Kernels for linear time invariant system identification

Francesco Dinuzzo Max Planck Institute for Intelligent Systems, Spemannstrasse 38, 72076 Tübingen, Germany (fdinuzzo@tuebingen.mpg.de).
Abstract

In this paper, we study the problem of identifying the impulse response of a linear time invariant (LTI) dynamical system from the knowledge of the input signal and a finite set of noisy output observations. We adopt an approach based on regularization in a Reproducing Kernel Hilbert Space (RKHS) that takes into account both continuous and discrete time systems. The focus of the paper is on designing spaces that are well suited for temporal impulse response modeling. To this end, we construct and characterize general families of kernels that incorporate system properties such as stability, relative degree, absence of oscillatory behavior, smoothness, or delay. In addition, we discuss the possibility of automatically searching over these classes by means of kernel learning techniques, so as to capture different modes of the system to be identified.

Key words. System Identification, Regularization, Reproducing Kernel Hilbert Spaces

AMS subject classifications. 93B30, 47A52, 46E22

1 Introduction

Identification of LTI systems is a fundamental problem in science and engineering [1], which is most commonly solved by fitting families of parametric models via Maximum Likelihood and choosing the final model by means of statistical model selection techniques. While such an approach is well established and has proven successful in a variety of circumstances, several works have shown that even better performance can be obtained by balancing the data fitting with proper complexity control or regularization. For instance, a recent line of research, which adopts a state-space perspective, promotes the use of penalties based on the nuclear norm that encourage a low McMillan degree, see e.g. [2, 3]. Another line, following [4], adopts an approach based on Bayesian estimation of Gaussian Processes [5], where a suitable prior is defined over possible impulse responses of the system. Equivalently, this amounts to penalizing the data fit with a kernel-based regularization of the impulse response [6], which naturally poses the question of which kernels are best suited for impulse response identification.

In this paper, we propose a systematic study of the problem of kernel selection for LTI system identification from a functional analytic perspective. We start by formulating LTI system identification as an inverse problem to be solved by means of regularization in a function space, in the spirit of [7, 8]. Such a point of view provides a unified algorithmic framework that takes into account both discrete- and continuous-time systems. Moreover, it permits going beyond the standard assumption of uniform time sampling, allowing for arbitrary sampling of the output signal. Knowledge about properties of the system can be incorporated naturally by exploiting standard characterizations of the impulse response. Without loss of generality, we focus on SISO (single input single output) LTI system identification, where the goal is to reconstruct a scalar impulse response function from the knowledge of the input signal and a finite set of output measurements. Nevertheless, the ideas presented in this paper are general enough to be extendable to more complex and structured problems.

A central contribution of this paper is to characterize families of function spaces that encode those properties that are specific to impulse responses of dynamical systems, such as causality, stability, absence of oscillations, relative degree, or delay. In particular, we show how these structural properties can be enforced by designing suitable kernel functions. Importantly, we also provide theoretical support to the empirical evidence that standard translation invariant kernels such as the Gaussian RBF are not well-suited for modeling impulse responses of stable dynamical systems. Finally, we also discuss the possibility of optimizing the kernel by means of methodologies such as multiple kernel learning (MKL) [9, 10], suggesting that these techniques can be rather appropriate in the context of system identification.

2 Kernel based regularization for LTI system identification

In order to handle both continuous- and discrete-time systems in a unified framework, we refer to an abstract time set T, that is, a subgroup of (R, +). The cases T = R and T = Z correspond to continuous-time and discrete-time systems, respectively.

A dynamical system is called linear time invariant (LTI) if, for any input signal u, the output signal y is generated according to a convolution equation

y(t) = (u ∗ g)(t) = ∫_T u(t − τ) g(τ) dτ,

where g is the impulse response. Depending on the nature of the time set, the convolution has to be interpreted as an integral, a series, or simply a sum.

In the following, we study the problem of identifying the impulse response, assuming availability of the input signal u and a finite dataset of (noisy) output measurement pairs (t_i, y_i), i = 1, …, ℓ.

The problem can be tackled by means of regularization techniques, based on minimization problems of the form

min_{g ∈ H}  Σ_{i=1}^{ℓ} V(y_i, (u ∗ g)(t_i)) + λ ‖g‖²_H,        (2.1)

where H is a Hilbert space of functions g: T → R, V is a loss function that is convex and continuous w.r.t. the second argument, and λ > 0 is a regularization parameter. We assume that the input signal and the space H are such that all the point-wise evaluated convolutions are bounded linear functionals, namely, for all t_i, there exists a finite constant C_i such that

|(u ∗ g)(t_i)| ≤ C_i ‖g‖_H,   for all g ∈ H.

Then, there exist unique representers η_i ∈ H such that

(u ∗ g)(t_i) = ⟨g, η_i⟩_H,   for all g ∈ H.

In addition, one can show that any optimal solution of (2.1) can be expressed in the form

g = Σ_{i=1}^{ℓ} c_i η_i.        (2.2)

This result, known as the representer theorem, see e.g. [11, 12], shows that the regularization problem (2.1) reduces to determining a vector of coefficients whose dimension equals the number of observations. More precisely, an optimal vector of coefficients c ∈ R^ℓ can be obtained by solving the following convex optimization problem

min_{c ∈ R^ℓ}  Σ_{i=1}^{ℓ} V(y_i, (K c)_i) + λ cᵀ K c,        (2.3)

where the entries of the kernel matrix K ∈ R^{ℓ×ℓ} are given by K_ij = ⟨η_i, η_j⟩_H.

2.1 Reproducing Kernel Hilbert Spaces

Reproducing Kernel Hilbert Spaces [13] are a family of Hilbert spaces that enjoy particularly favorable properties from the point of view of regularization. The concept of RKHS is strongly linked with that of positive semidefinite kernel. Given a non-empty set X, a positive semidefinite kernel is a symmetric function K: X × X → R such that

Σ_{i=1}^{n} Σ_{j=1}^{n} a_i a_j K(x_i, x_j) ≥ 0,   for all n ∈ N, x_i ∈ X, a_i ∈ R.

An RKHS is a space of functions f: X → R such that point-wise evaluation functionals are bounded. This means that, for any x ∈ X, there exists a finite constant C_x such that

|f(x)| ≤ C_x ‖f‖,   for all f in the space.

Given an RKHS, it can be shown that there exists a unique symmetric and positive semidefinite kernel function K (called the reproducing kernel), such that the so-called reproducing property holds:

f(x) = ⟨f, K_x⟩,   for all x ∈ X and all f in the space,

where the kernel sections K_x are defined as

K_x(y) = K(x, y),   for all y ∈ X.

The reproducing property states that the representers of point-wise evaluation functionals coincide with the kernel sections. Starting from the reproducing property, it is also easy to show that the representer of any bounded linear functional L is the function η such that

η(x) = L K_x,   for all x ∈ X.

Therefore, in a RKHS, the representer of any bounded linear functional can be obtained explicitly in terms of the reproducing kernel.

With reference to the problem (2.1), we are interested in estimating functions defined over the time set T. By expressing the representers in terms of the kernel, the optimal solution (2.2) can be rewritten as

g(t) = Σ_{i=1}^{ℓ} c_i ∫_T u(t_i − τ) K(τ, t) dτ.

The entries of the kernel matrix that appears in problem (2.3) can be computed as

K_ij = ∫_T ∫_T u(t_i − τ) K(τ, σ) u(t_j − σ) dτ dσ.

For discrete-time problems, the integrals reduce to sums, whereas for continuous-time problems they can be approximated by means of numerical integration techniques. For a variety of kernel functions, the continuous-time integrals can even be computed in closed form, provided that the input signal is known to have a sufficiently simple expression.
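For a discrete-time system, the whole pipeline reduces to finite-dimensional linear algebra. The following sketch (with NumPy, a squared loss, a truncation horizon, and an illustrative rank-one exponential kernel; all of these are choices made here for illustration, not prescriptions of the paper) builds the kernel matrix of problem (2.3), solves for the coefficients, and reconstructs the impulse response from the representers:

```python
import numpy as np

def causal_exp_kernel(s, t, beta=0.5):
    # illustrative causal kernel K(s, t) = H(s) H(t) exp(-beta (s + t))
    return (s >= 0) * (t >= 0) * np.exp(-beta * (s + t))

T = 50                                    # truncation horizon for the sums
taus = np.arange(T)
u = np.ones(T)                            # step input
obs_times = np.array([5, 10, 20, 40])     # arbitrary (non-uniform) sampling

# U[i, tau] = u(t_i - tau), with u taken as zero outside its support
shift = obs_times[:, None] - taus[None, :]
U = np.where((shift >= 0) & (shift < T), u[np.clip(shift, 0, T - 1)], 0.0)

# noiseless outputs of a true impulse response that lies in the model space
g_true = np.exp(-0.5 * taus)
y = U @ g_true                            # y_i = sum_tau u(t_i - tau) g(tau)

# kernel matrix of (2.3): O[i,j] = sum_{tau,sigma} u(t_i-tau) K(tau,sigma) u(t_j-sigma)
Kgrid = causal_exp_kernel(taus[:, None], taus[None, :])
O = U @ Kgrid @ U.T

lam = 1e-6
c = np.linalg.solve(O + lam * np.eye(len(y)), y)   # squared-loss coefficients
g_hat = Kgrid @ U.T @ c                            # g_hat(tau) = sum_i c_i eta_i(tau)
```

Since the true impulse response lies in the (one-dimensional) space induced by this toy kernel, `g_hat` essentially recovers it for small regularization.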

3 Enforcing basic system properties

By searching over a RKHS, the impulse response to be synthesized is automatically constrained to be point-wise well-defined and bounded over compact time sets. In this section, we show how several other important properties of the impulse response can be enforced by adopting suitable kernel functions.

3.1 Causality

A dynamical system is said to be causal if the value of the output signal at a certain time instant t does not depend on values of the input at future times τ > t. Knowledge about causality is virtually always incorporated in the model of a dynamical system. This is done already when the signals are classified as inputs or outputs of the system: the value of the output signals at a certain time is not allowed to depend on the values of the input signals in the future.

For an LTI system, causality is equivalent to the vanishing of the impulse response for negative times, namely

g(t) = 0,   for all t < 0.        (3.1)

The following lemma characterizes those RKHS that contain only causal impulse responses, through a simple condition on the kernel function.

Lemma 1

The RKHS contains only causal impulse responses if and only if the reproducing kernel satisfies

K(s, t) = H(s) H(t) G(s, t),        (3.2)

where H is the Heaviside step function defined as

H(t) = 1, t ≥ 0;   H(t) = 0, t < 0,

and G is a kernel defined for non-negative times.
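The Heaviside masking of Lemma 1 is easy to apply to any base kernel, and it preserves positive semidefiniteness, since the masked quadratic form is the original one restricted to non-negative times. A quick numerical check (the Gaussian base kernel here is chosen purely for illustration):

```python
import numpy as np

def heaviside(t):
    return (t >= 0).astype(float)

def causal_kernel(s, t, base):
    # K(s, t) = H(s) H(t) G(s, t): the form of Lemma 1 for causal impulse responses
    return heaviside(s) * heaviside(t) * base(s, t)

# base kernel: Gaussian RBF (illustration only; Section 5.3 shows it is not stable)
gauss = lambda s, t: np.exp(-0.5 * (s - t) ** 2)

ts = np.linspace(-3.0, 3.0, 61)
K = causal_kernel(ts[:, None], ts[None, :], gauss)

# masking kills all entries with a negative time argument ...
assert np.all(K[ts < 0, :] == 0.0)
# ... and preserves positive semidefiniteness
assert np.linalg.eigvalsh(K).min() > -1e-9
```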

The simple statement of Lemma 1 already shows that the kernels needed for modeling impulse responses of dynamical systems are quite different from the typical kernels used for curve fitting. In order to encode a “privileged” direction in the time flow, they have to be asymmetric on the real line, and can also be discontinuous.

3.2 Stability

System stability is an important piece of information that is often known to hold for the system under study and should always be incorporated whenever available. Perhaps the most intuitive notion of stability is the so-called BIBO (Bounded Input Bounded Output) condition, which can be expressed as

‖y‖_∞ ≤ C ‖u‖_∞,

where ‖·‖_∞ denotes the supremum norm. BIBO stability entails that the output signal cannot diverge when the system is excited with a bounded input signal. While ensuring stability for methods based on state-space models requires special techniques, see e.g. [14, 15], the RKHS regularization framework can handle this constraint very easily. Indeed, it is well known that for an LTI system, BIBO stability is equivalent to integrability of the impulse response:

∫_T |g(t)| dt < +∞,   i.e.  g ∈ L¹(T).

Hence, in order to encode stability, it is sufficient to characterize those RKHS that contain only integrable impulse responses. The following Lemma gives a necessary and sufficient condition (see e.g. [16]):

Lemma 2

The RKHS is a subspace of L¹(T) if and only if

∫_T | ∫_T K(s, t) f(t) dt | ds < +∞,   for all f such that ‖f‖_∞ ≤ 1.

We can then speak of stability of the kernel, with reference to kernels that satisfy the condition of Lemma 2. It can be easily verified that integrability of the kernel is a sufficient condition for the RKHS to be a subspace of L¹(T).

Lemma 3

If ∫_T ∫_T |K(s, t)| ds dt < +∞, then the RKHS is a subspace of L¹(T).

It is worth observing that the condition of Lemma 3 is also necessary for nonnegative-valued kernels (i.e. such that K(s, t) ≥ 0 for all s, t), as can be seen by simply setting f ≡ 1 in Lemma 2.
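The sufficient condition of Lemma 3 can be probed numerically by truncating the double integral of |K| to a growing square: for an integrable kernel the truncated mass saturates, while for a non-integrable one it keeps growing. A rough sketch (the grid quadrature and the two specific kernels are illustrative choices made here):

```python
import numpy as np

def truncated_l1_mass(kernel, T, n=2000):
    # crude grid approximation of the double integral of |K| over [0, T]^2
    ts = np.linspace(0.0, T, n)
    h = ts[1] - ts[0]
    K = kernel(ts[:, None], ts[None, :])
    return np.abs(K).sum() * h * h

exp_kernel = lambda s, t: np.exp(-(s + t))                          # integrable: mass -> 1
heaviside_kernel = lambda s, t: np.ones(np.broadcast(s, t).shape)   # not integrable

masses = [(T, truncated_l1_mass(exp_kernel, T), truncated_l1_mass(heaviside_kernel, T))
          for T in (5.0, 10.0, 20.0)]
# the exponential kernel's mass saturates near 1, while the
# Heaviside kernel's mass grows like T^2
```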

3.3 Delay

Let d denote the infimum of the time instants where the impulse response is not equal to zero:

d = inf { t ∈ T : g(t) ≠ 0 }.

By causality, d has to be nonnegative. If it is strictly positive, then the system is said to exhibit an input-output delay equal to d, meaning that y(t) does not depend on u(τ) for any τ > t − d. The knowledge of the delay can be easily incorporated in the kernel function.

Lemma 4

Every impulse response in the RKHS has a delay equal to d if and only if the reproducing kernel is of the form

K(s, t) = G(s − d, t − d),

with G in the form (3.2).

If the value of d is unknown in advance, it can be treated as a kernel design parameter to be estimated from the data.
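Under the shifted form assumed above for Lemma 4, encoding a delay amounts to composing a causal kernel with a time shift, so that all kernel sections, and hence all functions in the induced space, vanish before the delay:

```python
import numpy as np

def delayed_kernel(s, t, base, d):
    # delay d encoded by shifting a causal kernel: K(s, t) = G(s - d, t - d)
    # (assumed form of Lemma 4)
    return base(s - d, t - d)

# illustrative causal base kernel
causal_exp = lambda s, t: (s >= 0) * (t >= 0) * np.exp(-(s + t))

ts = np.linspace(0.0, 5.0, 51)
K = delayed_kernel(ts[:, None], ts[None, :], causal_exp, d=2.0)

# every kernel section vanishes before the delay, hence so does
# every function in the induced RKHS
assert np.all(K[ts < 2.0, :] == 0.0)
```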

4 Kernels for continuous-time systems

In this section, we focus on properties of continuous-time systems (T = R), such as smoothness of the impulse response and relative degree, and discuss how to enforce them by choosing suitable kernels.

4.1 Smoothness

Impulse responses of continuous-time dynamical systems are typically assumed to have some degree of smoothness. Without loss of generality, we focus on systems without delay (the delayed case can be simply handled via the change of variables discussed in Lemma 4). Typically, we would like continuity of the impulse response and of a certain number of its time derivatives everywhere, with the possible exception of the origin. Regularity of the impulse response at the origin is related to the concept of relative degree, which is important enough to deserve an independent treatment (see the next subsection). Impulse responses with a high number of continuous derivatives correspond to low-pass dynamical systems that attenuate the high frequencies of the input signal. It is known that regularity of the kernel propagates to every function in the RKHS. Therefore, knowledge about smoothness of the impulse response can be directly expressed in terms of the kernel function, see e.g. [17].

Lemma 5

Let H denote an RKHS associated with a kernel in the form (3.2). If G is 2k-times continuously differentiable on (0, +∞) × (0, +∞), then the restriction of every function g ∈ H to (0, +∞) is k-times continuously differentiable. In addition, point-wise evaluated derivatives are continuous linear functionals, i.e. for all t > 0 and j ≤ k, there exists a finite constant C such that

|g^(j)(t)| ≤ C ‖g‖_H.

4.2 Relative degree

The relative degree of an LTI system is a quantity related to the regularity of the impulse response at t = 0 (or t = d in the delayed case). By causality, all the left derivatives of the impulse response (with the convention g^(0) = g) have to vanish:

g^(j)(0⁻) = 0,   for all j ≥ 0.

On the other hand, the right derivatives may well be different from zero. Assuming existence of all the necessary derivatives, the relative degree of an LTI system is a natural number r such that

g^(j)(0⁺) = 0,  j = 0, …, r − 2,    g^(r−1)(0⁺) ≠ 0.

If g^(j)(0⁺) = 0 for all j, the relative degree is undefined.

If the relative degree is r, then the (r − 1)-th derivative of the impulse response is discontinuous at the origin. Let us represent the impulse response in the form g(t) = H(t) f(t), where H is the Heaviside step function, and assume that f admits at least r right derivatives at the origin. By using distributional derivatives and properties of the convolution, one can verify that the r-th time derivative of the output is the first one that is directly influenced by the input u. Therefore, the system exhibits an input-output integral effect equivalent to a chain of integrators applied to the input of a system with relative degree one.

In many cases, knowledge about the relative degree is available thanks to simple physical considerations. Such knowledge can be enforced by designing the kernel according to the following lemma.

Lemma 6

Under the assumptions of Lemma 5, every impulse response has relative degree greater than or equal to r if and only if

∂^j K(s, t)/∂s^j |_{s=0⁺} = 0,   for all t and all j = 0, …, r − 2.        (4.1)

Hence, when the impulse response is searched within an RKHS, the relative degree of the identified system is directly related to the simple property (4.1) of the kernel function. We can therefore introduce the concept of relative degree of the kernel.

4.3 Examples

The simplest possible kernel of the form (3.2) is the Heaviside kernel

K(s, t) = H(s) H(t),

whose associated RKHS contains only step functions. This kernel has relative degree equal to one and is clearly not stable. As a second example, consider the exponential kernel

K(s, t) = H(s) H(t) e^{−β(s+t)},   β > 0.        (4.2)

This kernel is stable for any β > 0, and infinitely differentiable everywhere except over the lines s = 0 and t = 0, where it is discontinuous. Since the kernel is discontinuous at the origin, the relative degree is equal to one. The associated Hilbert space contains exponentially decreasing functions. A third example is the TC (Tuned-Correlated) kernel [4, 6] defined as

K(s, t) = H(s) H(t) e^{−β max(s, t)},   β > 0,        (4.3)

which has relative degree equal to one and can be shown to be stable (see the next section).
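The two kernels above can be compared numerically: both Gram matrices are positive semidefinite, but the exponential kernel (in the rank-one form assumed here) spans a single exponential mode, whereas the TC kernel induces a much richer space. A small sketch:

```python
import numpy as np

H = lambda t: (t >= 0).astype(float)

def exp_kernel(s, t, beta=1.0):
    # exponential kernel (4.2), in the form assumed above
    return H(s) * H(t) * np.exp(-beta * (s + t))

def tc_kernel(s, t, beta=1.0):
    # TC kernel (4.3), in the form assumed above
    return H(s) * H(t) * np.exp(-beta * np.maximum(s, t))

ts = np.linspace(0.0, 10.0, 200)
K_exp = exp_kernel(ts[:, None], ts[None, :])
K_tc = tc_kernel(ts[:, None], ts[None, :])

# both Gram matrices are positive semidefinite ...
assert np.linalg.eigvalsh(K_tc).min() > -1e-8
# ... but the exponential kernel has rank one (its RKHS is spanned by a
# single decaying exponential), while the TC kernel does not
assert np.linalg.matrix_rank(K_exp, tol=1e-8) == 1
assert np.linalg.matrix_rank(K_tc, tol=1e-8) > 1
```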

5 Kernels for stable systems

The exponential kernel defined in (4.2) satisfies the sufficient condition of Lemma 3, therefore the associated RKHS contains stable impulse responses of relative degree one (in fact, the space contains only stable exponential functions). Now, assume that a kernel K₁ with relative degree one is available. Then, we can easily generate a family of kernels of arbitrary relative degree via the following recursive procedure:

K_{j+1}(s, t) = ∫₀^s ∫₀^t K_j(σ, τ) dσ dτ,   j ≥ 1.

Unfortunately, such a procedure does not preserve stability. Consider for example the exponential kernel (4.2). Although K₁ is stable, all the other kernels K_j with j ≥ 2 do not satisfy the necessary condition of Lemma 2 (to see this, it is sufficient to choose f ≡ 1) and are therefore not BIBO stable. In the following, we describe some alternative ways of constructing families of stable kernels.

5.1 Mixtures of exponentially-warped kernels

In this subsection, we discuss a technique to construct stable kernels of any relative degree. In order to obtain stability, we adopt a change of coordinates that maps the half-line [0, +∞) into a finite interval, and then use a kernel over the unit square [0, 1] × [0, 1]. Let G₀ denote a positive semidefinite kernel on the unit square, and consider the exponential coordinate transformation

t ↦ e^{−βt},   β > 0.

Then, we can construct a class of kernels defined as in (3.2), where

G(s, t) = ∫ G₀(e^{−βs}, e^{−βt}) dμ(β),        (5.1)

and μ is a probability measure. Given G₀, the relative degree of the kernel (5.1) can be checked by means of Lemma 6. To ensure BIBO stability, the mass of μ should not be concentrated around β = 0 and the kernel G₀ must vanish sufficiently fast around the origin. The following lemma gives a sufficient condition.

Lemma 7

Let G₀ denote a kernel such that

|G₀(x, y)| ≤ C (x y)^{δ/2},   for some C, δ > 0.        (5.2)

If the support of μ does not contain the origin, then the kernel (5.1) is BIBO stable.

A particular case of this construction is the TC kernel (4.3), obtained by choosing G₀(x, y) = min(x, y), for which (5.2) is satisfied with δ = 1. Another example is the cubic stable spline kernel [4], obtained by choosing G₀ as the cubic spline kernel (which can also be seen as the covariance function of an integrated Wiener process):

G₀(x, y) = x y min(x, y)/2 − min(x, y)³/6.

A simple calculation shows that condition (5.2) is then satisfied with δ = 3. By using (5.1), we can generate a class of stable kernels of arbitrary relative degree; for instance, choosing μ as the unit mass on a certain frequency yields a stable kernel whose relative degree is determined by the choice of G₀.
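A minimal implementation of the exponentially-warped mixture construction, assuming the form sketched in (5.1), with a discrete measure supported away from β = 0 and the cubic spline kernel as base (both illustrative choices):

```python
import numpy as np

def cubic_spline_kernel(x, y):
    # cubic spline kernel on [0, 1] (one common form, used here for illustration):
    # the covariance of an integrated Wiener process
    m = np.minimum(x, y)
    return x * y * m / 2.0 - m ** 3 / 6.0

def warped_kernel(s, t, base, betas, weights):
    # mixture of exponentially-warped kernels in the spirit of (5.1):
    # sum_k w_k * base(exp(-b_k s), exp(-b_k t)), for s, t >= 0
    K = np.zeros(np.broadcast(s, t).shape)
    for b, w in zip(betas, weights):
        K += w * base(np.exp(-b * s), np.exp(-b * t))
    return K

ts = np.linspace(0.0, 8.0, 120)
# a discrete probability measure supported away from beta = 0 (for stability)
betas, weights = [0.5, 1.0, 2.0], [0.2, 0.5, 0.3]
K = warped_kernel(ts[:, None], ts[None, :], cubic_spline_kernel, betas, weights)

assert np.linalg.eigvalsh(K).min() > -1e-9   # positive semidefinite
assert np.all(np.diag(K) >= 0)
```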

5.2 Kernels for relaxation systems

Many real-world systems, such as reciprocal electrical networks whose energy storage elements are all of the same type, or mechanical systems in which inertial effects may be neglected, have the property that the impulse response never exhibits oscillations. Relaxation systems, see e.g. [18], are dynamical systems whose impulse response is a so-called completely monotone function. An infinitely differentiable function f is called completely monotone if

(−1)^n f^(n)(t) ≥ 0,   for all t > 0 and n = 0, 1, 2, …

The following characterization of completely monotone functions [19, 20] allows us to generalize the basic exponential kernel defined in (4.2).

Theorem 1 (Bernstein-Widder)

An infinitely differentiable real-valued function f defined on [0, +∞) is completely monotone if and only if there exists a non-negative finite Borel measure ν such that

f(t) = ∫₀^{+∞} e^{−βt} dν(β).

In view of this last theorem, completely monotone functions are characterized as mixtures of decreasing exponentials or, in other words, as Laplace transforms of non-negative measures. Let f denote a completely monotone function, and consider the family of functions of the form

K(s, t) = H(s) H(t) f(s + t).        (5.3)

By Theorem 1, we can easily verify that (5.3) defines a positive semidefinite kernel:

Σ_i Σ_j a_i a_j f(s_i + s_j) = ∫₀^{+∞} ( Σ_i a_i e^{−β s_i} )² dν(β) ≥ 0,   s_i ≥ 0.

Clearly, not every function in the associated RKHS is a completely monotone impulse response. However, all the kernel sections are completely monotone. Now, observe that, unless f ≡ 0, the relative degree of kernel (5.3) is always one. Indeed, if the relative degree were greater than one, then we would have K(0, t) = f(t) = 0, for all t. By using Lemma 3, we can check that, when the support of ν does not contain the origin, the kernel (5.3) is BIBO stable:

∫₀^{+∞} ∫₀^{+∞} f(s + t) ds dt = ∫₀^{+∞} dν(β)/β² < +∞.

On the other hand, if the support of ν contains the origin, we may obtain unstable kernels. For instance, when ν is the unit mass centered at the origin, we obtain the Heaviside kernel K(s, t) = H(s) H(t), which is not stable.

Finally, observe that not all the kernels of the form (5.3) that vanish when s or t tend to +∞ are stable, as shown by the following counterexample:

K(s, t) = H(s) H(t)/(1 + s + t).

This kernel is indeed of the type (5.3), since the function f(t) = 1/(1 + t) is completely monotone. However, the necessary condition of Lemma 2 is not satisfied with the constant test function equal to one:

∫₀^{+∞} ∫₀^{+∞} ds dt/(1 + s + t) = +∞.
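Kernels of this family with a discrete measure are straightforward to implement, and both claims above, positive semidefiniteness and stability when the support stays away from the origin, can be checked numerically (the decay rates and masses below are arbitrary illustrative values):

```python
import numpy as np

def cm_kernel(s, t, betas, masses):
    # kernel built from a completely monotone function f(x) = sum_k m_k exp(-b_k x),
    # evaluated at x = s + t and masked to be causal (the form assumed for (5.3))
    Hmask = (s >= 0) * (t >= 0)
    x = s + t
    f = sum(m * np.exp(-b * x) for b, m in zip(betas, masses))
    return Hmask * f

ts = np.linspace(0.0, 10.0, 100)
K = cm_kernel(ts[:, None], ts[None, :], betas=[0.3, 1.0, 3.0], masses=[1.0, 0.5, 0.2])

# positive semidefinite: f(s + t) = sum_k m_k exp(-b_k s) exp(-b_k t) is a
# non-negative combination of rank-one kernels
assert np.linalg.eigvalsh(K).min() > -1e-9

# stability via Lemma 3: the truncated L1 mass stays bounded (near
# sum_k m_k / b_k^2) because the support of the measure stays away from beta = 0
h = ts[1] - ts[0]
mass = np.abs(K).sum() * h * h          # grid approximation of the double integral
assert 5.0 < mass < 13.0
```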

5.3 Translation invariant kernels are not stable

In contrast with (5.3), consider now kernels of the form

K(s, t) = H(s) H(t) φ(s − t).        (5.4)

The following classical result [21] characterizes the class of continuous functions φ such that φ(s − t) is a positive semidefinite kernel.

Theorem 2 (Schoenberg)

Let φ denote a continuous function. Then, φ(s − t) is a positive semidefinite kernel if and only if there exists a non-negative finite Borel measure ν such that

φ(t) = ∫ cos(βt) dν(β).

Hence, when φ is the cosine transform of a non-negative measure, the functions of the form (5.4) are positive semidefinite kernels, since they are the product of the Heaviside kernel and a positive semidefinite kernel.

The family includes oscillating functions such as

φ(t) = cos(ωt),

but also widely used kernels like the Gaussian

φ(t) = e^{−t²/(2σ²)}.

In view of their popularity in regression, one might be tempted to adopt these kernels for system identification. However, it turns out that, unless φ ≡ 0, kernels defined by (5.4) are never stable.

Lemma 8

The only BIBO stable kernel of the form (5.4) is K ≡ 0.

In view of Lemma 8, we can conclude that the class of kernels (5.4) is not well-suited for the identification of stable systems.
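The divergence underlying Lemma 8 is visible numerically: for the masked Gaussian kernel, the truncated double integral grows linearly with the horizon, so the kernel, being nonnegative-valued, cannot satisfy the integrability condition that Section 3.2 shows to be necessary in that case. A rough check:

```python
import numpy as np

def gauss_ti_kernel(s, t, sigma=1.0):
    # translation invariant kernel of the form (5.4) with a Gaussian phi
    return (s >= 0) * (t >= 0) * np.exp(-0.5 * ((s - t) / sigma) ** 2)

def truncated_mass(T, n=400):
    # grid approximation of the double integral over [0, T]^2
    ts = np.linspace(0.0, T, n)
    h = ts[1] - ts[0]
    return gauss_ti_kernel(ts[:, None], ts[None, :]).sum() * h * h

masses = [truncated_mass(T) for T in (10.0, 20.0, 40.0)]
# the mass of the band around the diagonal grows roughly linearly with the
# horizon T (about sqrt(2*pi)*T), so the double integral diverges
assert masses[1] > 1.8 * masses[0] and masses[2] > 1.8 * masses[1]
```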

6 Optimizing the kernel

In the previous section, we have encountered families of kernels parameterized by a non-negative measure μ, such as (5.1) and (5.3). It is therefore natural to consider the idea of searching over these classes by optimizing the measure simultaneously with the impulse response g. Since the measure defines the kernel, and the kernel in turn identifies the RKHS, such simultaneous optimization amounts to searching for g in the union of an infinite family of spaces H_μ.

A possible approach to address such a joint optimization has been studied in [22], based on the solution of problems of the form

min_{μ ∈ P}  min_{g ∈ H_μ}  Σ_{i=1}^{ℓ} V(y_i, (u ∗ g)(t_i)) + λ ‖g‖²_{H_μ},        (6.1)

where P is the class of probability measures over a compact set. Here, the fact that we search over probability measures (instead of generic non-negative measures) is necessary to make the problem well-posed.

Remarkably, one can still characterize an optimal solution of problem (6.1) by means of a finite dimensional parametrization. First of all, for any fixed μ, the standard representer theorem applies to the inner minimization problem, so that the optimal g can be expressed in the form (2.2). In addition, it turns out that there exists an optimal discrete measure with mass concentrated at a finite number of points, and therefore there exists also an optimal kernel that can be written as a finite convex combination of basis kernels. For instance, by searching over the class (5.3) (where the support of ν is restricted to a compact interval), one obtains an optimal kernel of the type

K(s, t) = H(s) H(t) Σ_j d_j e^{−β_j (s + t)},   d_j ≥ 0,   Σ_j d_j = 1,

namely a convex combination of decreasing exponential kernels with a finite number of different decay rates. Unfortunately, the optimal decay rates solve a non-convex optimization problem where non-global local minimizers are possible.

A relaxation of problem (6.1) consists in fixing a set of parameters β_1, …, β_N over a sufficiently fine grid, and directly searching for the measure within the finite-dimensional family of discrete measures supported on the grid. The relaxed problem

min_{μ ∈ P_N}  min_{g ∈ H_μ}  Σ_{i=1}^{ℓ} V(y_i, (u ∗ g)(t_i)) + λ ‖g‖²_{H_μ}        (6.2)

boils down to a multiple kernel learning (MKL) problem, after application of the representer theorem:

min_{d ∈ Δ_N, c ∈ R^ℓ}  Σ_{i=1}^{ℓ} V(y_i, (K(d) c)_i) + λ cᵀ K(d) c,    K(d) = Σ_{j=1}^{N} d_j K_j,

where Δ_N is the standard simplex in R^N, and the K_j are the kernel matrices associated with the different basis kernels. Albeit not jointly convex, (6.2) can be solved globally for many loss functions.
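The structure of the relaxed problem can be illustrated with a deliberately simplified stand-in: two exponential basis kernels, a squared loss (for which the inner minimum over c has the closed-form value λ yᵀ(K(d) + λI)⁻¹y), and a coarse grid over the simplex in place of an MKL solver; the data here are direct samples of a fast decaying mode rather than convolution outputs.

```python
import numpy as np

def ridge_objective(K, y, lam):
    # optimal value of sum_i (y_i - (Kc)_i)^2 + lam * c' K c over c,
    # attained at c = (K + lam I)^{-1} y; the value equals lam * y'(K + lam I)^{-1} y
    c = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return lam * (y @ c), c

# two basis exponential kernels with different decay rates (toy setup)
ts = np.linspace(0.0, 5.0, 30)
K1 = np.exp(-0.5 * (ts[:, None] + ts[None, :]))   # slow mode
K2 = np.exp(-3.0 * (ts[:, None] + ts[None, :]))   # fast mode

rng = np.random.default_rng(1)
y = 2.0 * np.exp(-3.0 * ts) + 0.01 * rng.standard_normal(len(ts))  # fast mode dominates

# coarse search over the simplex d = (1 - w, w): a crude stand-in for an MKL solver
lam, best = 0.1, None
for w in np.linspace(0.0, 1.0, 21):
    K = (1 - w) * K1 + w * K2
    val, c = ridge_objective(K, y, lam)
    if best is None or val < best[0]:
        best = (val, w, c)

# the selected weight favors the kernel matching the dominant decay rate
assert best[1] > 0.5
```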

6.1 Illustrative Example

The following experiment shows that solving (6.2) is an effective way to perform continuous-time LTI system identification with generic time sampling. In addition, it illustrates the advantages of incorporating information such as the relative degree of the system in the kernel design.

Consider a bimodal impulse response with relative degree two. We generate a binary input signal whose switching instants are drawn uniformly at random from an interval (top panel of Figure 6.1). A set of time instants t_i is then drawn uniformly at random, and a vector of noisy output measurements is generated as

y_i = (u ∗ g)(t_i) + ε_i,

where the ε_i are independent zero-mean Gaussian random variables.

We run 50 independent experiments with different realizations of the output noise. For each experiment, we solve the MKL problem (6.2) with a least squares loss function using the RLS2 tool described in [23], and stable basis kernels indexed by a decay rate β_j and a relative degree r.

The rates β_j are chosen on a logarithmic scale, and the relative degree r is fixed to either 1 or 2. The regularization parameter is selected automatically by minimizing a Generalized Cross Validation (GCV) score [24] over a logarithmic grid. Figure 6.1 shows the input and the output signals, together with a set of output measurements and the estimated output signals for one of the 50 experiments. Both r = 1 and r = 2 yield excellent estimates of the output signal, also in the region where no measurements are available. The true and estimated impulse responses are plotted in Figure 6.2, showing that the kernels with r = 2 are able to capture the fast mode much better. By inspecting the coefficients d_j, one can observe that indeed only two of them are different from zero, capturing the two dominant frequencies of the system.

For each of the 50 experiments, performance is evaluated according to scores measuring the relative improvement in the quadratic estimation error for the impulse response and the output signal with respect to baseline estimates. Figure 6.3 shows the boxplots of the two scores for the 50 experiments, highlighting a significant advantage of r = 2 at estimating the impulse response, and a slight advantage at fitting the output signal.

Figure 6.1: Input and output signals (blue lines), output measurements (green circles), and output estimates.

Figure 6.2: True and estimated impulse responses.

Figure 6.3: Boxplots of (left) and (right) for the 50 experiments.

7 Summary

We have discussed a functional formulation of the LTI system identification problem that makes it possible to handle both discrete- and continuous-time systems from datasets with arbitrary time sampling, while incorporating several types of structural system properties, such as BIBO stability, relative degree, and smoothness. We have also introduced several examples of kernels, showing that some of them are well-suited to describe stable dynamics while others are not. Finally, we have outlined the potential of applying multiple kernel learning methodologies in the context of LTI dynamical system identification.

A Proofs

In this appendix, we provide proofs for all the lemmas presented in the paper.

Proof of Lemma 1: By the reproducing property, we have

g(t) = ⟨g, K_t⟩,   for all g in the space.

If the kernel satisfies the condition of the Lemma, we have K_t ≡ 0 for t < 0, so that g(t) equals zero for negative t. On the other hand, since every kernel section belongs to the space, condition (3.1) implies

K(s, t) = K_t(s) = 0,   for all s < 0 and all t.

In view of symmetry, it follows that the kernel must necessarily be of the form defined by the Lemma.

Proof of Lemma 2: This is an immediate corollary of Proposition 4.2 in [16].

Proof of Lemma 3: If the kernel is integrable, then, for every f such that ‖f‖_∞ ≤ 1, we have

∫ | ∫ K(s, t) f(t) dt | ds ≤ ∫ ∫ |K(s, t)| ds dt < +∞,

so that the condition of Lemma 2 is satisfied.

Proof of Lemma 4: The proof is similar to that of Lemma 1. By the reproducing property, we have

g(t) = ⟨g, K_t⟩,   for all g in the space.

If the kernel satisfies the condition of the Lemma, we have K_t ≡ 0 for t < d, so that g(t) equals zero for t < d. On the other hand, since every kernel section belongs to the space, the delay condition implies

K(s, t) = K_t(s) = 0,   for all s < d and all t.

In view of symmetry, it follows that the kernel must necessarily be of the form defined by the Lemma.

Proof of Lemma 5: The restriction of the kernel to (0, +∞) × (0, +∞) is 2k-times continuously differentiable. Then, by Corollary 4.36 of [17], it follows that the restriction of every function of the space to (0, +∞) is k-times continuously differentiable, and point-wise evaluated derivatives are bounded linear functionals.

Proof of Lemma 6: In view of Lemma 5, point-wise evaluated derivatives at any t > 0 are bounded linear functionals. By the reproducing property, we have

g^(j)(t) = ⟨g, ∂^j K_t/∂t^j⟩.

If all the impulse responses have relative degree greater than or equal to r, the left hand side vanishes as t → 0⁺ for all g and all j = 0, …, r − 2. It follows that

∂^j K(s, t)/∂s^j |_{s=0⁺} = 0,   j = 0, …, r − 2.

Condition (4.1) follows from the symmetry of the kernel. Conversely, if condition (4.1) holds, we immediately obtain

g^(j)(0⁺) = lim_{t→0⁺} ⟨g, ∂^j K_t/∂t^j⟩ = 0,   j = 0, …, r − 2,

since the inner product is continuous. It follows that the relative degree of any function of the space is greater than or equal to r.

Proof of Lemma 7: We have

|G(s, t)| ≤ ∫ |G₀(e^{−βs}, e^{−βt})| dμ(β) ≤ C ∫ e^{−βδ(s+t)/2} dμ(β),

which is integrable over s, t ≥ 0 when the support of μ is bounded away from the origin. The thesis follows by applying Lemma 3.

Proof of Lemma 8: Consider the necessary condition of Lemma 2 with the test function identically equal to one, and let ψ denote the resulting inner integral. According to Lemma 2, the kernel defined by (5.4) is BIBO stable only if ψ is integrable over the half-line. By expressing φ through Theorem 2, and using the fact that φ is even (so that the term containing the sine vanishes), one finds that such a condition can be satisfied only if the measure ν in the representation of Theorem 2 is the zero measure, which implies φ ≡ 0, and therefore K ≡ 0.

References

  • [1] L. Ljung. System Identification: Theory for the User (2nd Edition). Prentice Hall, 2 edition, January 1999.
  • [2] Z. Liu and L. Vandenberghe. Interior-point method for nuclear norm approximation with application to system identification. SIAM J. Matrix Anal. Appl., 31(3):1235–1256, November 2009.
  • [3] K. Mohan and M. Fazel. Reweighted nuclear norm minimization with application to system identification. In Proceedings of the American Control Conference, Baltimore (USA), 2010.
  • [4] G. Pillonetto and G. De Nicolao. A new kernel-based approach for linear system identification. Automatica, 46(1):81–93, 2010.
  • [5] C.E. Rasmussen and C.K.I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.
  • [6] T. Chen, H. Ohlsson, and L. Ljung. On the estimation of transfer functions, regularizations and Gaussian processes—Revisited. Automatica, 48(8):1525–1535, 2012.
  • [7] A. N. Tikhonov and V. Y. Arsenin. Solutions of Ill Posed Problems. W. H. Winston, Washington, D. C., 1977.
  • [8] G. Wahba. Spline Models for Observational Data. SIAM, Philadelphia, USA, 1990.
  • [9] F. R. Bach, G. R. G. Lanckriet, and M. I. Jordan. Multiple kernel learning, conic duality, and the smo algorithm. In Proceedings of the twenty-first international conference on Machine learning, pages 6–, New York, NY, USA, 2004. ACM.
  • [10] C. A. Micchelli and M. Pontil. Learning the kernel function via regularization. Journal of Machine Learning Research, 6:1099–1125, December 2005.
  • [11] G. Kimeldorf and G. Wahba. Some results on Tchebycheffian spline functions. Journal of Mathematical Analysis and Applications, 33(1):82–95, 1971.
  • [12] F. Dinuzzo and B. Schölkopf. The representer theorem for Hilbert spaces: a necessary and sufficient condition. In P. Bartlett, F.C.N. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 189–196. 2012.
  • [13] N. Aronszajn. Theory of reproducing kernels. Transactions of the American Mathematical Society, 68:337–404, 1950.
  • [14] T. Van Gestel, J.A.K. Suykens, P. Van Dooren, and B. De Moor. Identification of stable models in subspace identification by using regularization. IEEE Transactions on Automatic Control, 46(9):1416 –1420, sep 2001.
  • [15] S. Siddiqi, B. Boots, and G. J. Gordon. A constraint generation approach to learning stable linear dynamical systems. In Advances in Neural Information Processing Systems (NIPS), Vancouver, Canada, 2008.
  • [16] C. Carmeli, E. De Vito, and A. Toigo. Vector valued reproducing kernel Hilbert spaces of integrable functions and Mercer theorem. Analysis and Applications, 4(4):377–408, 2006.
  • [17] I. Steinwart and A. Christmann. Support Vector Machines. Springer Publishing Company, Incorporated, 1st edition, 2008.
  • [18] J. C. Willems. Dissipative dynamical systems. part II: Linear systems with quadratic supply rates. Archive for Rational Mechanics and Analysis, 45(5):352–393, 1972.
  • [19] S. N. Bernstein. Sur les fonctions absolument monotones. Acta Mathematica, 52:1–66, 1928.
  • [20] D. Widder. The Laplace Transform. Princeton University Press, 1941.
  • [21] I.J. Schoenberg. Metric spaces and completely monotone functions. Annals of Mathematics, 39:811–841, 1938.
  • [22] A. Argyriou, C. A. Micchelli, and M. Pontil. Learning convex combinations of continuously parameterized basic kernels. In Peter Auer and Ron Meir, editors, Learning Theory, volume 3559 of Lecture Notes in Computer Science, pages 338–352. Springer Berlin / Heidelberg, 2005.
  • [23] F. Dinuzzo. Kernel machines with two layers and multiple kernel learning. Technical report, arXiv:1001.2909, 2010.
  • [24] P. Craven and G. Wahba. Smoothing noisy data with spline functions. Numerische Mathematik, 31:377–403, 1979.