Fitting Jump Models


A. Bemporad (IMT School for Advanced Studies Lucca, Piazza San Francesco 19, 55100 Lucca, Italy. Email: alberto.bemporad@imtlucca.it; corresponding author), V. Breschi (Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza L. Da Vinci 32, 20133 Milano, Italy. Email: valentina.breschi@polimi.it), D. Piga (Dalle Molle Institute for Artificial Intelligence Research - USI/SUPSI, Galleria 2, Via Cantonale 2c, CH-6928 Manno, Switzerland. Email: dario.piga@supsi.ch), S. Boyd (Department of Electrical Engineering, Stanford University, Stanford CA 94305, USA. Email: boyd@stanford.edu)
Abstract

We describe a new framework for fitting jump models to a sequence of data. The key idea is to alternate between minimizing a loss function to fit multiple model parameters, and minimizing a discrete loss function to determine which set of model parameters is active at each data point. The framework is quite general and encompasses popular classes of models, such as hidden Markov models and piecewise affine models. The shape of the loss functions chosen for minimization determines the nature of the resulting jump model.

Keywords: Model regression, mode estimation, jump models, hidden Markov models, piecewise affine models.

1 Introduction

In many regression and classification problems the training dataset is formed by input and output observations with time stamps. However, when fitting the function that maps input data to output data, most algorithms used in supervised learning do not take the temporal order of the data into account. For example, in linear regression problems solved by least squares, each row of the regressor matrix and of the corresponding output vector is associated with a data point, but the solution is clearly the same no matter how those rows are ordered. In system identification, temporal information is often used only to construct the input samples (or regressors) and outputs, and is then neglected. For example, in estimating autoregressive models with exogenous inputs (ARX), the regressor is a finite collection of current and past signal observations, but the order of the regressor/output pairs is irrelevant when least squares is used. Similarly, in logistic regression and support vector machines the order of the data points does not affect the result. In training feedforward neural networks using stochastic gradient descent, the samples may be picked randomly (and more than once) by the solution algorithm, and again their original temporal ordering is neglected.

On the other hand, there are many applications in which relevant information is contained not only in the data values but also in their temporal order. In particular, if the time each data point was collected is taken into account, one can detect changes in the regime under which the data were produced. Examples range from video segmentation [24, 10] to speech recognition [29, 30], asset-price models in finance [31, 17], human action classification [27, 26], and many others. All these examples are characterized by the need to fit multiple models and to understand when switches from one model to another occur.

Piecewise affine (PWA) models fit multiple affine models to a dataset, where each model is active based on the location of the input sample in a polyhedral partition of the input space [14, 9]. However, as with ARX models, the order of the data is irrelevant in computing the model parameters and the polyhedral partition. In some cases, mode transitions are captured by finite state machines, for example in hybrid dynamical models with logical states, where the current mode and the next logical state are generated deterministically by Boolean functions [4, 8]. In spite of the difficulty of assessing whether a switched linear dynamical system is identifiable from input/output data [32], a rich variety of identification methods has been proposed in the literature [14, 5, 20, 3, 22, 9, 28].

Hidden Markov models (HMMs) instead treat the mode as a stochastic discrete variable, whose temporal dynamics is described by a Markov chain [29]. Natural extensions of hidden Markov models consider the case in which each mode is associated with a linear function of the input [15, 11, 25]. Hidden Markov models are usually trained using the Baum-Welch algorithm [1], a forward-backward version of the more general Expectation Maximization (EM) algorithm [12].

In this paper we consider rather general jump models for fitting a temporal sequence of data in a way that takes the ordering of the data into account. The proposed fitting algorithm alternates between two steps until convergence: estimating the parameters of the multiple models, and estimating the temporal sequence of model activation. The model fitting step can be carried out exactly when it reduces to a convex optimization problem, which is often the case. The mode-sequence step is always carried out optimally using dynamic programming.

Our jump modeling framework is quite general. The structure of the model depends on the shape of the function that is minimized to obtain the model parameters, while the way the model jumps depends on the function that is minimized to get the sequence of model activations. When we impose no constraints or penalties on the mode sequence, our method reduces to automatically splitting the dataset into clusters and fitting one model per cluster, which is a generalization of K-means [19, Algorithm 14.1]. Hidden Markov models (HMMs) are a special case of jump models, as we will show in the paper. Indeed, jump models have broader descriptive capabilities than HMMs; for example, the sequence of discrete states need not be generated by a Markov chain and could be a deterministic function. Moreover, as stated above, jump models can have rather arbitrary model shapes.

After introducing jump models in Section 2 and giving a statistical interpretation of the loss function in Section 3, in Section 4 we provide algorithms for fitting jump models to data and for estimating output values and hidden modes from available input samples, emphasizing differences and analogies with HMMs. Finally, in Section 5 we show four examples of application of our approach to regression and classification, using both synthetic and experimental data sets.

The code implementing the algorithms described in the paper is available at http://cse.lab.imtlucca.it/~bemporad/jump_models/.

1.1 Setting and goal

We are given a training sequence of data pairs $(x_t, y_t)$, $t = 1, \ldots, T$, with $x_t$ and $y_t$ taking values in given input and output sets. We refer to $t$ as the time or period, $x_t$ as the regressor or input, and $y_t$ as the outcome or output at time $t$. The training sequence is used to build a regression model that provides a prediction $\hat y_t$ of $y_t$ given the available inputs $x_1, \ldots, x_t$ and possibly the past outputs $y_1, \ldots, y_{t-1}$. We are specifically interested in models where $\hat y_t$ is not simply a static function of $x_t$; rather, we want to exploit the additional information embedded in the temporal ordering of the data. As we will detail later, our regression model is implicitly defined by the minimization of a fitting loss that depends on $\hat y_t$ and other variables and parameters. The chosen shape of this loss determines the structure of the corresponding regression model.

Given a production data sequence $(x_t, y_t)$, $t > T$, thought to be generated by a process similar to the one that produced the training data, the quality of the regression model over a time period of length $T'$ will be judged by the average true loss

$\frac{1}{T'} \sum_{t=T+1}^{T+T'} \ell^{\mathrm{true}}(\hat y_t, y_t),$  (1)

where $\ell^{\mathrm{true}}$ penalizes the mismatch between $\hat y_t$ and $y_t$, with $\ell^{\mathrm{true}}(y, y) = 0$ for all $y$.

2 Regression models

2.1 Single model

A simple way of deriving a regression model is to introduce a model parameter $\theta$, a loss function $\ell(x, y, \theta)$, and a regularizer $r(\theta)$ defining the fitting objective

$J(\theta) = \sum_{t=1}^{T} \ell(x_t, y_t, \theta) + r(\theta).$  (2a)

For a given training data set, let

$\theta^{\star} = \arg\min_{\theta} J(\theta)$  (2b)

be the optimal model parameter. By fixing $\theta = \theta^{\star}$ and exploiting the separability of the loss in (2a), we get the regression model

$\hat y(x) = \arg\min_{y} \ell(x, y, \theta^{\star}),$  (2c)

with ties in the arg min broken arbitrarily. For example, when $\ell(x, y, \theta) = \|y - \theta^{\top} x\|_2^2$ and $r(\theta) \equiv 0$, we get the standard linear regression model $\hat y(x) = \theta^{\star\top} x$.
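As an illustration of (2a)–(2c), the following sketch (ours, not part of the paper's code release) fits a single linear model with a quadratic loss and a ridge regularizer $r(\theta) = \rho \|\theta\|_2^2$; the function names and the weight `rho` are placeholders.

```python
import numpy as np

def fit_single_model(X, Y, rho=1e-3):
    """Minimize sum_t ||y_t - theta' x_t||^2 + rho*||theta||^2 (closed-form ridge regression)."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + rho * np.eye(n), X.T @ Y)

def predict(theta, x):
    """Regression model (2c) for the quadratic loss: the arg min over y is theta' x."""
    return x @ theta

# Tiny usage example on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
Y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=200)
theta_star = fit_single_model(X, Y)
print(np.round(predict(theta_star, X[0]), 2), np.round(Y[0], 2))
```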

Model (2) can be enriched by adding output information sets $\mathcal{Y}_t$ that augment the information that is available about $y_t$,

$\hat y(x_t) = \arg\min_{y \in \mathcal{Y}_t} \ell(x_t, y, \theta^{\star}),$  (3)

where $\mathcal{Y}_t$ is the whole output set if no extra information on $y_t$ is given. For example, if we know a priori that $y_t \geq 0$ we can set $\mathcal{Y}_t$ equal to the nonnegative orthant.

2.2 K-models

Let us add more flexibility and introduce multiple model parameters $\theta_1, \ldots, \theta_K$ and a latent mode variable $s_t \in \{1, \ldots, K\}$ that determines which model parameter is active at step $t$. Fitting a K-model on the training data set entails choosing the models by minimizing

$J(\Theta, S) = \sum_{t=1}^{T} \ell(x_t, y_t, \theta_{s_t}) + \sum_{k=1}^{K} r(\theta_k)$  (4)

with respect to $\Theta = (\theta_1, \ldots, \theta_K)$ and $S = (s_1, \ldots, s_T)$. The optimal parameters $\Theta^{\star}$ define the K-model

$(\hat y(x), \hat s(x)) = \arg\min_{y,\, s} \ell(x, y, \theta^{\star}_{s}).$  (5)

Note that the objective function in (4) is used to estimate the model parameters based on the entire training dataset, while (5) defines the model used to infer the output $\hat y$ and discrete state $\hat s$ given the input $x$, as exemplified in the next section.

2.2.1 K-means and piecewise affine models

The standard K-means model [19] is obtained by considering input data only, setting $r(\theta) \equiv 0$, and choosing the loss

$\ell(x, y, \theta) = \|x - \theta\|_2^2$  (6)

(independent of $y$). In this case, minimizing (4) assigns each data point $x_t$ to the cluster indexed by $s_t$, and defines $\theta^{\star}_1, \ldots, \theta^{\star}_K$ as the centroids of the resulting clusters. Moreover, the regression model defined by (6) returns

$\hat s(x) = \arg\min_{k = 1, \ldots, K} \|x - \theta^{\star}_k\|_2^2,$  (7)

that is, the index of the centroid closest to the given input $x$, and sets the corresponding centroid $\theta^{\star}_{\hat s(x)}$ as the best estimate of $x$.
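A minimal sketch of the resulting K-means regression model (7), with names of our choosing:

```python
import numpy as np

def kmeans_inference(x, centroids):
    """Return the index of the closest centroid (the estimated mode, as in (7))
    and that centroid as the best estimate of x."""
    k_hat = int(np.argmin(np.linalg.norm(centroids - x, axis=1)))
    return k_hat, centroids[k_hat]

centroids = np.array([[0.0, 0.0], [5.0, 5.0]])
print(kmeans_inference(np.array([4.2, 4.9]), centroids))   # mode 1, centroid [5., 5.]
```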

More generally, by setting

$\ell(x, y, \theta) = \|y - \theta_{y}^{\top} x\|_2^2 + c\, \|x - \theta_{x}\|_2^2,$  (8)

with $c > 0$ and $\theta = (\theta_x, \theta_y)$, we obtain a piecewise affine (PWA) model over the piecewise linear partition generated by the Voronoi diagram of the centroids $\theta^{\star}_{x,1}, \ldots, \theta^{\star}_{x,K}$, i.e., the regression model (5) becomes

$\hat y(x) = \theta_{y, \hat s(x)}^{\star\top} x, \qquad \hat s(x) = \arg\min_{k = 1, \ldots, K} \|x - \theta^{\star}_{x,k}\|_2^2.$  (9)

The hyper-parameter $c$ in (8) trades off between fitting the output and clustering the inputs based on their mutual Euclidean distance.
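To make the trade-off in (8) concrete, here is a small sketch (in our notation) of the per-sample PWA loss, where `c` weighs input clustering against output fit:

```python
import numpy as np

def pwa_loss(x, y, theta_x, theta_y, c):
    """Per-sample loss in the spirit of (8): output-fit term plus c times
    the squared distance of x from the cluster centroid theta_x."""
    return float((y - theta_y @ x) ** 2 + c * np.sum((x - theta_x) ** 2))
```

Small values of `c` favor fitting the outputs; large values make the resulting partition closer to a plain clustering of the inputs.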

A more general PWA model can be defined by setting

(10)

where part of the parameter vector defines a piecewise linear separation function that induces a polyhedral partition of the input space [6, 9]. In this case it is immediate to verify that the regression model induced by (5) is

(11)

2.3 Jump model

The models introduced above do not take into account the temporal order in which the samples are generated. To this end, we add a mode sequence loss $\mathcal{L}(S)$ to the fitting objective (4),

$J(\Theta, S) = \sum_{t=1}^{T} \ell(x_t, y_t, \theta_{s_t}) + \sum_{k=1}^{K} r(\theta_k) + \mathcal{L}(S),$  (12)

where $S = (s_1, \ldots, s_T)$ is the mode sequence. We define $\mathcal{L}$ in (12) as

$\mathcal{L}(S) = \ell_{\mathrm{init}}(s_1) + \sum_{t=1}^{T} \ell_{\mathrm{mode}}(s_t) + \sum_{t=2}^{T} \ell_{\mathrm{trans}}(s_{t-1}, s_t),$  (13a)

where $\ell_{\mathrm{init}}$ is the initial mode cost, $\ell_{\mathrm{mode}}$ is the mode cost, and $\ell_{\mathrm{trans}}$ is the mode transition cost. We discuss possible choices for these losses in Sections 2.3.1 and 3.
With a little abuse of notation, we write
(13b)
where
(13c)

As with any model, the choice of the fitting objective (13) should trade off fitting the given data against the prior assumptions we have about the models and the mode sequence. In particular, the mode sequence loss in (13a) takes into account the temporal structure of the mode sequence, for example the knowledge that the mode might change (i.e., $s_t \neq s_{t-1}$) only rarely.

A jump model can be used for several tasks beyond inferring the output values $\hat y_t$. In anomaly identification, we are interested in determining times $t$ at which the jump model does not fit the data point well. In model change detection, we are interested in identifying times $t$ at which $s_t \neq s_{t-1}$. In control systems, jump models can be used to approximate nonlinear/discontinuous dynamics and to design model-based control policies, state estimators, and fault-detection algorithms.

2.3.1 Mode loss functions

We discuss a few options for choosing the mode loss functions $\ell_{\mathrm{init}}$, $\ell_{\mathrm{mode}}$, $\ell_{\mathrm{trans}}$ defining the mode sequence loss in (13a). Since we assume that the number $K$ of possible modes is fixed, $K$ must be chosen to trade off fitting the model to the data ($K$ large) against limiting the complexity of the model and avoiding overfitting ($K$ small). The best value of $K$ is usually determined by cross-validation.

As mentioned above, the case $\mathcal{L}(S) \equiv 0$ leads to a K-model. By choosing $\ell_{\mathrm{trans}}(i, j) = \lambda$ for all $i \neq j$, with $\ell_{\mathrm{trans}}(i, i) = 0$, one penalizes all mode transitions equally by $\lambda$: letting $\lambda \to \infty$ leads to the regression of a single model on the data (that is, $s_1 = \cdots = s_T$), while $\lambda = 0$ leads again to a K-model. Note that choosing the same constant for all transitions makes the fitting problem exhibit multiple solutions, as the mode indexes $1, \ldots, K$ can be arbitrarily permuted. The mode cost $\ell_{\mathrm{mode}}$ can be used to break such symmetries; for example, smaller mode indexes will be preferred by making $\ell_{\mathrm{mode}}(k)$ increasing in $k$. The shape of this increasing finite sequence can also be used to reduce the number of modes actually used: the larger its increments, the more the use of additional modes is discouraged.
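A possible encoding of these choices (a sketch with names we introduce; `switch_penalty` plays the role of the constant transition cost and `mode_weight` of the symmetry-breaking mode cost):

```python
import numpy as np

def mode_losses(K, switch_penalty=1.0, mode_weight=0.0):
    """Constant transition cost for any mode change (zero for staying put),
    an increasing per-mode cost that prefers small mode indexes, and a
    flat (uninformative) initial mode cost."""
    L_trans = switch_penalty * (1.0 - np.eye(K))
    L_mode = mode_weight * np.arange(K)
    L_init = np.zeros(K)
    return L_init, L_mode, L_trans
```

Setting `switch_penalty=0` recovers the K-model, while a very large value forces a single mode over the whole sequence.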

The initial mode cost $\ell_{\mathrm{init}}$ summarizes prior knowledge about the initial mode $s_1$. For example, $\ell_{\mathrm{init}} \equiv 0$ if no prior information on $s_1$ is available. On the contrary, if the initial mode is known and, say, equal to $\bar{k}$, then $\ell_{\mathrm{init}}(\bar{k}) = 0$ and $\ell_{\mathrm{init}}(k) = +\infty$ otherwise.

Section 3 suggests criteria for choosing the mode losses when statistical assumptions about the underlying data-generating process are available. Alternative criteria for choosing them directly from the training data are discussed in Section 4.4.

3 Statistical interpretations

Let $X = (x_1, \ldots, x_T)$, $Y = (y_1, \ldots, y_T)$, $S = (s_1, \ldots, s_T)$, and $\Theta = (\theta_1, \ldots, \theta_K)$. We provide a statistical interpretation of the loss functions for the special case in which the following modeling assumptions are satisfied:

  • A1. The mode sequence $S$, the model parameters $\Theta$, and the input data $X$ are statistically independent;

  • A2. The conditional likelihood of the outputs factorizes over time as $p(Y \mid X, S, \Theta) = \prod_{t=1}^{T} p(y_t \mid x_t, \theta_{s_t})$, where $p(y_t \mid x_t, \theta_{s_t})$ is the likelihood of the outcome $y_t$ given $x_t$ and $\theta_{s_t}$;

  • A3. The priors on the model parameters are all equal to a common density $p(\theta)$, and the model parameters are statistically independent, i.e., $p(\Theta) = \prod_{k=1}^{K} p(\theta_k)$;

  • A4. The probability of being in mode $s_t$ at time $t$ given all past modes depends only on $s_{t-1}$ (Markov property) and equals $P(s_t \mid s_{t-1})$;

  • A5. The initial mode $s_1$ has probability $\pi(s_1)$.

Proposition 1.

Let Assumptions A1-A5 be satisfied and define

$\ell(x_t, y_t, \theta_{s_t}) = -\log p(y_t \mid x_t, \theta_{s_t}),$  (14a)
$r(\theta_k) = -\log p(\theta_k),$  (14b)
$\ell_{\mathrm{init}}(s_1) = -\log \pi(s_1),$  (14c)
$\ell_{\mathrm{mode}}(s_t) = 0,$  (14d)
$\ell_{\mathrm{trans}}(s_{t-1}, s_t) = -\log P(s_t \mid s_{t-1}).$  (14e)

Then minimizing the objective $J(\Theta, S)$ defined in (12)–(14) with respect to $\Theta$ and $S$ is equivalent to maximizing the joint likelihood $p(Y, S, \Theta \mid X)$.

Proof. Because of the Markov property (Assumption A4), the likelihood of the mode sequence is

$p(S) = \pi(s_1) \prod_{t=2}^{T} P(s_t \mid s_{t-1}).$  (15)

From (15) and Assumptions A1-A3, we have

$p(Y, S, \Theta \mid X) = p(S) \prod_{k=1}^{K} p(\theta_k) \prod_{t=1}^{T} p(y_t \mid x_t, \theta_{s_t}),$

whose logarithm is

$\log p(S) + \sum_{k=1}^{K} \log p(\theta_k) + \sum_{t=1}^{T} \log p(y_t \mid x_t, \theta_{s_t}).$  (16)

By defining the loss functions $\ell$, $r$, $\ell_{\mathrm{init}}$, $\ell_{\mathrm{mode}}$, and $\ell_{\mathrm{trans}}$ as in (14), the minimization of the fitting objective in (12)–(13) with respect to $\Theta$ and $S$ is equivalent to maximizing the logarithm of the joint likelihood $p(Y, S, \Theta \mid X)$, and therefore the likelihood itself.
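In practice, Proposition 1 says that a given HMM can be translated into mode losses by taking negative log-probabilities. A small sketch (ours) for a two-mode chain:

```python
import numpy as np

def hmm_to_mode_losses(pi, P):
    """Mode losses corresponding to an HMM, in the spirit of Proposition 1:
    negative logs of the initial and transition probabilities."""
    return -np.log(pi), -np.log(P)

pi = np.array([0.5, 0.5])                 # initial mode distribution
P = np.array([[0.95, 0.05],               # transition probability matrix
              [0.10, 0.90]])
L_init, L_trans = hmm_to_mode_losses(pi, P)
```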

The following proposition provides a converse result, namely a statistical interpretation of minimizing a generic fitting objective defined as in (13).

Proposition 2.

Define the probability density functions

(17a)
(17b)

where

(18a)
(18b)

and assume that the outputs are conditionally independent given the inputs, the modes, and the model parameters. Then the following identity holds

(19)

Proof. Since

(20)

by substituting (18) in (20) we get

(21)

As the denominator in (21) does not depend on $\Theta$ and $S$, maximizing (21) is equivalent to maximizing

or, equivalently, to minimizing

The identity (19) thus follows from the definition of the fitting objective in (13).

The following corollary provides a set of probabilistic interpretations of the loss functions, some of which are well known in Bayesian estimation.

Corollary 1.

Let the density defined in (18a) be constant. Then the following statements hold:

  1. A quadratic regularization corresponds to assuming a Gaussian prior on the model parameter $\theta$.

  2. The quadratic penalty on the prediction error

    (22)

    corresponds to assuming a Gaussian probabilistic model of the output, i.e., the output equals the model prediction corrupted by zero-mean Gaussian noise.

  3. Setting the transition cost to zero is equivalent to assuming that the modes are i.i.d., with

    Furthermore, setting corresponds to assuming that for all , while setting , , corresponds to assuming .

  4. Under the assumption above, the case of a constant transition cost for every mode change (and zero cost for staying in the same mode) corresponds to assuming that all mode transitions are equally likely.

Proof. As does not depend on and , in (17b) can be written as , where

(23)

The results follow straightforwardly from the above expressions and the definition of the mode sequence loss in (13a).
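For example, items 1 and 2 of the corollary reflect the familiar fact that negative log-densities of Gaussians are quadratic; in the sketch below, $\tau$ and $\sigma$ are prior and noise standard deviations that we introduce only for illustration:

```latex
-\log \mathcal{N}\!\left(\theta \mid 0,\ \tau^{2} I\right)
   = \frac{1}{2\tau^{2}}\,\|\theta\|_{2}^{2} + \mathrm{const},
\qquad
-\log \mathcal{N}\!\left(y \mid \theta^{\top} x,\ \sigma^{2} I\right)
   = \frac{1}{2\sigma^{2}}\,\|y - \theta^{\top} x\|_{2}^{2} + \mathrm{const}.
```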

4 Algorithms

We now provide algorithms for fitting a jump model to a given data set and for inferring predictions from it.

4.1 Model fitting

Given a training sequence of inputs and outputs, fitting a jump model requires attempting to minimize the fitting objective (12) with respect to $\Theta$ and $S$. A simple algorithm for this problem is Algorithm 1, a coordinate descent method that alternates minimization with respect to $\Theta$ and with respect to $S$. If $\ell$ and $r$ are convex functions of the model parameters, Step 1.1 can be solved globally (up to the desired precision) by standard convex programming [7]. Step 1.2 can always be solved to global optimality by standard discrete dynamic programming (DP) [2] with complexity $O(K^2 T)$. This is achieved by computing the following matrices of costs and of indexes

(24a)
(24b)
(24c)
(24d)
backwards in time, and then reconstructing the minimum cost sequence forward in time by setting
(24e)
(24f)
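A compact sketch of this backward–forward DP (not the authors' implementation; the per-step fitting losses and mode costs are assumed to be pre-collected into a `T`-by-`K` matrix `stage_cost`):

```python
import numpy as np

def optimal_mode_sequence(stage_cost, L_init, L_trans):
    """Minimize sum_t stage_cost[t, s_t] + L_init[s_1] + sum_t L_trans[s_{t-1}, s_t]
    over all mode sequences by dynamic programming, in O(T K^2) operations.

    stage_cost : (T, K) array, fitting loss (plus mode cost) of each mode at each step
    L_init     : (K,)  array, initial mode cost
    L_trans    : (K, K) array, mode transition cost
    """
    T, K = stage_cost.shape
    cost_to_go = np.zeros((T, K))          # optimal cost of steps t..T-1 given the mode at t
    argmin_next = np.zeros((T, K), dtype=int)
    cost_to_go[-1] = stage_cost[-1]
    # Backward recursion, in the spirit of (24a)-(24d).
    for t in range(T - 2, -1, -1):
        q = L_trans + cost_to_go[t + 1]    # q[i, j] = transition cost i->j + cost-to-go at j
        argmin_next[t] = np.argmin(q, axis=1)
        cost_to_go[t] = stage_cost[t] + q[np.arange(K), argmin_next[t]]
    # Forward pass reconstructs the optimal sequence, in the spirit of (24e)-(24f).
    s = np.zeros(T, dtype=int)
    s[0] = int(np.argmin(L_init + cost_to_go[0]))
    for t in range(1, T):
        s[t] = argmin_next[t - 1, s[t - 1]]
    return s, float(L_init[s[0]] + cost_to_go[0, s[0]])
```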

Note that if the time order of operations in (24) is reversed, the DP iterations (24) become the Viterbi algorithm [29, p. 264]:

(25a)
(25b)
(25c)
followed by the backwards iterations
(25d)
(25e)

Since at each iteration the cost is non-increasing and the number of mode sequences is finite, Algorithm 1 always terminates in a finite number of steps, provided that in case of multiple optima the optimizers in Steps 1.1 and 1.2 are selected according to some predefined criterion. However, there is no guarantee that the solution found is globally optimal, as it depends on the initial guess $S^0$. To improve the quality of the solution, we may run Algorithm 1 several times from different random initial mode sequences and select the best result. Our experience is that a small number of restarts is usually enough.

Input: Training data set $(x_t, y_t)$, $t = 1, \ldots, T$, number $K$ of models, initial mode sequence $S^0$.

  1. Iterate for $i = 1, 2, \ldots$:

    1.1. $\Theta^{i} \leftarrow \arg\min_{\Theta} J(\Theta, S^{i-1})$;

    1.2. $S^{i} \leftarrow \arg\min_{S} J(\Theta^{i}, S)$;

  2. until $S^{i} = S^{i-1}$.

    Output: Estimated model parameters $\Theta^{\star} = \Theta^{i}$ and mode sequence $S^{\star} = S^{i}$.

Algorithm 1 Jump model fitting
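The outer loop of Algorithm 1 can then be sketched as follows, reusing `optimal_mode_sequence` from the sketch above; the callbacks `fit_models` and `stage_costs`, the restart count `n_init`, and the iteration cap `max_iter` are our own placeholders, and the comparison cost omits the regularization term for simplicity.

```python
import numpy as np

def fit_jump_model(X, Y, K, fit_models, stage_costs, L_init, L_trans,
                   n_init=3, max_iter=100, seed=0):
    """Coordinate descent in the spirit of Algorithm 1, restarted n_init times.

    fit_models(X, Y, S, K)    -> list of model parameters, one per mode
    stage_costs(X, Y, models) -> (T, K) matrix of per-sample fitting losses
    """
    rng = np.random.default_rng(seed)
    T = len(Y)
    best = (np.inf, None, None)
    for _ in range(n_init):
        S = rng.integers(K, size=T)            # random initial mode sequence
        for _ in range(max_iter):
            models = fit_models(X, Y, S, K)    # Step 1.1: fit models given the sequence
            C = stage_costs(X, Y, models)
            S_new, cost = optimal_mode_sequence(C, L_init, L_trans)  # Step 1.2: DP
            if np.array_equal(S_new, S):       # Step 2: stop when the sequence is unchanged
                break
            S = S_new
        if cost < best[0]:
            best = (cost, models, S_new)
    return best
```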

During the execution of Algorithm 1 it may happen that a mode $k$ does not appear in the sequence $S^i$. In this case, the fitting loss does not depend on $\theta_k$, and the latter will be determined in Step 1.1 based only on the regularizer $r(\theta_k)$.

In case $\mathcal{L}(S) \equiv 0$, the ordering of the training data becomes irrelevant and Algorithm 1 reduces to fitting $K$ models to the data set. If in addition the loss and regularizer are specified as in (6), Algorithm 1 is the standard K-means algorithm, where the starting sequence $S^0$ provides the initial clustering of the data points, Step 1.1 computes the collection of cluster centroids at each iteration, and Step 1.2 reassigns the data points to clusters by updating their labels $s_t$.

When again $\mathcal{L}(S) \equiv 0$ and the loss in (10) is used to get a PWA model, the cost function minimized in Step 1.1 of Algorithm 1 is separable with respect to the model parameters $\theta_1, \ldots, \theta_K$. Then the minimization with respect to the parameters of the separation function produces the piecewise linear separation function that defines the polyhedral partition of the input space [9], while Step 1.2 looks for the optimal latent variables $s_t$ that best trade off between assigning the corresponding data point to a polyhedron of the partition and matching the predicted output.

Finally, we remark that Algorithm 1 is also applicable to the more general case in which the mode sequence loss $\mathcal{L}$ also depends on $\Theta$, by simply replacing Steps 1.1 and 1.2 with

(26a)
(26b)

This would cover the case in which $\mathcal{L}$ contains parameters to be estimated.

4.2 Inference

4.2.1 One-step ahead prediction

Assume that the model parameters $\Theta^{\star}$ have been estimated and that new production inputs and outputs are acquired over time. Because of the structure of the mode sequence loss defined in (13a), the estimates $\hat y_t$ and $\hat s_t$ do not depend on future inputs and modes at times $\tau > t$.

The same fitting objective (12) can be used to estimate $\hat s_t$ and $\hat y_t$,

(27)

where $\mathcal{Y}_t$ is a possible additional output information set.

Algorithm 2 attempts to solve problem (27) at every time $t$ of interest. Step 1 is again solved by the DP iterations (24) over the time span $1, \ldots, t$, with the only difference that the terminal penalty in (24a) is modified, since the last output is determined later at Step 2.

Note that open-loop prediction, that is, the task of predicting future outputs and modes without acquiring the corresponding output measurements, can be simply obtained by replacing the missing measurements with their estimates. Arbitrary combinations of one-step-ahead and open-loop prediction are possible, to handle the more general case of intermittently available output data.

Input: Model set $\Theta^{\star}$, production data set $x_1, \ldots, x_t$, past outputs $y_1, \ldots, y_{t-1}$.

  1. Estimate the mode sequence $\hat s_1, \ldots, \hat s_t$ by the DP iterations (24) over the time span $1, \ldots, t$;

  2. $\hat y_t \leftarrow \arg\min_{y \in \mathcal{Y}_t} \ell(x_t, y, \theta^{\star}_{\hat s_t})$;

    Output: Estimated output $\hat y_t$ and mode sequence $\hat s_1, \ldots, \hat s_t$.

Algorithm 2 Inference

4.2.2 Recursive inference

When $\ell_{\mathrm{trans}} \equiv 0$, problem (27) becomes completely separable and simplifies to

(28)

For example, in the case of K-means (6), the estimate obtained by (28) is given by (7).

When the mode transition loss function $\ell_{\mathrm{trans}}$ is not identically zero, the simplification in (28) no longer holds. Nonetheless, an incremental version of (27) can still be derived, as described in Algorithm 3, where the arrival cost is recursively computed by the algorithm starting from the initial mode cost.
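A minimal sketch of such a recursive scheme (ours, with linear per-mode models, quadratic loss, and the per-step mode cost omitted for brevity; `RecursiveInference` is not the paper's Algorithm 3 verbatim):

```python
import numpy as np

class RecursiveInference:
    """One-step-ahead inference with a recursively updated arrival cost."""

    def __init__(self, models, L_init, L_trans):
        self.models = models          # list of K parameter vectors theta_k
        self.L_trans = L_trans        # (K, K) mode transition cost
        self.alpha = L_init.copy()    # arrival cost, one entry per mode
        self.first = True

    def _propagate(self):
        """Extend the arrival cost by one mode transition."""
        if self.first:
            return self.alpha
        return np.min(self.alpha[:, None] + self.L_trans, axis=0)

    def predict(self, x):
        """Estimate the current mode and output before the output is observed."""
        a = self._propagate()
        s_hat = int(np.argmin(a))
        return s_hat, self.models[s_hat] @ x

    def update(self, x, y):
        """Fold the measured output into the arrival cost."""
        fit = np.array([np.sum((y - th @ x) ** 2) for th in self.models])
        self.alpha = self._propagate() + fit
        self.first = False
```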

Input: Model set $\Theta^{\star}$, current input