Movement primitive libraries are an effective means to enable robots to solve more complex tasks. In order to build these movement libraries, current algorithms require a prior segmentation of the demonstration trajectories. A promising approach is to model the trajectory as being generated by a set of Switching Linear Dynamical Systems and to infer a meaningful segmentation by inspecting the transition points characterized by the switching dynamics. For learning, a nonparametric Bayesian approach based on a Gibbs sampler is employed.
In recent years, the problems robots are expected to solve have become more and more complex. These problems, which include challenging tasks such as rescuing humans in crisis zones  or playing table tennis , require the robot to act autonomously and infer the correct movements. In most of these complex scenarios, tasks cannot be precisely planned by hand, and thus, it has become worthwhile to let the robot learn the desired behavior from demonstration data. One well-known concept is imitation learning, where a robot learns to perform tasks by imitating demonstrations provided by some teacher. Learning a trajectory as a whole may enable the robot to reproduce it, but this approach does not generalize well and fails in tasks that require variability. To learn trajectories in a more versatile and generalizable manner, the trajectory can be divided into several subtasks which can be solved by means of elementary movements, referred to as movement primitives . The recently introduced ProbS algorithm  makes use of trajectories, for example collected from demonstrations, to build such a primitive library. However, this algorithm requires a meaningful segmentation of the trajectories for initialization. Simple heuristic methods for providing initializations, such as segmentation at zero-velocity points, have been stated to be meaningful only in a very restricted set of problems .
The goal of our project is to provide a well-founded probabilistic approach for trajectory segmentation by means of Switching Linear Dynamical Systems (SLDS). The concept of SLDS is to model an observed time series as being generated by a set of different time-invariant dynamical systems. This method for trajectory segmentation can then be used as an adaptive way of providing an initial segmentation of the trajectory for the ProbS algorithm. In this work, we present a promising approach to applying the SLDS framework to the problem of partitioning trajectories by assuming the trajectory has been generated by such a generative model (see Figure 1). The primary goal is to learn the parameters of the dynamical subsystems in the SLDS framework while simultaneously learning the specific points in time where the switching occurs, the so-called transition points.
While there exist many methods for learning SLDS models, most of them assume a fixed number of dynamical modes in order to apply an Expectation Maximization (EM) algorithm. To circumvent this problem, we utilize a nonparametric Bayesian approach and make use of the concept of Hierarchical Dirichlet Processes (HDP) within the SLDS framework, which was initially introduced by Fox et al. [4, 5].
In Sec. 3, we lay out the theoretical foundations relevant to the understanding of the algorithm. In Sec. 4, we explain how our setup infers and samples the desired quantities of the SLDS model. In Sec. 5, we analyze the algorithm by means of a toy problem, where we use a randomly generated trajectory with switched dynamics.
2 Related Work
For learning the SLDS model in general applications, the following quantities are unknown and have to be learned: the dynamical parameters, which will be formally introduced in the following sections, the modes, which indicate the active dynamical system at each time step, and the internal states, which represent the noise-free trajectory data. These quantities are highly coupled, making learning a challenging task. Therefore, most related work makes assumptions that render these problems more efficient to solve. One common simplification is to include prior knowledge about the number of modes. Even in this case, inference in SLDS can be shown to be formally intractable , and because of that, standard forward-backward algorithms cannot be applied as in the related Hidden Markov Model (HMM) learning problems. As the modes and the internal states can be seen as latent variables, it was suggested [12, 14] to use an Expectation Maximization (EM) approach for learning. This approach iterates two steps in an alternating manner: during the expectation step, the expected values of the latent variables are obtained using a modified Kalman smoother; during the maximization step, the model parameters are estimated by maximizing the expected complete-data log-likelihood. For performing inference, which is required for computing the expectations, two methods are principally available: Expectation Propagation (EP)  and Generalized Pseudo Bayes (GPB) [13, 1]. Both approaches come with certain limitations, e.g. strict assumptions and numerical instabilities, which can, however, be relaxed by means of Expectation Correction (EC) . Oh et al.  further extended the SLDS model to a parametric SLDS, introducing global parameters representing variations in the motions of honey bees.
These approaches were applied to various applications such as dancing honey bees, human motion recognition [16, 17] and even number recognition in voice recordings , where the number of switching states can be fixed. For the segmentation of general robot trajectories, however, it is disadvantageous to fix the number of modes. Generally, it is unknown how many movement primitives a trajectory consists of and how complex they are, and thus, the underlying number of modes varies from case to case. Therefore, in this application scenario, more general methods need to be employed. Huang et al.  approach this problem by embedding the input and output data in a higher-dimensional space and identifying the points in time where switching occurs by segmenting the data into distinct subspaces. This approach, however, assumes deterministic dynamics, which is usually not suitable in practice. Fox et al. [4, 5] opted for a nonparametric approach using Hierarchical Dirichlet Processes (HDPs) in order to learn the number of modes without prior specification of the overall number of modes within the system. In their work, the modes, internal states and model parameters are alternately sampled, each conditioned on all other quantities. The main drawback of the method is that it is computationally expensive and requires extensive parameter tuning.
In this section, we first formally introduce the SLDS model. We then explain the theoretical foundations of the Dirichlet Process and its extension to the Hierarchical Dirichlet Process. We also address the problem of high-frequency mode switching that comes with an unaltered HDP approach. The section is concluded by explaining a variation of the Kalman filter that is used to smooth the input data.
We decided to investigate the HDP-based method by Fox et al. [4, 5] for our problem of segmenting robot trajectories. The main reason for this is that we want to alleviate the previously mentioned restriction of having to fix the number of modes beforehand. Fixing the number of modes would cause trajectories of different complexities to be represented with the same number of dynamical systems, mostly leading either to an underestimation or an overestimation of the actual number of underlying modes. On the one hand, assuming too few dynamical systems would lead to a bad approximation of the trajectory and would need further subdivision. On the other hand, a too granular segmentation, leading to a smaller approximation error at the expense of larger model complexity, would not yield a meaningful segmentation either, because it would exhibit high-frequency switching behavior over short time periods. Furthermore, the HDP-based SLDS approach is reported to be more robust to noise, which is an important property when dealing with inherently noisy observations.
3.1 Switching Linear Dynamical Systems
Switching Linear Dynamical Systems (SLDS) model time-discrete observations $y_t$ as being generated by a linear dynamical system whose dynamics switch over time (see Figure 2). The observations are generated by a linear transformation of internal states $x_t$ and additive Gaussian noise: $y_t = C x_t + w_t$ with $w_t \sim \mathcal{N}(0, R)$. The state dynamics are given by $x_t = A^{(z_t)} x_{t-1} + e_t(z_t)$ with the state transition noise $e_t(z_t) \sim \mathcal{N}(0, \Sigma^{(z_t)})$. The mode $z_t$ designates which dynamical matrix is applied at time step $t$. For each mode, we assume a distribution $\pi_{z_t}$ for generating the mode of the following time step: $z_{t+1} \sim \pi_{z_t}$.
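The generative model above can be sketched in a few lines of code. The following is a minimal simulation, assuming the notation $y_t = C x_t + w_t$, $x_t = A^{(z_t)} x_{t-1} + e_t(z_t)$; the function name and the convention of starting in mode 0 are our own choices, not part of the original model.

```python
import numpy as np

def simulate_slds(A_set, Sigma_set, Pi, C, R, x0, T, rng=None):
    """Simulate a switching linear dynamical system.

    A_set, Sigma_set : per-mode dynamics matrices and process-noise covariances
    Pi               : mode transition matrix, Pi[j, k] = p(z_t = k | z_{t-1} = j)
    C, R             : observation matrix and observation-noise covariance
    x0               : initial state; the chain starts in mode 0 (a simplification)
    """
    rng = np.random.default_rng(rng)
    d = x0.shape[0]
    z = np.zeros(T, dtype=int)
    x = np.zeros((T, d))
    y = np.zeros((T, C.shape[0]))
    x_prev, z_prev = x0, 0
    for t in range(T):
        # draw the next mode, then propagate the state under that mode's dynamics
        z[t] = rng.choice(len(A_set), p=Pi[z_prev])
        x[t] = A_set[z[t]] @ x_prev + rng.multivariate_normal(np.zeros(d), Sigma_set[z[t]])
        y[t] = C @ x[t] + rng.multivariate_normal(np.zeros(C.shape[0]), R)
        x_prev, z_prev = x[t], z[t]
    return z, x, y
```

With a sticky transition matrix (large diagonal entries), the sampled mode sequence forms long segments, which is exactly the regime the segmentation method targets.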
3.2 Dirichlet Process
In order to model the previously introduced distributions for the different dynamical modes, we use a Dirichlet Process (DP). The DP can be described as a measure on measures: it samples discrete probability measures $G_0 \sim \mathrm{DP}(\gamma, H)$ from an underlying base measure $H$ with concentration parameter $\gamma$. An explicit draw from this DP can be expressed as $G_0 = \sum_{k=1}^{\infty} \beta_k \delta_{\theta_k}$, where each atom location $\theta_k$ is drawn from $H$ and $\beta_k$ is the corresponding probability weight of the associated atom $\delta_{\theta_k}$.
A naive approach would be to sample a single probability distribution $G_0$ from the DP, governing the distribution of the dynamical parameters for the entire time series. The mode-specific dynamical parameters $\theta_t$ would then be sampled at each time step from $G_0$: $\theta_t \sim G_0$.
As shown in , the DP draw can be expressed through the stick-breaking construction , and therefore, an equivalent representation of the DP is given by
$$\beta \sim \mathrm{GEM}(\gamma), \qquad \beta_k = v_k \prod_{l=1}^{k-1} (1 - v_l), \quad v_k \sim \mathrm{Beta}(1, \gamma), \qquad G_0 = \sum_{k=1}^{\infty} \beta_k \delta_{\theta_k}, \quad \theta_k \sim H,$$
where $\theta_k = \{A^{(k)}, \Sigma^{(k)}\}$ is the parametrization of our model. The base measure $H$ is the modelled distribution for the dynamical parameters and is discussed in greater detail in later chapters. Comparing the two representations outlined above, it becomes apparent that the dynamical parameters for each time step assume a specific $\theta_k$ with probability $\beta_k$. Because of this nonparametric setup, the number of modes does not have to be fixed beforehand.
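The stick-breaking weights above are straightforward to sample. The following is a small sketch of the construction, truncated to a finite number of sticks (the function name and the truncation argument are our own):

```python
import numpy as np

def stick_breaking(gamma, L, rng=None):
    """Draw the first L weights of beta ~ GEM(gamma) via stick-breaking:
    v_k ~ Beta(1, gamma), beta_k = v_k * prod_{l<k} (1 - v_l)."""
    rng = np.random.default_rng(rng)
    v = rng.beta(1.0, gamma, size=L)
    # length of the remaining stick before each break
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return v * remaining
```

The truncated weights sum to less than one; the missing mass corresponds to the infinitely many sticks that were cut off.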
Using the DP as outlined above to represent our mode transition probabilities, we face the problem of having the same probability distribution for every mode. We could enhance our model to draw a new probability measure $G_j$ at each time step. However, by assuming a continuous base measure $H$, the probability distributions for the individual time steps will have no dynamical parameters in common, i.e., two draws $G_j$ and $G_{j'}$ share no atoms almost surely. Hence, every time step will have different dynamical parameters, which defeats the purpose of our model of finding transition points between dynamical modes. This problem can be overcome by introducing a discrete base measure, resulting in a shared parameter space across all time steps, as discussed in the next chapter.
3.3 Hierarchical Dirichlet Process
So far, we could sample an entire mode probability distribution from our Dirichlet prior for every time step. The major drawback of this approach, as discussed previously, is that these draws will not share any similarities, since probability measures drawn from a DP with a continuous base distribution will almost surely have disjoint supports. Therefore, we extend our model to a Hierarchical Dirichlet Process (HDP), allowing for shared similarities across multiple modes. To do so, we model our base distribution as a probability measure $G_0 \sim \mathrm{DP}(\gamma, H)$ drawn from another Dirichlet Process. Our mode-specific distributions are then again draws from this intermediate distribution, given by $G_k \sim \mathrm{DP}(\alpha, G_0)$. Also, we do not use a different distribution for each time step, but rather assign an indicator variable $z_t$ to every time step that represents the current mode of the system. Thus, every mode $k$ has its specific mode transition probability distribution $\pi_k$ along with its associated dynamical parameters $\theta_k$.
One can integrate out the intermediate distribution $G_0$, as shown in , and use the stick-breaking representations to get an equivalent representation of the HDP given by
$$\beta \sim \mathrm{GEM}(\gamma), \qquad \pi_k \sim \mathrm{DP}(\alpha, \beta), \qquad z_t \sim \pi_{z_{t-1}}, \qquad \theta_k \sim H,$$
with $\theta_k = \{A^{(k)}, \Sigma^{(k)}\}$. Here, $\theta_{z_t}$ describes the dynamical parameters for the current time step and mode, respectively, whereas $\{\theta_k\}_{k=1}^{\infty}$ describes the entire underlying family of dynamical modes available to be drawn from. It can easily be seen that modes recur across time steps because all $\pi_k$ now share the same support points from the discrete parental distribution $\beta$.
With regard to practical implementations, the infinite mixture model depicted by the HDP has to be approximated. Suppose we have a finite mixture model with $L$ mixture components and mixing weights $\beta$. This finite mixture model has the Dirichlet distribution as a conjugate prior. As laid out in , the symmetric Dirichlet distribution $\mathrm{Dir}(\gamma/L, \ldots, \gamma/L)$ converges to a $\mathrm{DP}(\gamma, H)$ as $L \to \infty$. Thus, the mixture weights of the finite mixture model can adequately represent a DP if $L$ is chosen sufficiently large. Using this so-called weak limit approximation, the probability weights of our HDP can now be expressed as
$$\beta \sim \mathrm{Dir}(\gamma/L, \ldots, \gamma/L), \qquad \pi_k \sim \mathrm{Dir}(\alpha\beta_1, \ldots, \alpha\beta_L).$$
This representation is useful as it allows us to draw weights from a finite probability distribution while also providing an efficient means of computing the posterior probability for later inference.
3.4 Sticky Extension for HDP
Introducing the HDP prior enables our model to have recurring dynamical modes across multiple time steps. Furthermore, as shown in , the expectation of the transition probabilities is identical for all mode-specific distributions: $\mathbb{E}[\pi_{kj} \mid \beta] = \beta_j$ for every mode $k$.
The main problem now is that the current setup displays fast switching between modes. This comes as no surprise, since the mode-specific transition distributions currently do not favor self-transitions. Keeping our goal in mind, namely finding a segmentation of a demonstrated task, we have to account for high-probability self-transitions; otherwise, a meaningful segmentation is almost impossible to obtain. Therefore, in a similar manner to , we modify the mode-specific transition distributions with a stickiness parameter $\kappa$ as follows:
$$\pi_k \sim \mathrm{DP}\!\left(\alpha + \kappa,\; \frac{\alpha\beta + \kappa\delta_k}{\alpha + \kappa}\right).$$
The parameter $\kappa$ increases the probability of a self-transition $z_{t+1} = z_t$. If $\kappa = 0$, the original non-sticky HDP is recovered. Incorporating $\kappa$ into the weak limit approximation yields
$$\pi_k \sim \mathrm{Dir}(\alpha\beta_1, \ldots, \alpha\beta_k + \kappa, \ldots, \alpha\beta_L).$$
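The sticky weak-limit prior above can be sampled directly. The following sketch draws the global weights and the mode-specific transition rows under the truncation $L$; the function name is ours, and the prior forms are the ones stated in this section:

```python
import numpy as np

def sample_sticky_transitions(gamma, alpha, kappa, L, rng=None):
    """Weak-limit approximation of the sticky HDP prior:
    beta ~ Dir(gamma/L, ..., gamma/L)
    pi_k ~ Dir(alpha*beta_1, ..., alpha*beta_k + kappa, ..., alpha*beta_L)."""
    rng = np.random.default_rng(rng)
    beta = rng.dirichlet(np.full(L, gamma / L))
    Pi = np.zeros((L, L))
    for k in range(L):
        conc = alpha * beta.copy()
        conc[k] += kappa           # the sticky bias toward self-transitions
        Pi[k] = rng.dirichlet(conc)
    return beta, Pi
```

For large $\kappa$ relative to $\alpha$, the diagonal of the sampled transition matrix dominates, which encodes the desired temporal persistence of modes.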
3.5 Kalman Filtering
The observed trajectory data is inherently noisy. Therefore, we utilize a Kalman filter to smooth the observed data when estimating the internal states of the SLDS model. The Kalman filter provides a recursive algorithm for estimating the underlying state of a linear-Gaussian state space model given a set of observations and fixed model parameters. It can be used to predict quantities making use of both the noisy observations and the model knowledge. Applying the Kalman filter to the observations or, more specifically, the recorded trajectories of the SLDS, we preliminarily assume that the dependency between $x_t$ and $y_t$ is a linear transformation with added Gaussian noise, that is $y_t = C x_t + w_t$ with $w_t \sim \mathcal{N}(0, R)$ (see section 3.1), where we set $C = I$ to model the observations simply as a noisy representation of the hidden states. We employ a combination of a forward and backward Kalman filter, which uses the previously sampled mode sequence together with the sampled dynamical matrices as well as the observations to predict the states for the respective current iteration. This variant of the Kalman filter is also known as the Rauch-Tung-Striebel Kalman smoother .
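For a single fixed dynamics matrix, the Rauch-Tung-Striebel smoother mentioned above can be sketched as a forward Kalman pass followed by a backward correction pass. This is a minimal sketch assuming $C = I$ as in the text; the diffuse initial covariance and the function name are our own choices, and the smoothed covariances are omitted for brevity:

```python
import numpy as np

def rts_smooth(y, A, Q, R):
    """Rauch-Tung-Striebel smoother for x_t = A x_{t-1} + e_t, y_t = x_t + w_t
    (C = I).  Q, R: process- and observation-noise covariances."""
    T, d = y.shape
    I = np.eye(d)
    mu_f = np.zeros((T, d)); P_f = np.zeros((T, d, d))   # filtered moments
    mu_p = np.zeros((T, d)); P_p = np.zeros((T, d, d))   # one-step predictions
    m, P = np.zeros(d), 10.0 * I                          # vague initial prior
    for t in range(T):                                    # forward pass
        mu_p[t], P_p[t] = A @ m, A @ P @ A.T + Q          # predict
        K = P_p[t] @ np.linalg.inv(P_p[t] + R)            # Kalman gain (C = I)
        m = mu_p[t] + K @ (y[t] - mu_p[t])                # update with y_t
        P = (I - K) @ P_p[t]
        mu_f[t], P_f[t] = m, P
    mu_s = mu_f.copy()
    for t in range(T - 2, -1, -1):                        # backward (RTS) pass
        J = P_f[t] @ A.T @ np.linalg.inv(P_p[t + 1])      # smoother gain
        mu_s[t] = mu_f[t] + J @ (mu_s[t + 1] - mu_p[t + 1])
    return mu_s
```

Because each smoothed estimate conditions on the whole observation sequence, its error is typically smaller than that of the raw observations.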
4 Inference and Learning in SLDS
Inference in SLDS can be shown to be formally intractable , and therefore, we have to resort to sampling methods. We employ Gibbs sampling to sample the variables of interest, namely the state sequence $x_{1:T}$, the mode sequence $z_{1:T}$ and the dynamical parameters described by the parametric family $\{\theta_k\}_{k=1}^{L}$, where $L$ is the overall number of different dynamical modes estimated to compose our system. Note that $K \leq L$, where $K$ is the attained number of modes over the entire time series, while $L$ merely represents the truncated limit of the maximum number of modes that can possibly occur. Additionally, we update and sample from the posterior of the underlying transition probability distribution $\beta$, the posteriors of the mode-specific transition probability distributions $\pi_k$ and the posteriors of the hyperparameters $\alpha$, $\gamma$ and $\kappa$. In general, the process can be described by the following steps:
Initialize all values
Sequential sampling of $z_{1:T}$
Block sampling of $z_{1:T}$
Block sampling of $x_{1:T}$
Sampling of $\beta$ and all $\pi_k$
Sampling of hyperparameters $\alpha$, $\gamma$ and $\kappa$
Sampling of dynamical parameters $A^{(k)}$ and $\Sigma^{(k)}$
4.1 Gibbs Sampling
In order to infer the transition probabilities along with the dynamical parameters and the state sequence, we sample from their posterior distributions given the respective other quantities, incorporating a prior. Apart from the fact that deriving the MAP solution is analytically intractable, sampling from the posterior allows us to determine the expected value rather than a single best estimate, which is a more appropriate measure for our application. For sampling, we resort to Gibbs sampling.
Gibbs sampling allows us to sample from a high-dimensional joint distribution by iteratively sampling from the conditional distributions of the individual quantities. A more detailed introduction to Gibbs samplers as well as implementation details can be found in .
4.2 Block Sampling of Mode Sequence
As shown in , the probability distribution of $z_t$, conditioned on the states $x_{1:T}$ and the set of dynamical parameters, is given by
$$p(z_t \mid z_{t-1}, x_{1:T}) \propto \pi_{z_{t-1}}(z_t)\, \mathcal{N}\!\left(x_t;\, A^{(z_t)} x_{t-1},\, \Sigma^{(z_t)}\right) m_{t+1,t}(z_t).$$
The Markovian structure of our model makes $z_t$ dependent on the previous mode value only through the transition probability $\pi_{z_{t-1}}$. Recalling that $x_t = A^{(z_t)} x_{t-1} + e_t(z_t)$ and $e_t(z_t) \sim \mathcal{N}(0, \Sigma^{(z_t)})$, the entire joint auxiliary sampler for $z_{1:T}$ can be described as
where the mode sequence can be computed recursively starting at $t = 1$. The backward messages $m_{t,t-1}(z_{t-1})$ can also be recursively computed with
$$m_{t,t-1}(z_{t-1}) = \sum_{z_t} \pi_{z_{t-1}}(z_t)\, \mathcal{N}\!\left(x_t;\, A^{(z_t)} x_{t-1},\, \Sigma^{(z_t)}\right) m_{t+1,t}(z_t), \qquad m_{T+1,T} \equiv 1.$$
Note that sampling each $z_t$ recursively is equivalent to joint sampling due to the Markovian structure of our model. By using a Dirichlet Process approximation for our model and taking advantage of its clustering property, only a small fraction of the maximum number of modes will be attained.
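The backward-message recursion followed by a forward sampling pass can be sketched as follows. This is a minimal illustration under our own naming and indexing conventions (states $x_0, \ldots, x_T$, modes $z_1, \ldots, z_T$, an explicit initial mode distribution `pi0`); messages are normalized at each step purely for numerical stability:

```python
import numpy as np

def gauss_pdf(x, mean, cov):
    """Multivariate normal density N(x; mean, cov)."""
    d = x.shape[0]
    diff = x - mean
    return np.exp(-0.5 * diff @ np.linalg.solve(cov, diff)) / \
        np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))

def block_sample_modes(x, A_set, Sigma_set, Pi, pi0, rng=None):
    """Jointly sample z_{1:T} given states x (shape (T+1, d), x[0] = x_0):
    backward messages m_{t+1,t}, then a forward sampling sweep."""
    rng = np.random.default_rng(rng)
    T = x.shape[0] - 1
    K = len(A_set)
    # likelihoods lik[t, k] = N(x_t; A_k x_{t-1}, Sigma_k) for t = 1..T
    lik = np.zeros((T + 1, K))
    for t in range(1, T + 1):
        for k in range(K):
            lik[t, k] = gauss_pdf(x[t], A_set[k] @ x[t - 1], Sigma_set[k])
    bwd = np.ones((T + 1, K))                    # bwd[t, k] = m_{t+1,t}(k)
    for t in range(T, 1, -1):
        bwd[t - 1] = Pi @ (lik[t] * bwd[t])
        bwd[t - 1] /= bwd[t - 1].sum()           # normalize for stability
    z = np.zeros(T + 1, dtype=int)               # z[0] unused
    prev = pi0
    for t in range(1, T + 1):                    # forward sampling pass
        p = prev * lik[t] * bwd[t]
        z[t] = rng.choice(K, p=p / p.sum())
        prev = Pi[z[t]]
    return z[1:]
```

In practice the recursion is carried out in log space to avoid underflow over long sequences; the normalization here is a simpler stand-in for the same purpose.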
4.3 Block Sampling of State Sequence
Conditioned on the mode sequence $z_{1:T}$ and the set of dynamical parameters, the entire system degenerates into a simple linear dynamical system with switching dynamical parameters. As seen in , the state sequence $x_{1:T}$ can be jointly sampled utilizing a Kalman-filter-based approach. The probability of sampling $x_t$ conditioned on the prior state $x_{t-1}$ is given by
Recalling the SLDS setup of our system, this probability can be equivalently expressed as
The backward messages are given by
being initialized at $t = T$. Here, the so-called information parameters $\Lambda_{t,t-1}$ and $\theta_{t,t-1}$ can be recursively calculated prior to resampling, since they only depend on the given observation sequence and the conditioned values for $z_{1:T}$ and the dynamical parameters; refer to  for implementation details.
Since the entire probability distribution is a simple product of Gaussian distributions, it can be further simplified, using the product rule for Gaussians, to yield a single Gaussian distribution for the block sampler:
As can be seen, the entire state sequence can be recursively sampled starting at $t = 1$. The backward messages are generated from the backward filter; they contain the marginalized future observations. Due to the Markov property of our system, recursively sampling the state sequence is equivalent to jointly sampling the entire state sequence.
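An equivalent way to realize this block sampler is forward filtering followed by backward sampling, which we sketch here instead of the backward information filter used in the text (the two are interchangeable for this purpose). As before this assumes $C = I$; the function name, the vague initial prior, and the covariance symmetrization are our own choices:

```python
import numpy as np

def ffbs_states(y, z, A_set, Q_set, R, rng=None):
    """Jointly sample x_{1:T} | z_{1:T}, y_{1:T} for C = I via
    forward filtering / backward sampling."""
    rng = np.random.default_rng(rng)
    T, d = y.shape
    I = np.eye(d)
    mu_f = np.zeros((T, d)); P_f = np.zeros((T, d, d))   # filtered moments
    mu_p = np.zeros((T, d)); P_p = np.zeros((T, d, d))   # predictions
    m, P = np.zeros(d), 10.0 * I                          # vague initial prior
    for t in range(T):                                    # forward Kalman filter
        A, Q = A_set[z[t]], Q_set[z[t]]                   # mode-switched dynamics
        mu_p[t], P_p[t] = A @ m, A @ P @ A.T + Q
        K = P_p[t] @ np.linalg.inv(P_p[t] + R)
        m = mu_p[t] + K @ (y[t] - mu_p[t])
        P = (I - K) @ P_p[t]
        mu_f[t], P_f[t] = m, P
    x = np.zeros((T, d))
    x[T - 1] = rng.multivariate_normal(mu_f[T - 1], P_f[T - 1])
    for t in range(T - 2, -1, -1):                        # backward sampling pass
        A = A_set[z[t + 1]]
        J = P_f[t] @ A.T @ np.linalg.inv(P_p[t + 1])
        mean = mu_f[t] + J @ (x[t + 1] - mu_p[t + 1])
        cov = P_f[t] - J @ P_p[t + 1] @ J.T
        cov = (cov + cov.T) / 2                           # symmetrize for stability
        x[t] = rng.multivariate_normal(mean, cov)
    return x
```

Each backward draw conditions on the already-sampled future state, so the returned sequence is a joint sample from the full conditional, not a collection of marginal draws.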
4.4 Sequential Sampling of Mode Sequence
In order to improve the mixing rate of the Gibbs sampler, it has been shown to be fruitful  to interleave a sequential sampling of $z_{1:T}$ prior to block sampling. The implementation follows in principle the block sampling of $x_{1:T}$ by applying a forward and backward Kalman filter. By conditioning on all modes except the mode $z_t$ of the current time step, and marginalizing over the past states, we obtain the distribution
Also, by marginalizing over the future states, we get
In principle, these distributions are equivalent to the backward and forward messages for the current time step. We combine these messages with the local likelihood of the current observation and the transition probabilities. Finally, we obtain the desired distribution by marginalizing over $x_{t-1}$ and $x_t$ and a subsequent multiplication with the probabilities of transitioning to $z_t$ from $z_{t-1}$ (given by $\pi_{z_{t-1}}(z_t)$) and from $z_t$ to $z_{t+1}$ (given by $\pi_{z_t}(z_{t+1})$). The analytical expression for sampling $z_t$ is then given by
The information matrices are obtained by running a forward and backward Kalman filter as mentioned above;  shows explicit derivations for how to obtain them.
4.5 Sampling Dynamical Parameters
Conditioned on the current mode sequence $z_{1:T}$ and state sequence $x_{1:T}$, we can sample appropriate dynamical parameters $A^{(k)}$ and $\Sigma^{(k)}$ under incorporation of a prior. To analyze the posterior distribution of the dynamical parameters, it is suitable to write the dynamic equation in the form $x_t = A^{(z_t)} x_{t-1} + e_t(z_t)$. Equivalently, we can state the equation in linear regression form if, for every mode $k$, we form a matrix $\Psi^{(k)}$ with columns consisting of the state vectors $x_t$ for which $z_t = k$. In the same way, we construct a matrix $\bar{\Psi}^{(k)}$ consisting of the state vectors of the respective previous time steps, i.e., of the state vectors $x_{t-1}$ for which $z_t = k$. Subsequently, we can formalize our problem as
$$\Psi^{(k)} = A^{(k)} \bar{\Psi}^{(k)} + E^{(k)},$$
with $E^{(k)}$ being a matrix whose columns consist of Gaussian noise terms.
For inferring the dynamical parameters, we place a matrix-normal prior on $A^{(k)}$ and an inverse-Wishart prior on the covariance matrices $\Sigma^{(k)}$. This is a common choice in multivariate linear regression problems, as these priors are conjugate in this case.
The matrix normal distribution for a $d \times d$ matrix $A$ is characterized by the probability density function
$$\mathcal{MN}(A;\, M, V, K) = \frac{|K|^{d/2}}{(2\pi)^{d^2/2}\, |V|^{d/2}} \exp\!\left(-\tfrac{1}{2}\,\mathrm{tr}\!\left[V^{-1} (A - M)\, K\, (A - M)^{\mathsf T}\right]\right)$$
with mean matrix $M$ and scale matrices $V$, $K$.
To bring the posterior distribution of the dynamical parameters into a form suitable for incorporating the priors, we decompose it in the following way:
$$p\!\left(A^{(k)}, \Sigma^{(k)} \mid \Psi^{(k)}, \bar{\Psi}^{(k)}\right) = p\!\left(A^{(k)} \mid \Sigma^{(k)}, \Psi^{(k)}, \bar{\Psi}^{(k)}\right) p\!\left(\Sigma^{(k)} \mid \Psi^{(k)}, \bar{\Psi}^{(k)}\right).$$
The quantity $p(A^{(k)} \mid \Sigma^{(k)}, \Psi^{(k)}, \bar{\Psi}^{(k)})$ can be derived  in closed form by applying Bayes' formula using a prior and data likelihood which are both matrix-normal distributed:
$$A^{(k)} \mid \Sigma^{(k)}, \Psi^{(k)}, \bar{\Psi}^{(k)} \sim \mathcal{MN}\!\left(S_{x\bar{x}} S_{\bar{x}\bar{x}}^{-1},\; \Sigma^{(k)},\; S_{\bar{x}\bar{x}}\right)$$
with $S_{\bar{x}\bar{x}} = \bar{\Psi}\bar{\Psi}^{\mathsf T} + K$ and $S_{x\bar{x}} = \Psi\bar{\Psi}^{\mathsf T} + MK$. $M$ and $K$ are parameter matrices determining the matrix-normal portion of the prior.
To obtain $p(\Sigma^{(k)} \mid \Psi^{(k)}, \bar{\Psi}^{(k)})$, we combine the matrix-normal distributed likelihood with a conjugate inverse-Wishart prior $\mathrm{IW}(n_0, S_0)$. $S_0$ is the scaling matrix and $n_0$ represents the degrees of freedom of the inverse-Wishart prior. The posterior is derived  to be
$$\Sigma^{(k)} \mid \Psi^{(k)}, \bar{\Psi}^{(k)} \sim \mathrm{IW}\!\left(n_k + n_0,\; S_{x|\bar{x}}^{(k)} + S_0\right)$$
with $S_{x|\bar{x}}^{(k)} = S_{xx} - S_{x\bar{x}} S_{\bar{x}\bar{x}}^{-1} S_{x\bar{x}}^{\mathsf T}$, where $S_{xx} = \Psi\Psi^{\mathsf T} + MKM^{\mathsf T}$, $n_k$ denotes the number of time steps assigned to mode $k$, and $S_{\bar{x}\bar{x}}$, $S_{x\bar{x}}$ are defined as above.
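Putting the two conditionals together, one draw of $(A^{(k)}, \Sigma^{(k)})$ can be sketched as below. This is an illustrative implementation under our own naming; the inverse-Wishart draw uses the simple outer-product construction, which is only valid for integer degrees of freedom at least as large as the dimension:

```python
import numpy as np

def sample_dynamics(X, Xbar, M, K, S0, n0, rng=None):
    """Sample (A, Sigma) from the matrix-normal inverse-Wishart posterior
    for one mode.  X, Xbar: states and previous states (columns = time steps).
    Prior: A | Sigma ~ MN(M, Sigma, K), Sigma ~ IW(n0, S0)."""
    rng = np.random.default_rng(rng)
    d, n = X.shape
    # sufficient statistics combining data and prior
    Sbb = Xbar @ Xbar.T + K                    # S_{x̄x̄}
    Sxb = X @ Xbar.T + M @ K                   # S_{xx̄}
    Sxx = X @ X.T + M @ K @ M.T                # S_{xx}
    Sbb_inv = np.linalg.inv(Sbb)
    Sbb_inv = (Sbb_inv + Sbb_inv.T) / 2        # symmetrize for Cholesky
    M_post = Sxb @ Sbb_inv
    S_post = Sxx - Sxb @ Sbb_inv @ Sxb.T + S0
    S_post = (S_post + S_post.T) / 2
    nu = int(n0 + n)
    # inverse-Wishart draw (valid for integer nu >= d)
    G = rng.multivariate_normal(np.zeros(d), np.linalg.inv(S_post), size=nu)
    Sigma = np.linalg.inv(G.T @ G)
    # matrix-normal draw: A = M_post + chol(Sigma) Z chol(Sbb^{-1})^T
    Z = rng.standard_normal((d, d))
    A = M_post + np.linalg.cholesky(Sigma) @ Z @ np.linalg.cholesky(Sbb_inv).T
    return A, Sigma
```

With more data assigned to a mode, the posterior mean `M_post` approaches the least-squares estimate of that mode's dynamics matrix, as expected from conjugacy.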
4.6 Sampling of State Transition Distributions and Global Transition Distribution
Lastly, the mode transition probabilities are re-sampled from their respective posterior distributions. The global transition distribution $\beta$ is re-sampled from the posterior that is given by
$$\beta \mid \bar{m} \sim \mathrm{Dir}\!\left(\gamma/L + \bar{m}_{\cdot 1}, \ldots, \gamma/L + \bar{m}_{\cdot L}\right).$$
The values $\bar{m}_{jk}$ represent the number of transitions from mode $j$ to mode $k$ that were caused by a transition from a so-called informative table, also taking self-transition overrides into account. To fully understand this terminology, an introduction to the Chinese Restaurant Franchise process is required; however, this would exceed the scope of this paper and can be studied in more detail in  or . The state transition distribution for each mode $k$ is re-sampled from the posterior distribution given by
$$\pi_k \mid z_{1:T}, \beta \sim \mathrm{Dir}\!\left(\alpha\beta_1 + n_{k1}, \ldots, \alpha\beta_k + \kappa + n_{kk}, \ldots, \alpha\beta_L + n_{kL}\right).$$
Here, $n_{kj}$ represents the number of transitions from mode $k$ to mode $j$ over the entire time series.
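The mode-specific posterior update above reduces to counting transitions and drawing from a Dirichlet. A minimal sketch (function name ours; the auxiliary table counts needed for resampling $\beta$ are assumed to come from a separate Chinese-Restaurant-Franchise step and are not shown):

```python
import numpy as np

def resample_pi(z, beta, alpha, kappa, L, rng=None):
    """Resample each pi_k ~ Dir(alpha*beta + kappa*e_k + n_k),
    where n_k[j] counts transitions k -> j in the mode sequence z."""
    rng = np.random.default_rng(rng)
    n = np.zeros((L, L))
    for t in range(1, len(z)):
        n[z[t - 1], z[t]] += 1                 # transition counts
    Pi = np.zeros((L, L))
    for k in range(L):
        conc = alpha * beta + n[k]
        conc[k] += kappa                       # sticky self-transition bias
        Pi[k] = rng.dirichlet(conc)
    return Pi
```

Modes that never occur in the sequence keep their prior distribution, while frequently visited modes are pulled toward their empirical transition frequencies.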
We have implemented the Gibbs sampler for estimating the modes, internal states and dynamics as described in section 4, and evaluated its components in a toy setup.
The data for the toy problem is generated assuming a 2-dimensional state space and using the three randomly generated dynamical transition matrices
as well as the state transition noise matrices
This setup could be seen as observing two joint angles, for example, while trying to infer likely switching points. The state sequence was initialized with and generated following the equations of the Switching Linear Dynamical System (see section 3.1) with a pre-defined mode setup (see Figure 4 and 5).
The initial SLDS parameters are sampled from the corresponding priors:
and can be calculated from the deterministic relationships
The hyperparameters of the sampler are set to , , , , , and . These values allow for sufficiently spread-out probability distributions for $\beta$ and $\pi_k$, and also account for a rather heavy sticky component $\kappa$. The sampler itself is initialized with the previously defined priors. The initial $\beta$ and $\pi_k$ distributions are generated with a uniform Dirichlet prior with the given hyperparameters. The mode sequence is initialized according to the previously sampled mode assignment. The dynamical matrices are initialized by sampling from the corresponding prior as described in section 4.5.
The system was run with the given setup for 105 iterations and the best sample of the last five iterations was taken. The resulting segmentation is shown in Figure 4 and the underlying mode sequence in Figure 6. As can be seen, the sampler is able to match almost every mode correctly.
The nonparametric part comes into play in the sense that we initialized the truncation level $L$ of the Dirichlet distribution used to approximate the DP with a value larger than the true number of modes. Still, the sampler correctly inferred that only three modes are needed to sufficiently describe the given dynamical system. The sampler was only able to recover distributions with an enforced self-transition probability due to the introduction of the sticky parameter $\kappa$. To show the impact of the sticky property, the algorithm was run on the same trajectory with the same setup, but this time setting $\kappa = 0$. The result is shown in Figure 7.
It can be seen that the sampled mode sequence now displays high-frequency switching between the modes, which is undesirable, since the overarching goal is specifically to find transition points that yield a trajectory segmentation. Fast-switching modes do not provide sufficient temporal continuity to offer any meaningful segmentation of a given trajectory.
In our project, we have implemented an algorithm for learning an SLDS model to infer a meaningful segmentation of trajectories. This segmentation can be used, e.g., as an initial segmentation for algorithms such as ProbS, obviating the need to rely on simple heuristic methods. We have followed a nonparametric approach to infer the number of modes in the system and utilized a Hierarchical Dirichlet Process  to introduce shared dynamical parameters among all modes. Furthermore, we have used the sticky extension introduced by Fox et al.  to avoid high-frequency switching behavior. As learning an SLDS model in closed form has been shown to be intractable, we have employed a Gibbs sampler iteratively estimating the quantities of the model following . The modes, states, hyperparameters and dynamical parameters are alternately sampled to provide an estimate of the joint distribution of all SLDS variables after a burn-in phase.
The algorithm was tested in a toy example to provide a segmentation for a low-dimensional trajectory generated by an SLDS. We attempted to recover the parameters of the system by applying the algorithm to a noisy observation of the generated trajectory. After a short burn-in phase, the algorithm was able to infer nearly every switching point correctly.
The results of the experiment show that the nonparametric SLDS model is a viable alternative for segmenting demonstration trajectories by finding appropriate mode transition points. However, tuning and optimizing the hyperparameters proved to be a rather tedious and unintuitive task. In the future, we want to combine the nonparametric segmentation approach with the ProbS algorithm by Lioutikov et al.  in order to boost its performance by providing meaningful initial segmentations.
- footnotetext: equal contribution
- (1993) Estimation and Tracking: Principles, Techniques, and Software. Norwood, MA: Artech House, Inc.
- (2006) Expectation correction for smoothed inference in switching linear dynamical systems. Journal of Machine Learning Research 7 (Nov), pp. 2515–2540.
- (2010) Bayesian nonparametric inference of switching linear dynamical systems. In LIDS, pp. 9.
- (2009) Nonparametric Bayesian learning of switching linear dynamical systems. In Advances in Neural Information Processing Systems, pp. 457–464.
- (2011) Bayesian nonparametric inference of switching dynamic linear models. IEEE Transactions on Signal Processing 59 (4), pp. 1569–1585.
- (2004) Identification of hybrid linear time-invariant systems via subspace embedding and segmentation (SES). In Decision and Control, 2004. CDC. 43rd IEEE Conference on, Vol. 3, pp. 3227–3234.
- (2015) Probabilistic segmentation applied to an assembly task. In Humanoid Robots (Humanoids), 2015 IEEE-RAS 15th International Conference on, pp. 533–540.
- (2004) Rescue robots and systems in Japan. In Robotics and Biomimetics, 2004. ROBIO 2004. IEEE International Conference on, pp. 12–20.
- (2007) Switching linear dynamical systems for noise robust speech recognition. IEEE Transactions on Audio, Speech, and Language Processing 15 (6), pp. 1850–1858.
- (2001) A Family of Algorithms for Approximate Bayesian Inference. Ph.D. Thesis, Massachusetts Institute of Technology.
- (2010) Learning table tennis with a mixture of motor primitives. In Humanoid Robots (Humanoids), 2010 10th IEEE-RAS International Conference on, pp. 411–416.
- (1998) Switching Kalman filters. Technical report, Citeseer.
- (1998) Learning switching Kalman filter models. Compaq Cambridge Research Lab Tech Report 98 (10).
- (2013) Learning outcome-discriminative dynamics in multivariate physiological cohort time series. In Engineering in Medicine and Biology Society (EMBC), 2013 35th Annual International Conference of the IEEE, pp. 7104–7107.
- (2005) Learning and inference in parametric switching linear dynamic systems. In Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on, Vol. 2, pp. 1161–1168.
- (2000) Learning switching linear models of human motion. In NIPS, Vol. 2, pp. 4.
- (2005) A data-driven approach to quantifying natural human motion. In ACM Transactions on Graphics (TOG), Vol. 24, pp. 1090–1097.
- (2004) Monte Carlo Methods. Wiley Online Library.
- (2005) Learning movement primitives. In Robotics Research: The Eleventh International Symposium, pp. 561–572.
- (1994) A constructive definition of Dirichlet priors. Statistica Sinica, pp. 639–650.
- (2004) Sharing clusters among related groups: hierarchical Dirichlet processes. In NIPS, pp. 1385–1392.