Robust MPC for tracking of nonholonomic robots with additive disturbances


In this paper, two robust model predictive control (MPC) schemes are proposed for tracking control of nonholonomic systems with bounded disturbances: tube-MPC and nominal robust MPC (NRMPC). In tube-MPC, the control signal consists of a control action and a nonlinear feedback law based on the deviation of the actual states from the states of a nominal system. It keeps the actual trajectory within a tube centered along the optimal trajectory of the nominal system. Recursive feasibility and input-to-state stability are established, and the constraints are ensured by tightening the input domain and the terminal region. In NRMPC, an optimal control sequence is obtained by solving an optimization problem based on the current state, and the first portion of this sequence is applied to the real system in an open-loop manner during each sampling period. The state of the nominal system model is updated by the actual state at each step, which provides additional feedback. By introducing a robust state constraint and tightening the terminal region, recursive feasibility and input-to-state stability are guaranteed. Simulation results demonstrate the effectiveness of both proposed strategies.

Zhongqi Sun, Li Dai, Kun Liu, Yuanqing Xia, Karl Henrik Johansson

School of Automation, Beijing Institute of Technology, Beijing 100081, China 

ACCESS Linnaeus Centre and School of Electrical Engineering, KTH Royal Institute of Technology, SE-100 44 Stockholm, Sweden 

Key words:  Robust control; Model predictive control (MPC); Nonholonomic systems; Bounded disturbances.


Footnote: This paper was not presented at any IFAC meeting. Corresponding author: Y. Xia. Tel. +86 10 68914350. Fax +86 10 68914382.

1 Introduction

Tracking control of nonholonomic systems is a fundamental motion control problem with broad applications in many important fields, such as unmanned ground vehicle navigation [1], multi-vehicle cooperative control [2], and formation control [3]. So far, many techniques have been developed for the control of nonholonomic robots [4, 5, 6, 7, 8]. However, these techniques either ignore the mechanical constraints or require persistent excitation of the reference trajectory, i.e., the linear and angular velocities must not converge to zero [9]. Model predictive control (MPC) is widely used for constrained systems. By solving a finite-horizon open-loop optimization problem on-line based on the current system state at each sampling instant, an optimal control sequence is obtained, and the first portion of the sequence is applied to the system at each actuator update [10]. MPC for tracking of nonholonomic systems was studied in [2, 9, 11, 12], where the robots were assumed to be perfectly modeled. However, when the system is uncertain or perturbed, the stability and feasibility of such MPC may be lost. In the absence of constraints and uncertainties, the optimal predictive control sequence obtained by MPC is identical to that obtained by dynamic programming (DP), which provides an optimal feedback policy or sequence of control laws [13]. Considering that feedback control is superior to open-loop control in terms of robustness and that DP cannot handle constrained systems, design methods for MPC with robustness guarantees are urgently needed for the tracking of constrained nonholonomic systems.

There are several design methods for robust MPC. One of the simplest approaches is to ignore the uncertainties and rely on the inherent robustness of deterministic MPC [14, 15], in which an open-loop control action computed on-line is applied recursively to the system. However, the open-loop control during each sampling period may degrade the control performance or even render the system unstable. Hence, feedback MPC was proposed in [16, 17, 18, 19], in which a sequence of feedback control laws is obtained by solving an optimization problem. The determination of a feedback policy is usually prohibitively difficult. To overcome this difficulty, it is natural to resort to simplifying approximations by, for instance, solving a min-max optimization problem on-line [17, 18, 19, 20, 21, 22]. Min-max MPC provides a conservative robust solution for systems with bounded disturbances by considering all possible disturbance realizations. It is in most cases computationally intractable to obtain such feedback laws, since the computational complexity of min-max MPC grows exponentially with the prediction horizon.

Tube-MPC, which combines the advantages of both open-loop and feedback MPC, was reported in [23, 24, 25, 26, 27, 28, 29]. Here the controller consists of an optimal control action and a feedback control law. The optimal control action steers the state to the origin asymptotically, and the feedback control law maintains the actual state within a “tube” centered along the optimal state trajectory. Tube-MPC for linear systems was advocated in [23, 24, 25], where the center of the tube was generated by a nominal system and the actual trajectory was confined by an affine feedback law. It was shown that the computational complexity grows linearly rather than exponentially with the prediction horizon. The authors of [26] took the initial state of the nominal system employed in the optimization problem as a decision variable in addition to the traditional control sequence, and proved several potential advantages of this approach. Tube-MPC for nonlinear systems with additive disturbances was studied in [27, 28], where the controller possesses a structure similar to the linear case, but the feedback law is replaced by another MPC to attenuate the effect of disturbances. Two optimization problems thus have to be solved on-line, which increases the computational burden.

In fact, tube-MPC provides a suboptimal solution because it has to tighten the input domain in the optimization problem, which may degrade the control capability. It is natural to ask whether nominal MPC is sufficiently robust to disturbances. A robust MPC via constraint restriction was developed in [24] for the regulation of discrete-time linear systems, in which asymptotic state regulation and feasibility of the optimization problem were guaranteed. In [30], a robust MPC for discrete-time nonlinear systems using nominal predictions was presented; by tightening the state constraints and choosing a suitable terminal region, robust feasibility and input-to-state stability were guaranteed. In [31], the authors tightened the constraints in the optimization problem in a monotonic sequence such that the solution is feasible for all admissible disturbances. A novel robust dual-mode MPC scheme for a class of nonlinear systems was proposed in [32], where the system is assumed to be linearizable. Since the procedure of this class of robust MPC is almost the same as that of nominal MPC, we refer to it as nominal robust MPC (NRMPC) in this paper.

Robust MPC for linear systems is well studied, but robust MPC for nonlinear systems is still challenging, since it is usually intractable to design a feedback law that yields a corresponding robust invariant set. In particular, the study of robust MPC for nonholonomic systems remains open. Motivated by the analysis above, this paper focuses on the design of robust MPC for tracking of nonholonomic systems with a coupled input constraint and bounded additive disturbances. We develop the two robust MPC schemes introduced above. First, a tube-MPC strategy with two degrees of freedom is presented, in which the nominal system is employed to generate a central trajectory and a nonlinear feedback law is designed to keep the trajectory of the actual system within the tube for all admissible disturbances. Recursive feasibility and input-to-state stability are guaranteed by tightening the input domain and the terminal constraint via an affine transformation, and all the constraints are ensured. Since tube-MPC sacrifices optimality for simplicity, an NRMPC strategy is also presented, in which the state of the nominal system is updated by the actual one at each step. In this way, the control action applied to the real system is optimal with respect to the current state. Input-to-state stability is again established by utilizing recursive feasibility and the tightened terminal region.

The remainder of this paper is organized as follows. In Section 2, we outline the control problem and some preliminaries. Tube-MPC and NRMPC schemes are developed for tracking of nonholonomic systems in Section 3 and Section 4, respectively. Simulation results are given in Section 5. Finally, we summarize the work of this paper in Section 6.

Notation: $\mathbb{R}^n$ denotes the $n$-dimensional real space and $\mathbb{N}$ denotes the collection of all positive integers. For a given matrix $M$, $\|M\|$ denotes its 2-norm. $\mathrm{diag}\{x_1, \dots, x_n\}$ denotes the diagonal matrix with entries $x_1, \dots, x_n$. For two vectors $a$ and $b$, $a \le b$ means element-wise inequality, and $|a|$ denotes the element-wise absolute value. $\|x\|$ is the Euclidean norm. The $P$-weighted norm is denoted by $\|x\|_P = \sqrt{x^T P x}$, where $P$ is a positive definite matrix with appropriate dimension. Given two sets $\mathcal{A}$ and $\mathcal{B}$, the Minkowski sum is $\mathcal{A} \oplus \mathcal{B} = \{a + b : a \in \mathcal{A},\, b \in \mathcal{B}\}$, the Pontryagin difference is $\mathcal{A} \ominus \mathcal{B} = \{a : \{a\} \oplus \mathcal{B} \subseteq \mathcal{A}\}$, and $M\mathcal{A} = \{Ma : a \in \mathcal{A}\}$, where $M$ is a matrix with appropriate dimensions.
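As a one-dimensional illustration of the Minkowski sum and Pontryagin difference, the following sketch implements both for closed intervals. This is a simplification for intuition only: the paper applies these operations to sets in the state and input spaces, and the interval representation here is an assumption.

```python
def minkowski_sum(A, B):
    """A (+) B for closed intervals represented as (lo, hi) pairs."""
    return (A[0] + B[0], A[1] + B[1])

def pontryagin_diff(A, B):
    """A (-) B = {a : a + B is a subset of A} for closed intervals;
    returns None when the difference is empty."""
    lo, hi = A[0] - B[0], A[1] - B[1]
    return (lo, hi) if lo <= hi else None

# tightening a constraint set by a disturbance set, then inflating it back
tightened = pontryagin_diff((0.0, 3.0), (-1.0, 1.0))   # -> (1.0, 2.0)
restored = minkowski_sum(tightened, (-1.0, 1.0))       # -> (0.0, 3.0)
```

This is exactly the pattern used in constraint tightening: a set shrunk by the disturbance bound, so that adding any admissible disturbance back never leaves the original set.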

2 Problem formulation and preliminaries

In this section, we first introduce the kinematics of the nonholonomic robot and deduce the coupled input constraint from its mechanical model. Then, we formulate the tracking problem as our control objective, and finally give some preliminaries for facilitating the development of our main results.

2.1 Kinematics of the nonholonomic robot

Consider the nonholonomic robot described by the following unicycle-modeled dynamics:

$$\dot{x} = v \cos\theta, \qquad \dot{y} = v \sin\theta, \qquad \dot{\theta} = \omega,$$

where $\chi = [x, y, \theta]^T$ is the state, consisting of the position $(x, y)$ and orientation $\theta$, and $u = [v, \omega]^T$ is the control input, with linear velocity $v$ and angular velocity $\omega$.
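As a quick numerical illustration, the kinematics can be integrated with a forward-Euler step. This is a minimal sketch: the state ordering $(x, y, \theta)$, the step size, and the constant inputs are illustrative assumptions.

```python
import math

def unicycle_step(state, u, dt):
    """One forward-Euler step of the unicycle model
    xdot = v*cos(theta), ydot = v*sin(theta), thetadot = omega."""
    x, y, theta = state
    v, omega = u
    return (x + dt * v * math.cos(theta),
            y + dt * v * math.sin(theta),
            theta + dt * omega)

# drive straight along the x-axis at v = 1 m/s for 1 s
state = (0.0, 0.0, 0.0)
for _ in range(100):
    state = unicycle_step(state, (1.0, 0.0), 0.01)
# the robot ends up (approximately) at x = 1, y = 0, heading unchanged
```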

Fig. 1: The structure of the nonholonomic robot

The structure of the nonholonomic robot is shown in Fig. 1. Here $b$ is half of the wheelbase, and $v_L$ and $v_R$ are the velocities of the left and the right driving wheels of the robot, respectively. Denote by $h = [x_h, y_h]^T$ the head position, which is the point that lies a distance $d$ along the perpendicular bisector of the wheel axis ahead of the robot and is given by

$$h = \begin{bmatrix} x + d\cos\theta \\ y + d\sin\theta \end{bmatrix}.$$

The nominal system of the head position is then formulated as

$$\dot{h} = \begin{bmatrix} \cos\theta & -d\sin\theta \\ \sin\theta & d\cos\theta \end{bmatrix} u, \qquad \dot{\theta} = \omega.$$
It is assumed that the two wheels of the robot possess the same mechanical properties and that their speeds are bounded by $|v_L| \le a$ and $|v_R| \le a$, where $a$ is a known positive constant. The linear and angular velocities of the robot are given by

$$v = \frac{v_R + v_L}{2}, \qquad \omega = \frac{v_R - v_L}{2b}.$$

As a consequence, the control input should satisfy the coupled constraint $u \in \mathbb{U}$, where

$$\mathbb{U} = \left\{ u = [v, \omega]^T : |v| + b\,|\omega| \le a \right\}.$$
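Under the wheel-speed relations above (writing $b$ for the half wheelbase and $a$ for the wheel-speed bound), membership in the coupled input set can be checked directly; the numerical values below are illustrative assumptions.

```python
def wheel_speeds(v, omega, b):
    """Invert v = (vR + vL)/2 and omega = (vR - vL)/(2b)
    to recover the individual wheel speeds."""
    return v - b * omega, v + b * omega   # (vL, vR)

def in_input_set(v, omega, b, a):
    """Coupled constraint |v| + b*|omega| <= a, which is equivalent
    to both wheel speeds lying in [-a, a]."""
    return abs(v) + b * abs(omega) <= a

b, a = 0.25, 1.0                     # half wheelbase [m], wheel bound [m/s] (illustrative)
vL, vR = wheel_speeds(0.5, 1.0, b)   # v = 0.5 m/s, omega = 1.0 rad/s
# |v| + b|omega| = 0.75 <= a, so the pair is admissible and both
# wheel speeds (0.25 and 0.75) respect the bound a
```

The diamond-shaped set couples $v$ and $\omega$: the faster the robot drives, the less it can turn, which is exactly the mechanical limitation of a differential drive.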

2.2 Control objective

Our control objective is to track a reference trajectory in a global frame. The reference trajectory, which can be viewed as a virtual leader, is described by a reference state vector $\chi_r = [x_r, y_r, \theta_r]^T$ and a reference control signal $u_r = [v_r, \omega_r]^T$. The reference state vector and the reference control signal are modeled as a nominal unicycle robot:

$$\dot{x}_r = v_r \cos\theta_r, \qquad \dot{y}_r = v_r \sin\theta_r, \qquad \dot{\theta}_r = \omega_r.$$
The follower to be controlled is also a unicycle with kinematics (2.1). Owing to the nonholonomic constraint, we consider its head position modeled as (2.1). Furthermore, the robot is assumed to be perturbed by a disturbance caused by sideslip due to the road surface. Therefore, we consider disturbances acting on the linear velocity while neglecting disturbances acting on the angular velocity. The perturbed head position kinematics is then formulated as follows:


where $\chi$ is the state with the head position $h$, $u$ is the control input, and $w$ is the external disturbance, which is bounded by $\|w\| \le \bar{w}$.

Fig. 2: Leader-follower configuration

Construct Frenet-Serret frames and for the virtual leader and the follower, respectively. They are moving frames fixed on the robots (see Fig. 2). The tracking error with respect to the Frenet-Serret frame is given by


where $R(\cdot)$ denotes the rotation matrix $R(\phi) = \begin{bmatrix} \cos\phi & \sin\phi \\ -\sin\phi & \cos\phi \end{bmatrix}$.

Taking the derivative of the tracking error yields


Based on the discussion above, we will design robust MPC strategies to drive the tracking error to a neighborhood of the origin. Note that the tracking system (2.2) involves the disturbance, but future disturbances cannot be predicted in advance. We will therefore formulate the MPC problem in terms of the nominal system only.
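The rotation of the position error into a body-fixed frame can be sketched as follows. This uses the common convention in which the error is expressed in the follower's own frame; the paper's exact frame assignment may differ, so the convention here is an assumption.

```python
import math

def tracking_error(p, theta, p_r, theta_r):
    """Reference pose (p_r, theta_r) minus actual pose (p, theta),
    expressed in the robot's body-fixed frame."""
    dx, dy = p_r[0] - p[0], p_r[1] - p[1]
    xe = math.cos(theta) * dx + math.sin(theta) * dy    # longitudinal error
    ye = -math.sin(theta) * dx + math.cos(theta) * dy   # lateral error
    return xe, ye, theta_r - theta

# reference located 2 m straight ahead of the robot, same heading
e = tracking_error((0.0, 0.0), math.pi / 2, (0.0, 2.0), math.pi / 2)
# -> longitudinal error 2, lateral and heading errors (numerically) zero
```

Expressing the error in a moving frame is what makes the error dynamics depend only on the error itself and the two control signals, which is the form the MPC problem is posed in.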

To distinguish the variables of the nominal system model from those of the real system, we mark the nominal variables with a superscript. From the perturbed system (7), the nominal dynamics can be obtained by neglecting the disturbance as


where, similarly, is the state of the nominal system with the position and orientation , and is the control input of the nominal system. The tracking error dynamics based on the nominal system is then given by


where is the input error and is given by


Define the time sequence $\{t_k\}$, $k \in \mathbb{N}$, with sampling period $\delta = t_{k+1} - t_k$, at which the open-loop optimization problems are solved. The MPC cost to be minimized is given by


in which represents the stage cost with the positive definite matrices and , is the terminal penalty, and is the prediction horizon satisfying , .
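A discretized evaluation of a cost of this form can be sketched as follows. This is a minimal sketch with diagonal weight matrices and illustrative numbers; the paper's actual weights, horizon, and sampling period are not specified here and the names are assumptions.

```python
def mpc_cost(e_traj, u_traj, q, r, p, dt):
    """Riemann-sum approximation of a finite-horizon cost of the form
    J = integral(||e||_Q^2 + ||u||_R^2) dt + ||e(T)||_P^2,
    with diagonal weights Q = diag(q), R = diag(r), P = diag(p)."""
    stage = 0.0
    for e, u in zip(e_traj, u_traj):
        stage += sum(qi * ei * ei for qi, ei in zip(q, e))
        stage += sum(ri * ui * ui for ri, ui in zip(r, u))
    eT = e_traj[-1]  # terminal error state
    return dt * stage + sum(pi * ei * ei for pi, ei in zip(p, eT))

# one-interval horizon with unit stage weights and terminal weight 2
J = mpc_cost([(1.0, 0.0, 0.0), (0.0, 0.0, 0.0)],   # error trajectory
             [(1.0, 0.0)],                          # input-error trajectory
             (1.0, 1.0, 1.0), (1.0, 1.0), (2.0, 2.0, 2.0), 0.1)
# stage cost 2.0 over one 0.1 s interval and zero terminal error give J = 0.2
```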

2.3 Preliminaries

Some definitions and lemmas used in the following sections are summarized as follows.

Definition 1

For the nominal tracking error system (12), the terminal region and the terminal controller are such that if , then, for any , by implementing the terminal controller , it holds that

Definition 2

([33]) System (2.2) is input-to-state stable (ISS) if there exist a function and a function such that, for , it holds that

Definition 3

([13]) A function is called an ISS-Lyapunov function for system (2.2) if there exist functions , , and a function such that for all

Remark 1

It should be mentioned that both Definition 2 and Definition 3 result in input-to-state stability, which implies that the tracking error vanishes if there is no disturbance.

The following lemma provides a terminal controller and the corresponding terminal region for the nominal error system (12).

Lemma 1

For the nominal tracking system (12), let with , , and . Then is a terminal region for the controller


with the parameters satisfying and , .

Proof.  First, consider the terminal controller

which implies if .

Next, choose as a Lyapunov function. The derivative of with respect to yields

which means that is invariant by implementing the terminal controller, i.e., holds for all once .

Finally, for , it follows that


Since and , , the inequality holds.

Hence, from Definition 1, is a terminal region associated with the terminal controller .    

The nominal system (2.1) is Lipschitz continuous and a corresponding Lipschitz constant is given by the following lemma.

Lemma 2

System (2.1) with is locally Lipschitz in with Lipschitz constant , where is the maximum wheel speed.

Proof.  Considering the function values of at and with the same , we have

where the mean value theorem and Lagrange multiplier method are used in the last inequality. The maximum of , subject to , can be obtained by setting and . From the results above, we conclude that


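The Lipschitz bound of Lemma 2 can be probed numerically for the basic unicycle right-hand side. This sketch assumes the Lipschitz constant equals the linear-velocity bound $a$, which follows from the Jacobian of this simplified model; the head-position model in the paper may yield a related but different constant.

```python
import math, random

def f(state, u):
    """Right-hand side of the basic unicycle model."""
    _, _, theta = state
    v, omega = u
    return (v * math.cos(theta), v * math.sin(theta), omega)

def norm(x):
    return math.sqrt(sum(xi * xi for xi in x))

random.seed(0)
a = 1.0       # bound on the linear velocity (assumed Lipschitz constant)
worst = 0.0   # largest observed ratio ||f(s1,u) - f(s2,u)|| / ||s1 - s2||
for _ in range(1000):
    s1 = tuple(random.uniform(-5, 5) for _ in range(3))
    s2 = tuple(random.uniform(-5, 5) for _ in range(3))
    u = (random.uniform(-a, a), random.uniform(-a, a))
    d = norm(tuple(x - y for x, y in zip(s1, s2)))
    if d > 0.0:
        df = norm(tuple(x - y for x, y in zip(f(s1, u), f(s2, u))))
        worst = max(worst, df / d)
# the observed ratio never exceeds a, consistent with a Lipschitz
# constant equal to the velocity bound for this simplified model
```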

3 Tube-MPC

In this section, a tube-MPC policy is developed, which consists of an optimal control action obtained by solving an optimization problem and a feedback law based on the deviation of the actual state from the nominal one. The controller forces the system state to stay within a tube around a sensible central trajectory. The central trajectory is determined by the following optimization problem.

Problem 1

where with , and .

Solution of Problem 1 yields the minimizing control sequence for the nominal follower system over the interval :


as well as the corresponding optimal trajectory:


The robust controller for the follower over the interval is designed as


where , , , , is the feedback gain, is the first control action of the optimal control sequence, and and are the first portion of the optimal position and orientation, respectively.

Based on this control strategy, the procedure of tube-MPC is summarized in Algorithm 1.

1:At time , initialize the nominal system state by the actual state .
2:At time , solve Problem 1 based on the nominal system to obtain the optimal control sequence .
3:Calculate the actual control signal for the real system .
4:Apply the first portion of the sequence, i.e., , to the nominal system, and apply to the real system during the sampling interval .
5:Update the state of the nominal system with and the state of the real system with .
6:Update the time instant and go to step 2.
Algorithm 1 Tube-MPC
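A closed-loop sketch of Algorithm 1 is given below. The optimizer of Problem 1 is replaced by a saturated proportional law, the nonlinear ancillary feedback by a simple linear one, and all numerical values (gain, step size, disturbance bound, initial state) are illustrative assumptions. It therefore mirrors only the structure of the algorithm, not the paper's actual controller.

```python
import math, random

def f(state, u, w=0.0):
    """Unicycle right-hand side; the disturbance w perturbs the
    linear velocity, matching the disturbance channel in the text."""
    x, y, theta = state
    v, omega = u
    return ((v + w) * math.cos(theta), (v + w) * math.sin(theta), omega)

def step(state, u, dt, w=0.0):
    """One forward-Euler step."""
    return tuple(s + dt * d for s, d in zip(state, f(state, u, w)))

def solve_problem1(z):
    """Placeholder for Problem 1: a saturated proportional law standing
    in for the on-line optimizer (illustrative only)."""
    x, y, theta = z
    v = max(-1.0, min(1.0, -(x * math.cos(theta) + y * math.sin(theta))))
    omega = max(-1.0, min(1.0, -theta))
    return (v, omega)

random.seed(1)
dt, wbar, k = 0.05, 0.02, 2.0    # step size, disturbance bound, gain (assumed)
z = z_nom = (1.0, 0.5, 0.3)      # step 1: nominal state initialized to actual
for _ in range(200):
    u_nom = solve_problem1(z_nom)            # step 2: nominal optimal control
    dev = tuple(a - b for a, b in zip(z, z_nom))
    # step 3: actual control = nominal control + ancillary feedback on the
    # deviation (a simple linear law standing in for the nonlinear one)
    u = (u_nom[0] - k * (math.cos(z[2]) * dev[0] + math.sin(z[2]) * dev[1]),
         u_nom[1] - k * dev[2])
    w = random.uniform(-wbar, wbar)
    z_nom = step(z_nom, u_nom, dt)           # step 4: propagate nominal system
    z = step(z, u, dt, w)                    # apply composite control to plant
deviation = math.hypot(z[0] - z_nom[0], z[1] - z_nom[1])
# the perturbed trajectory stays in a neighborhood ("tube") of the nominal one
```

The key structural point is that the optimizer only ever sees the nominal state, while the ancillary feedback alone fights the disturbance, keeping the deviation small.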
Remark 2

Since the optimization problem is solved on-line at each step and the first optimal control action is employed to generate the control policy together with the feedback law, the computational complexity is determined by the nominal system. Hence, the scheme has the same computational complexity as deterministic MPC.

Remark 3

Due to the nonlinearity and the nonholonomic constraint of the system, the optimal control action and the feedback law are combined in a different manner compared to linear systems [23, 24, 25]. This increases the difficulty of determining the tightened input constraint set such that holds. The scheme also differs from existing work on nonlinear systems, such as [27] and [28], where the role played by our off-line feedback law is instead filled by another MPC computed on-line. Hence, two optimization problems have to be solved at each step, which increases the computational burden.

Remark 4

From Algorithm 1, it can be observed that the optimization problem employs only the nominal system, and thus the predicted optimal trajectory is independent of the actual state except for the initial one. Consequently, the central trajectory of the tube can be computed in parallel or even off-line if the initial state is known a priori. In that case, only the feedback law needs to be computed on-line, which reduces the on-line computational burden even further.

Before stating the main results of tube-MPC, the following lemma is given to show that the feedback law renders the difference between the minimizing trajectory and the actual trajectory bounded while guaranteeing the satisfaction of the input constraint.

Lemma 3

For the tracking control system (2.2) with controller (31), it follows that

  1. the state of the real system lies in the tube , where ;

  2. the input constraint is satisfied, i.e., .

Proof.  Denote the deviation of the actual trajectory from the optimal trajectory as


Taking the derivative of (32) yields


Substituting (31) into (33), we can conclude that


of which the solution is given by


By the initialization stage (28) and the upper bound of the disturbances, it follows that


Consequently, , where the set is defined by


We further define as


From (32) and , we have


i.e., the trajectory lies in the tube .

For (ii), redefine the control input as


It can be observed that is an affine transformation, which is equivalent to scaling () by and rotating () by . Thus, proving that holds is equivalent to showing that for every admissible and . The sets and are defined as follows: