
Decentralized event-triggered control
over wireless sensor/actuator networks

Manuel Mazo Jr and Paulo Tabuada

In recent years we have witnessed a move of the major industrial automation providers into the wireless domain. While most of these companies already offer wireless products for measurement and monitoring purposes, the ultimate goal is to be able to close feedback loops over wireless networks interconnecting sensors, computation devices, and actuators. In this paper we present a decentralized event-triggered implementation, over sensor/actuator networks, of centralized nonlinear controllers. Event-triggered control has been recently proposed as an alternative to the more traditional periodic execution of control tasks. In a typical event-triggered implementation, the control signals are kept constant until the violation of a condition on the state of the plant triggers the re-computation of the control signals. The possibility of reducing the number of re-computations, and thus of transmissions, while guaranteeing desired levels of control performance, makes event-triggered control very appealing in the context of sensor/actuator networks. In these systems the communication network is a shared resource and event-triggered implementations of control laws offer a flexible way to reduce network utilization. Moreover, reducing the number of times that a feedback control law is executed implies a reduction in transmissions and thus a reduction in energy expenditures of battery-powered wireless sensor nodes.

M. Mazo Jr is with INCAS, Assen, and the Department of Discrete Technology and Production Automation, University of Groningen, The Netherlands.
P. Tabuada is with the Department of Electrical Engineering, University of California, Los Angeles, CA 90095-1594.

1. Introduction

For many years, control engineers have designed their controllers as if there were infinite-bandwidth, noise- and delay-free channels between sensors, controllers, and actuators. The effects of non-idealities in the channels could, in practice, be mitigated by employing better hardware. However, in implementations over Wireless Sensor/Actuator Networks (WSAN) these limitations of the communication medium can no longer be neglected. This fact, combined with the recent interest from industry, e.g. the WirelessHART initiative [1], has fueled the study of control under communication constraints in the past decade. Much research has been devoted to the effects of: quantization in the sensors; delay and jitter; limited bandwidth; or even packet losses. Some good overviews of these topics can be found in the report resulting from the RUNES project [2], and the special issue of the IEEE proceedings [3].

One aspect common to most modern control systems, and something assumed in most of the studies mentioned above, is the implementation of control strategies in embedded microprocessors. But in controlling the physical world, which is of continuous nature, the use of microprocessors brings a new question: how often should we sample the physical environment [4]? Many researchers have worked on the analysis of this problem in isolation. Tools like the delta-transform [5] were developed, and many books discussed this issue [6, 7]. More recently, Nesic and collaborators have proposed techniques to select periods retaining closed-loop stability in networked systems [8, 9]. However, engineers still rely mostly on rules of thumb, such as sampling at a frequency 20 times the system bandwidth and then checking if it actually works [4, 6, 7]. A shift in perspective was brought by the notion of event-triggered control [10], [11]. In event-triggered control, instead of periodically updating the control input, the update instants are generated by the violation of a condition on the state of the plant. Many researchers have shown a renewed interest in these techniques [12, 13, 14, 15, 16, 17]. Recently, one of the authors proposed a formalism to generate asymptotically stable event-triggered implementations of nonlinear controllers [18], and in [19] the authors explored the application of event-triggered and self-triggered techniques to distributed implementations of linear controllers. For more details about these event-triggered and self-triggered techniques we refer the reader to [20] and [21]. Following the formalism in [18], Wang and Lemmon proposed a distributed event-triggered implementation for weakly-coupled distributed systems [22]. The present work complements the techniques described in [22] by addressing systems without weak-coupling assumptions.

The main contribution of this paper is a strategy for the construction of decentralized event-triggered implementations over WSAN of centralized controllers. The event-triggered techniques introduced in [18] are based on a criterion that depends on the norm of the vector of measured quantities. This is natural in the setting discussed in [18], since sensors were collocated with the micro-controller. However, in a WSAN the physically distributed sensor nodes do not have access to all the measured quantities. Hence, we cannot use the same criterion to determine when the control signal should be re-computed. Using classical observers or estimators (such as the Kalman filter) would require filters of dimension as large as the number of states in each sensor node, which would be impractical given the low computing capabilities of sensor nodes. Moreover, we do not assume observability from every measured output, thus ruling out observer-based techniques. Approaches based on consensus algorithms are also impractical, as they require large amounts of communication and thus large energy expenditures by the sensor nodes. Instead, we present an approach to decentralize a centralized event-triggered condition that relies only on locally measured quantities. Our technique also provides a mechanism to enlarge the resulting times between controller re-computations without altering performance guarantees.

We do not address in this paper practical issues such as delays or jitter in the communication and focus solely on the reduction of the actuation frequency (with its associated communication and energy savings). In particular, the issue of communication delays has been shown to be easily addressed in the context of event-triggered control in [18] and similarly in [22]. The approach followed in those papers is applicable to the techniques introduced in this paper. Moreover, our techniques can be implemented over the WirelessHART standard [1], which addresses other communication concerns such as medium access control, power control, and routing.

The present paper is organized as follows: we introduce basic notation in Section 2; Section 3 states the problem, briefly reviews the results of [18], and presents our proposal for decentralization; the paper concludes with an example in Section 4 and a discussion in Section 5.

2. Notation

We denote by $\mathbb{N}$ the natural numbers, by $\mathbb{N}_0 = \mathbb{N} \cup \{0\}$, by $\mathbb{R}^+$ the positive real numbers, and by $\mathbb{R}_0^+ = \mathbb{R}^+ \cup \{0\}$. The usual Euclidean ($l_2$) vector norm is represented by $|\cdot|$. When applied to a matrix, $|\cdot|$ denotes the induced matrix norm. A symmetric matrix $P \in \mathbb{R}^{n \times n}$ is said to be positive definite, denoted $P > 0$, whenever $x^T P x > 0$ for all $x \neq 0$, $x \in \mathbb{R}^n$. By $\lambda_m(P)$ and $\lambda_M(P)$ we denote the minimum and maximum eigenvalues of $P$ respectively. A function $\gamma : \mathbb{R}_0^+ \to \mathbb{R}_0^+$ is of class $\mathcal{K}_\infty$ if it is continuous, strictly increasing, $\gamma(0) = 0$, and $\gamma(s) \to \infty$ as $s \to \infty$. Given an essentially bounded function $\delta : \mathbb{R}_0^+ \to \mathbb{R}^m$ we denote by $\|\delta\|_\infty$ its $\mathcal{L}_\infty$ norm, i.e., $\|\delta\|_\infty = \operatorname{ess\,sup}_{t \in \mathbb{R}_0^+} |\delta(t)|$.

In the following we consider systems defined by differential equations of the form:

$$\dot{\xi} = f(\xi, \upsilon), \qquad \xi(0) = x_0, \tag{1}$$

with input $\upsilon : \mathbb{R}_0^+ \to \mathbb{R}^m$ an essentially bounded piecewise continuous function of time and $f : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n$ a smooth map. We also use the simpler notation $\dot{\xi} = f(\xi, \upsilon)$ to refer to (1). We refer to such systems as control systems. Solutions of (1) with initial condition $x_0$ and input $\upsilon$, denoted by $\xi_{x_0 \upsilon}$, satisfy: $\xi_{x_0 \upsilon}(0) = x_0$ and $\dot{\xi}_{x_0 \upsilon}(t) = f(\xi_{x_0 \upsilon}(t), \upsilon(t))$ for almost all $t$. The notation will be relaxed by dropping the subindex $x_0 \upsilon$ when it does not contribute to the clarity of exposition. A feedback law for a control system is a smooth map $k : \mathbb{R}^n \to \mathbb{R}^m$; we sometimes refer to such a law as a controller for the system.

3. Decentralized event-triggered control

Consider a nonlinear control system and a hardware platform consisting of a set of wireless sensors and actuators and a computation node. This last node is in charge of computing the control signal with the measurements obtained from the sensors. We consider scenarios in which none of these sensor nodes has access to the full state of the plant. We model the execution of the control loop in three steps: data retrieval from sensors, controller computation, and provision of the control commands to the actuators. Furthermore, we assume that the computation of the controller happens in just one device which retrieves all the measurement information from the sensors, computes the inputs for all actuators, and disseminates these new commands to the actuator nodes. This scenario is a typical configuration considered in the WirelessHART standard, see [23], which addresses the problem of scheduling links and channels for disseminating the information in WirelessHART networks.

Our goal is to provide a mechanism triggering the execution of the control loop which reduces the frequency of the controller updates. In order to reduce the frequency of controller updates we abandon the periodic transmission paradigm, and instead we propose to close the loop whenever certain events happen. In particular, we consider the event-triggered implementation techniques proposed in [18] which guarantee the asymptotic stability of the closed-loop system. These techniques, however, require the knowledge of the full state to decide when to trigger new updates, but such information is not available at any sensing node under our premises. In the following we discuss a decentralization of the decision process triggering controller updates. We propose the use of conditions depending solely on the information available at each node. Whenever any of these conditions is violated at a node, this node informs the computation device. Upon receipt of such an event, the computation device requests fresh measurements, updates the control signals, and forwards the new commands to the actuation nodes.

3.1. Event-triggered control

We begin by revisiting the results from [18], which serve as the basis for the rest of our work. Let us start by considering a nonlinear control system:

$$\dot{\xi} = f(\xi, \upsilon), \tag{2}$$

and assume that a feedback control law $k : \mathbb{R}^n \to \mathbb{R}^m$, $\upsilon = k(\xi + \varepsilon)$, is available, rendering the closed-loop system:

$$\dot{\xi} = f(\xi, k(\xi + \varepsilon)) \tag{3}$$

input-to-state stable (ISS) [24] with respect to measurement errors $\varepsilon$. We do not provide the definition of ISS, but rather the following characterization that lies at the heart of our techniques:

Definition 3.1.

A smooth function $V : \mathbb{R}^n \to \mathbb{R}_0^+$ is said to be an ISS Lyapunov function for the closed-loop system (3) if there exist class $\mathcal{K}_\infty$ functions $\underline{\alpha}$, $\overline{\alpha}$, $\alpha$, and $\gamma$ such that for all $x \in \mathbb{R}^n$ and $e \in \mathbb{R}^n$ the following is satisfied:

$$\underline{\alpha}(|x|) \le V(x) \le \overline{\alpha}(|x|),$$
$$\frac{\partial V}{\partial x}\, f(x, k(x + e)) \le -\alpha(|x|) + \gamma(|e|). \tag{4}$$

The closed-loop system (3) is said to be ISS with respect to measurement errors $e$ if there exists an ISS Lyapunov function for (3).

In a sample-and-hold implementation of the control law $k$, the input signal is held constant between update times, i.e.:

$$\upsilon(t) = k(\xi(t_k)), \qquad t \in [t_k, t_{k+1}), \tag{5}$$

where $\{t_k\}_{k \in \mathbb{N}}$ is a divergent sequence of update times. An event-triggered implementation defines such a sequence of update times for the controller, rendering the closed-loop system asymptotically stable.

We now consider the signal $\varepsilon$ defined by $\varepsilon(t) = \xi(t_k) - \xi(t)$ for $t \in [t_k, t_{k+1})$ and regard it as a measurement error. By doing so, we can rewrite (5) for $t \in [t_k, t_{k+1})$ as:

$$\upsilon(t) = k(\xi(t) + \varepsilon(t)).$$
Hence, as (3) is ISS with respect to measurement errors $\varepsilon$, from (4) we know that by enforcing:

$$\gamma(|\varepsilon(t)|) \le \sigma\, \alpha(|\xi(t)|), \qquad \sigma \in (0, 1), \tag{6}$$

the following holds:

$$\frac{\partial V}{\partial \xi}\, f(\xi, k(\xi + \varepsilon)) \le (\sigma - 1)\, \alpha(|\xi|) < 0,$$

and asymptotic stability of the closed loop follows. Moreover, if one assumes that the system operates in some compact set $S \subset \mathbb{R}^n$ and $\alpha^{-1}$ and $\gamma$ are Lipschitz continuous on $S$, the inequality (6) can be replaced by the simpler inequality $|\varepsilon|^2 \le \sigma |\xi|^2$, for a suitably chosen $\sigma > 0$. Hence, if the sequence of update times $\{t_k\}$ is such that:

$$|\varepsilon(t)|^2 \le \sigma |\xi(t)|^2, \qquad t \in [t_k, t_{k+1}), \tag{7}$$

the sample-and-hold implementation (5) is guaranteed to render the closed-loop system asymptotically stable.

Condition (7) defines an event-triggered implementation that consists of continuously checking (7) and triggering the re-computation of the control law as soon as the inequality becomes an equality. Note that recomputing the controller at time $t_k$ requires a new state measurement and thus resets the error $\varepsilon$ to zero, which enforces (7).
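To make this mechanism concrete, the following Python sketch (ours, not from [18]) simulates a sample-and-hold loop with a quadratic trigger of the form $|\varepsilon|^2 \le \sigma |\xi|^2$; the plant field `f`, the feedback `k`, the parameter `sigma`, and the forward-Euler integration are all placeholder choices made purely for illustration.

```python
import numpy as np

def simulate_event_triggered(f, k, x0, sigma, t_end, dt=1e-3):
    """Sample-and-hold simulation with the quadratic trigger |e|^2 <= sigma*|x|^2.

    f(x, u): plant vector field; k(x): feedback law (both user supplied).
    Returns the state trajectory and the list of controller update times.
    """
    x = np.asarray(x0, dtype=float)
    x_held = x.copy()              # last sampled state, held between updates
    u = k(x_held)
    update_times = [0.0]
    traj = [x.copy()]
    t = 0.0
    while t < t_end:
        e = x_held - x             # measurement error induced by the hold
        if np.dot(e, e) > sigma * np.dot(x, x):   # trigger condition violated
            x_held = x.copy()      # fresh measurement resets e to zero
            u = k(x_held)
            update_times.append(t)
        x = x + dt * f(x, u)       # forward-Euler step (illustrative only)
        t += dt
        traj.append(x.copy())
    return np.array(traj), update_times
```

For instance, for the scalar system $\dot{\xi} = \xi + \upsilon$ with $k(x) = -2x$, a small `sigma` recovers the behavior of the continuously updated controller while changing the input only at the triggering times.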

3.2. Decentralized event-triggering conditions

We consider, for simplicity of presentation, a decentralized scenario in which each state variable is measured by a different sensor. However, the same ideas apply to more general decentralized scenarios, as we briefly discuss at the end of Section 3.3. In this setting, no sensor can evaluate condition (7), since (7) requires knowledge of the full state vector $\xi$. Our goal is to provide a set of simple conditions that each sensor can check locally to decide when to trigger a controller update, thus triggering also the transmission of fresh measurements from sensors to the controller.

Using a set of parameters $\theta_1, \ldots, \theta_n$ such that $\sum_{i=1}^n \theta_i = 0$, we can rewrite inequality (7) as:

$$\sum_{i=1}^n \left( \varepsilon_i^2(t) - \sigma \xi_i^2(t) - \theta_i \right) \le 0,$$

where $\xi_i$ and $\varepsilon_i$ denote the $i$-th coordinates of $\xi$ and $\varepsilon$ respectively. Hence, the following implication holds:

$$\forall\, i:\ \varepsilon_i^2(t) \le \sigma \xi_i^2(t) + \theta_i \ \Longrightarrow\ |\varepsilon(t)|^2 \le \sigma |\xi(t)|^2, \tag{8}$$

which suggests the use of:

$$\varepsilon_i^2(t) \le \sigma \xi_i^2(t) + \theta_i \tag{9}$$

as the local event-triggering conditions.

In this decentralized scheme, whenever any of the local conditions (9) becomes an equality, the controller is recomputed. We denote by $t_{k+1}$ the first time $t > t_k$ at which (9) is violated at some node $i$. If the time elapsed between two events triggering controller updates is smaller than the minimum time $\tau_{\min}$ between updates of the centralized event-triggered implementation¹, the second event is ignored and the controller update is scheduled $\tau_{\min}$ units of time after the previous update.

¹ It was proved in [18] that such a minimum time exists for the centralized condition, and that lower bounds can be explicitly computed.

Not having an equivalence in (8) entails that this decentralization approach is in general conservative: times between updates will be shorter than in the centralized case. The vector of parameters $\theta = (\theta_1, \ldots, \theta_n)$ can be used to reduce this conservatism and thus reduce the utilization of the communication network. It is important to note that the vector $\theta$ can change every time the control input is updated. From here on we show explicitly this time dependence of $\theta$ by writing $\theta(k)$ to denote its value between the update instants $t_k$ and $t_{k+1}$. Following the presented approach, as long as $\theta(k)$ satisfies $\sum_{i=1}^n \theta_i(k) = 0$, the stability of the closed loop is guaranteed regardless of the specific value that $\theta(k)$ takes and the rules used to update it.
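A minimal sketch of the resulting node-side logic, under the assumption of quadratic local conditions $\varepsilon_i^2 \le \sigma \xi_i^2 + \theta_i$ with $\sum_i \theta_i = 0$ (function and variable names are ours):

```python
def local_event(x_i, x_held_i, sigma, theta_i):
    """Local test at sensor i: an event is generated when
    e_i^2 > sigma * x_i^2 + theta_i, using only locally measured data."""
    e_i = x_held_i - x_i
    return e_i ** 2 > sigma * x_i ** 2 + theta_i

def any_node_triggers(x, x_held, sigma, theta):
    """The controller is recomputed when any local condition is violated.
    With sum(theta) == 0, satisfying all local conditions implies the
    centralized condition |e|^2 <= sigma * |x|^2, as in implication (8)."""
    assert abs(sum(theta)) < 1e-9, "theta must sum to zero"
    return any(local_event(xi, xh, sigma, th)
               for xi, xh, th in zip(x, x_held, theta))
```

Note that a $\theta_i$ making the right-hand side of the local condition negative triggers an event immediately, which is why the adaptation of $\theta$ must be done with care.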

We summarize the previous discussion in the following proposition:

Proposition 3.2.

For any choice of $\theta(k)$ satisfying:

$$\sum_{i=1}^n \theta_i(k) = 0,$$

the sequence of update times given by:

$$t_{k+1} = \max\Big\{ t_k + \tau_{\min},\ \min\big\{ t > t_k \,:\, \exists\, i,\ \varepsilon_i^2(t) = \sigma \xi_i^2(t) + \theta_i(k) \big\} \Big\},$$

where $\tau_{\min}$ denotes the minimum time between updates of the centralized event-triggered implementation, renders the system (5) asymptotically stable.

3.3. Decentralized event-triggering with on-line adaptation

We present now a family of heuristics to adjust the vector $\theta$ whenever the control input is updated. We define the decision gap at sensor $i$ at time $t$ as:

$$G_i(t) = \sigma \xi_i^2(t) + \theta_i - \varepsilon_i^2(t).$$

The heuristic aims at equalizing the decision gap at some future time. We propose a family of heuristics parametrized by an equalization time $t_e$ and an approximation order $q$.

For the equalization time $t_e$ we present the following two choices: constant and equal to the minimum time between controller updates, $t_e = \tau_{\min}$; or the previous time between updates, $t_e = t_k - t_{k-1}$.

The approximation order $q$ is the order of the Taylor expansion used to estimate the decision gap at the equalization time $t_k + t_e$:

$$\hat{G}_i(t_k + t_e) = \sum_{j=0}^{q} \frac{t_e^j}{j!}\, G_i^{(j)}(t_k),$$

where, for $j = 0, \ldots, q$, $G_i^{(j)}(t_k)$ denotes the $j$-th time derivative of the decision gap evaluated at $t_k$, computed using the fact that $\varepsilon_i(t_k) = 0$ and $\dot{\varepsilon}_i = -\dot{\xi}_i$.

Finally, once an equalization time $t_e$ and an approximation order $q$ are chosen, the vector $\theta(k)$ is computed so as to satisfy:

$$\hat{G}_i(t_k + t_e) = \hat{G}_j(t_k + t_e) \quad \forall\, i, j, \qquad \sum_{i=1}^n \theta_i(k) = 0. \tag{10}$$

Note that finding such $\theta(k)$, after the estimates of the decision gaps have been computed, amounts to solving a system of linear equations. Note also that $\theta(k)$ is computed² in the controller node, which has access to the full state measurement $\xi(t_k)$.

² The resulting $\theta(k)$ computed in this way could be such that, for some sensor $i$, $\sigma \xi_i^2(t_k) + \theta_i(k) \le 0$. Such a choice of $\theta(k)$ results in an immediate violation of the triggering condition at $t_k$, i.e., the time between updates would be zero. In practice, when the unique solution of (10) results in such a $\theta(k)$, one resets $\theta(k)$ to some default value such as the zero vector.
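Since $\theta$ enters each decision gap additively and the constraints in (10) are linear, the solve step is elementary. The sketch below (our naming) assumes `g_hat[i]` holds the $\theta$-independent part of the estimated gap $\hat{G}_i(t_k + t_e)$, so that equalization under $\sum_i \theta_i = 0$ reduces to centering:

```python
import numpy as np

def equalize_theta(g_hat):
    """Choose theta with sum(theta) == 0 so that g_hat[i] + theta[i]
    is the same for every sensor i.  Because theta enters the estimated
    decision gap additively, the linear system reduces to centering:
    theta_i = mean(g_hat) - g_hat[i]."""
    g_hat = np.asarray(g_hat, dtype=float)
    return g_hat.mean() - g_hat
```

If the resulting $\theta$ makes some $\sigma \xi_i^2 + \theta_i$ non-positive, it should be discarded and reset to a default value, as discussed above.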

The choice of $t_e$ and $q$ has a great impact on the amount of actuation required. The use of a large $t_e$ leads, in general, to poor estimates of the state of the plant at time $t_k + t_e$ and thus degrades the equalization of the gaps. On the other hand, one expects that equalizing at times as close as possible to the next update time (according to the centralized event-triggered implementation) provides larger times between updates. In practice, these two objectives (small $t_e$, and $t_e$ close to the ideal equalization time) can be contradictory, namely when the time between controller updates is large. The effect of the order of approximation $q$ depends heavily on $t_e$, and enlarging $q$ does not necessarily improve the estimates. A heuristic providing good results in several case studies performed by the authors is given by Algorithm 1.

Input: $\sigma$, $t_e$, $q$, $\xi(t_k)$, $\dot{\xi}(t_k)$
Compute $\theta(k)$ according to equation (10) with order $q$;
if $\exists\, i$ such that $\sigma \xi_i^2(t_k) + \theta_i(k) \le 0$ then
    Compute $\theta(k)$ according to equation (10) with order $q = 0$;
    if $\exists\, i$ such that $\sigma \xi_i^2(t_k) + \theta_i(k) \le 0$ then
        $\theta(k) := 0$;
    end
end
Algorithm 1. The $\theta$-adaptation heuristic algorithm.

While we assumed, for simplicity of presentation, that each node measures a single state of the system, in practice there may be scenarios in which one sensor has access to several (but not all) states of the plant. The same approach applies by considering local triggering rules of the kind $|\varepsilon_I(t)|^2 \le \sigma |\xi_I(t)|^2 + \theta_I$, where $\xi_I$ is now the vector of states sensed at node $I$, $\varepsilon_I$ is its corresponding error vector, and $\theta_I$ is a scalar.

3.4. Comments on practical implementations

The proposed technique, while clearly reducing the amount of information that needs to be transmitted from sensors to actuators, might suggest that sensor nodes need to be continuously listening for events triggered at other nodes. This poses a practical problem since the energy required to keep the radios on to listen for possible events could potentially be very large. In practice, however, sensor nodes have their radio modules asleep most of the time and are periodically awakened according to a time-multiplexing medium access protocol. Time multiplexing is typically used in protocols for control over wireless networks, like WirelessHART, in order to provide non-interference and strict delay guarantees. The use of time multiplexing can be accommodated in the proposed technique by regarding its effect as a bounded and known delay between the generation of an event and the corresponding change in the control signal. As was shown in [18], delays can be accommodated in event-triggered implementations by adequately reducing the value of $\sigma$, therefore making the triggering conditions more conservative.

4. Examples and simulation results

We present in what follows an example illustrating the effectiveness of the proposed technique. We select the quadruple-tank model from [25], describing the multi-input multi-output nonlinear system consisting of four water tanks shown in Figure 1. The water flows from tanks 3 and 4 into tanks 1 and 2, respectively, and from these two tanks to a reservoir. The state of the plant is composed of the water levels of the tanks: $x_1$, $x_2$, $x_3$, and $x_4$. Two inputs are available: $u_1$ and $u_2$, the input flows to the tanks. The input flows are split at two valves $\gamma_1$ and $\gamma_2$ into the four tanks. The positions of these valves are given as parameters of the plant. The goal is to stabilize the levels $x_1$ and $x_2$ of the lower tanks at some specified values $x_1^*$ and $x_2^*$.

Figure 1. The quadruple-tank system.

The system dynamics are given by the equations:

$$\dot{x}_1 = -\frac{a_1}{A_1}\sqrt{2 g x_1} + \frac{a_3}{A_1}\sqrt{2 g x_3} + \frac{\gamma_1}{A_1}\, u_1,$$
$$\dot{x}_2 = -\frac{a_2}{A_2}\sqrt{2 g x_2} + \frac{a_4}{A_2}\sqrt{2 g x_4} + \frac{\gamma_2}{A_2}\, u_2,$$
$$\dot{x}_3 = -\frac{a_3}{A_3}\sqrt{2 g x_3} + \frac{1 - \gamma_2}{A_3}\, u_2,$$
$$\dot{x}_4 = -\frac{a_4}{A_4}\sqrt{2 g x_4} + \frac{1 - \gamma_1}{A_4}\, u_1,$$

with $g$ denoting gravity's acceleration and $A_i$ and $a_i$ denoting the cross sections of tank $i$ and of its outlet hole respectively.
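For simulation purposes, these dynamics can be coded directly. The sketch below assumes the standard quadruple-tank form with parameter arrays `A`, `a`, and `gamma` (our naming), and clips levels at zero to keep the square roots well defined.

```python
import numpy as np

def quad_tank(x, u, A, a, gamma, g=9.81):
    """Right-hand side of the quadruple-tank dynamics in the standard
    form shown above.  x: levels of tanks 1-4; u: pump flows;
    A[i], a[i]: tank and outlet cross sections; gamma: valve positions."""
    x = np.maximum(x, 0.0)            # water levels cannot go negative
    q = a * np.sqrt(2.0 * g * x)      # outflow of each tank (Torricelli)
    dx1 = (-q[0] + q[2] + gamma[0] * u[0]) / A[0]
    dx2 = (-q[1] + q[3] + gamma[1] * u[1]) / A[1]
    dx3 = (-q[2] + (1.0 - gamma[1]) * u[1]) / A[2]
    dx4 = (-q[3] + (1.0 - gamma[0]) * u[0]) / A[3]
    return np.array([dx1, dx2, dx3, dx4])
```

A quick sanity check is mass conservation: summing $A_i \dot{x}_i$ over all tanks yields the total inflow $u_1 + u_2$ minus the outflows of tanks 1 and 2.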

The controller design from [25] requires the extension of the plant with two extra artificial states. These states are nonlinear integrators, with dynamics parametrized by design parameters of the controller, used to achieve zero steady-state offset. Note how stabilizing the extended system implies that in steady state $x_1$ and $x_2$ converge to the desired values $x_1^*$ and $x_2^*$. We assume in our implementation that the sensors measuring $x_1$ and $x_2$ also compute the artificial states locally and sufficiently fast. Hence, we can consider them as regular state variables.

The controller proposed in [25] is given by the following feedback law:




and where is a positive definite matrix and is given by

where , , and are design parameters of the controller. Note how equation (12) can be used to compute and from the specified and . When computing the control , the remaining entries and of can be set to any arbitrary (fixed) values and . This can be done because the errors: and , between the arbitrary values and the actual states and of the equilibrium, can be reinterpreted as a perturbation on the initial states and .

Using this controller the following function:

which is positive definite and has a global minimum at the desired equilibrium, is an ISS Lyapunov function with respect to measurement errors, as evidenced by the following bound (the expression for the matrix involved is not included because of space limitations; its value can be easily deduced from [25]):

This equation suggests the use of the triggering condition:

Moreover, assuming the operation of the system to be confined to a compact set containing a neighborhood of , can be bounded as and the following triggering rule can be applied to ensure asymptotic stability:

We simulated the decentralized event-triggered implementation of this controller following the techniques in Section 3. The physical parameters of the plant and the parameters of the controller are the same as those in [25]. Assuming that the system operates in the compact set , one can take , and for the choice of a value of was selected. A bound for the minimum time between controller updates, computed as explained in [18], is given by . The decentralized event-triggered controller is implemented adapting as specified by Algorithm 1 with . Furthermore, the pairs of states and are assumed to be measured at the same sensor node, and therefore combined in a single triggering condition at the respective nodes. For comparison purposes, we present in the first row of Figure 2 the time between controller updates, the evolution of the ratio vs and the state trajectories, for a centralized event-triggered implementation, starting from initial condition and setting and . The corresponding results for the proposed decentralized event-triggered implementation are shown in the second row of Figure 2, and the results for a decentralized event-triggered implementation without adaptation, i.e., with for all , are shown in the last row of the same figure.

Figure 2. Times between updates, evolution of the triggering condition, and evolution of the states for the centralized event-triggering implementation (first row), decentralized event-triggering implementation with adaptation (second row), and decentralized event-triggering implementation without adaptation (third row).

For completeness, Figure 3 presents the evolution of adaptation vector for the adaptive decentralized event-triggered implementation.

Figure 3. Adaptation parameter vector evolution for the adaptive decentralized event-triggered implementation.

We can observe that, as expected, a centralized event-triggered implementation is far more efficient, in terms of times between updates, than a decentralized event-triggered implementation without adaptation. It is also clear that, although Algorithm 1 fails to recover the performance of the centralized event-triggered implementation exactly, it produces very good results. The results are even better if we look at the performance in terms of the number of executions, which is presented in the legend of these plots. Finally, we would like to remark that, although the times between updates in the three implementations can differ quite drastically, the three systems are stabilized, producing almost indistinguishable state trajectories.

5. Discussion

In [22] Wang and Lemmon proposed a method for distributed event-triggered control under the assumption that the control system was composed of weakly coupled subsystems. Exploiting this fact, they were able to update inputs independently of each other. Our approach, while not updating inputs independently, does not rely on any internal weak coupling assumptions about the system. Thus, our techniques could be used to complement the techniques in [22] at the local subsystem level.

The proposed techniques have been shown effective in decentralizing an event-triggered implementation of a quadruple water-tank system. The centralized controller of this example is a dynamic controller. In this particular case, by allowing the dynamical part of the controller to be continuously computed by the sensors, we successfully obtained a decentralized event-triggered implementation. However, the implementation of general dynamic controllers in event-triggered form, centralized or not, remains a question for future research. The design of more efficient adaptation rules is another interesting question to investigate further. Finally, we would like to emphasize the low computational requirements of the proposed implementation, which makes it suitable for sensor/actuator networks with limited computation capabilities at the sensor level.


  • [1] WirelessHART. [Online]. Available:
  • [2] K.-E. Årzén, A. Bicchi, S. Hailes, K. Johansson, and J. Lygeros, “On the design and control of wireless networked embedded systems,” in 2006 IEEE Computer Aided Control System Design,, Oct. 2006, pp. 440 –445.
  • [3] P. Antsaklis and J. Baillieul, “Special issue on technology of networked control systems,” Proceedings of the IEEE, vol. 95, no. 1, pp. 5 –8, Jan. 2007.
  • [4] G. Franklin, “Rational rate [ask the experts],” Control Systems Magazine, IEEE, vol. 27, no. 4, pp. 19 –19, Aug. 2007.
  • [5] G. Goodwin, R. Middleton, and H. Poor, “High-speed digital signal processing and control,” Proceedings of the IEEE, vol. 80, no. 2, pp. 240 –259, Feb. 1992.
  • [6] G. Goodwin, S. Graebe, and M. Salgado, Control System Design.   Prentice Hall, 2001.
  • [7] C. Houpis and G. B. Lamont, Digital Control Systems.   McGraw-Hill Higher Education, 1984.
  • [8] D. Nesić and A. Teel, “Sampled-data control of nonlinear systems: An overview of recent results,” in Perspectives in robust control, ser. Lecture Notes in Control and Information Sciences, S. Moheimani, Ed.   Springer Berlin / Heidelberg, 2001, vol. 268, pp. 221–239.
  • [9] D. Nesić, A. Teel, and D. Carnevale, “Explicit computation of the sampling period in emulation of controllers for nonlinear sampled-data systems,” IEEE Transactions on Automatic Control, vol. 54, no. 3, pp. 619–624, Mar. 2009.
  • [10] K.-E. Årzén, “A simple event based PID controller,” in Proceedings of 14th IFAC World Congress, vol. 18, 1999, pp. 423–428.
  • [11] K. Åström and B. Bernhardsson, “Comparison of Riemann and Lebesgue sampling for first order stochastic systems,” in Proceedings of the 41st IEEE Conference on Decision and Control, vol. 2, Dec. 2002, pp. 2011–2016.
  • [12] W. Heemels, J. Sandee, and P. van den Bosch, “Analysis of event-driven controllers for linear systems,” International Journal of Control, vol. 81, no. 4, pp. 571–590, 2008.
  • [13] A. Cervin and T. Henningsson, “Scheduling of event-triggered controllers on a shared network,” in 47th IEEE Conference on Decision and Control, Dec. 2008, pp. 3601 –3606.
  • [14] M. Rabi and K. H. Johansson, “Event-triggered strategies for industrial control over wireless networks,” in WICON, 2008.
  • [15] M. Rabi, K. H. Johansson, and M. Johansson, “Optimal stopping for event-triggered sensing and actuation,” in 47th IEEE Conference on Decision and Control., Dec. 2008, pp. 3607 –3612.
  • [16] A. Molin and S. Hirche, “Optimal event-triggered control under costly observations,” in Proceedings of the 19th International Symposium on Mathematical Theory of Networks and Systems, 2010.
  • [17] J. Lunze and D. Lehmann, “A state-feedback approach to event-based control,” Automatica, vol. 46, no. 1, pp. 211 – 215, 2010.
  • [18] P. Tabuada, “Event-triggered real-time scheduling of stabilizing control tasks,” IEEE Transactions on Automatic Control, vol. 52, no. 9, pp. 1680 –1685, Sept. 2007.
  • [19] M. Mazo Jr. and P. Tabuada, “On event-triggered and self-triggered control over sensor/actuator networks,” in Proceedings of the 47th IEEE Conference on Decision and Control, 2008., Dec. 2008, pp. 435 –440.
  • [20] A. Anta and P. Tabuada, “To sample or not to sample: Self-triggered control for nonlinear systems,” IEEE Transactions on Automatic Control, vol. 55, pp. 2030–2042, Sep. 2010.
  • [21] M. Mazo Jr., A. Anta, and P. Tabuada, “An ISS self-triggered implementation of linear controllers,” Automatica, vol. 46, pp. 1310–1314, Aug. 2010.
  • [22] X. Wang and M. D. Lemmon, “Event-triggering in distributed networked systems with data dropouts and delays,” in Proceedings of the 12th International Conference on Hybrid Systems: Computation and Control, Apr. 2009, pp. 366–380.
  • [23] H. Zhang, P. Soldati, and M. Johansson, “Optimal link scheduling and channel assignment for convergecast in linear WirelessHART networks,” in Proceedings of the 7th International Conference on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks, 2009, pp. 82–89.
  • [24] E. D. Sontag, “Input to state stability: Basic concepts and results,” in Nonlinear and Optimal Control Theory.   Springer, 2006, pp. 163–220.
  • [25] J. Johnsen and F. Allgöwer, “Interconnection and damping assignment passivity-based control of a four-tank system,” Lagrangian and Hamiltonian Methods for Nonlinear Control 2006, pp. 111–122, 2007.