Multi-Rate Control over AWGN Channels via
Analog Joint Source–Channel Coding
We consider the problem of controlling an unstable plant over an additive white Gaussian noise (AWGN) channel with a transmit power constraint, where the signaling rate of communication is larger than the sampling rate (for generating observations and applying control inputs) of the underlying plant. Such a situation is quite common since sampling is done at a rate that captures the dynamics of the plant and which is often much lower than the rate that can be communicated. This setting offers the opportunity of improving the system performance by employing multiple channel uses to convey a single message (output plant observation or control input). Common ways of doing so are through either repeating the message, or by quantizing it to a number of bits and then transmitting a channel coded version of the bits whose length is commensurate with the number of channel uses per sampled message. We argue that such “separated source and channel coding” can be suboptimal and propose to perform joint source–channel coding. Since the block length is short we obviate the need to go to the digital domain altogether and instead consider analog joint source–channel coding. For the case where the communication signaling rate is twice the sampling rate, we employ the Archimedean bi-spiral-based Shannon–Kotel’nikov analog maps to show significant improvement in stability margins and linear-quadratic Gaussian (LQG) costs over simple schemes that employ repetition.
Networked control, Gaussian channel, joint source–channel coding.
Networked control systems, especially those for which the links connecting the different components of the system (plant, observer, and controller, say) are noisy, are increasingly finding applications and, as a result, have been the subject of intense recent investigation [1, 2, 3]. In many of these applications, the rate at which the output of the plant is sampled and observed, as well as the rate at which control inputs are applied to the plant, differs from the signaling rate at which communication occurs. We shall henceforth call such systems multi-rate networked control systems. The rate at which the plant is sampled and controlled is often governed by how fast the plant dynamics are, whereas the signaling rate of the communication depends on the available bandwidth, the noise levels, etc. As a result, there is no inherent reason why these two rates should be related and, in fact, the communication rate is almost always higher than the sampling rate.
This latter fact gives us the opportunity to improve the performance of the system by conveying information about each sampled output of the plant, and/or each control signal, through multiple uses of the communication channel. An obvious strategy is to simply repeat the transmitted signal (so-called repetition coding). In analog communication this simply adds a linear factor to the SNR (3 dB for a single repetition); in digital communication over a memoryless packet-erasure link, say, it reduces the probability of packet loss exponentially in the number of retransmissions. A more sophisticated solution is to first quantize the analog message (the sampled output or the control signal) and then protect the quantized bits with an error-correcting channel code whose block length is commensurate with the number of channel uses available per sample. A yet more sophisticated solution is to use a tree code, which collectively encodes the quantized bits in a causal fashion over all channel uses [4, 5, 6].
The latter two solutions implicitly assume what is called the “separation between source and channel coding”, i.e., that quantization of the messages and channel coding of the quantized bits (using either a block code or a tree code) can be done independently of one another. While this is asymptotically true in communication systems (where it is a celebrated result), it is not true for control systems, where the overall objective is to minimize a linear-quadratic Gaussian (LQG) cost. To minimize an LQG cost, what is needed is joint source–channel coding (JSCC). Unfortunately, in its full generality, this is known to be a notoriously difficult problem, and so it has rarely been attempted (especially in a control context). Nonetheless, this is what we shall attempt in this paper.
We assume the communication links are AWGN (additive white Gaussian noise) channels with a certain signal-to-noise ratio (SNR). As we show below, this SNR puts an upper limit on the magnitude of the largest unstable eigenvalue of the plant that can be stabilized. We further assume that the signaling rate of the communication channel is not much larger than the sampling rate of the plant, say only a factor of 2 to 10 larger. Thus, if one sets aside the (daunting) task of performing coding over multiple messages (a la tree codes), then one is left with constructing a joint source–channel code of relatively short length, something that could very well be feasible. In particular, since both the message and the transmitted signals are analog, in this short-block regime it is not even clear whether it is necessary to go through a digitization process. Thus, we shall focus on analog JSCC, originally proposed by Shannon [8] and Kotel’nikov [9], which can simply be viewed as an appropriately chosen nonlinear mapping from the analog message to the analog transmitted signal(s).
Finally, we should mention that we view this work as a first step and the results as preliminary. Nonetheless, these already indicate that one can obtain substantial gains (in the LQG cost) over simple schemes, such as repetition, by using the ideas mentioned above. The design of more sophisticated JSCC schemes, as well as a comprehensive comparison of different schemes will be deferred to future work.
II Problem Setup
We now formulate the control–communication setting that will be treated in this work, depicted also in Fig. 1. We concentrate on the simple case of a scalar, fully observable state and a scalar AWGN channel. The model and solutions can be extended to the more complex cases of vector states and multi-antenna channels.
Consider the scalar system with the plant evolution
$x_{t+1} = \alpha x_t + w_t + u_t$,   (1)
where $x_t$ is the (scalar) state at time $t$, $w_t$ is an AWGN of power $W$, $\alpha$ is a known scalar, and $u_t$ is the control signal. Assume further that the initial state $x_1$ is Gaussian with power $W$.
The measured output $y_t$ is equal to the state corrupted by noise:
$y_t = x_t + n_t$,   (2)
where $n_t$ is an AWGN of power $N$.
In contrast to classical control settings, the observer and the controller are not co-located, and are instead connected via an AWGN channel
$b_t = a_t + z_t$,   (3)
where $b_t$ is the channel output, $a_t$ is the channel input subject to a unit power constraint, $\mathbb{E}[a_t^2] \le 1$, and $z_t$ is an AWGN of power $1/\mathrm{SNR}$. (This representation is w.l.o.g., since the case of an average input power $P$ and noise power $\sigma_z^2$ can always be transformed into an equivalent channel with unit average power and noise power $\sigma_z^2 / P = 1/\mathrm{SNR}$ by multiplying both sides of (3) by $1/\sqrt{P}$.)
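The unit-power normalization above can be checked numerically. The sketch below (with arbitrary illustrative values for the input power and noise power) scales both sides of the channel equation and confirms that the scaled input has unit power while the scaled noise has power equal to the reciprocal of the SNR:

```python
import numpy as np

rng = np.random.default_rng(0)
P, sigma2 = 4.0, 0.5                         # illustrative input power and noise power
a = rng.normal(0.0, np.sqrt(P), 100_000)     # channel input of power P
z = rng.normal(0.0, np.sqrt(sigma2), a.size)
b = a + z                                    # original channel b = a + z

# Scale both sides by 1/sqrt(P): unit-power input, noise power sigma2/P = 1/SNR
a_n, b_n = a / np.sqrt(P), b / np.sqrt(P)
print(np.var(a_n))          # ~1 (unit power constraint)
print(np.var(b_n - a_n))    # ~sigma2 / P
```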
In this work we further assume that the observer knows all past control signals $u_1, \ldots, u_{t-1}$; for a discussion of the case when such information is not available at the observer, see Section V.
Similarly to classical LQG control (in which the controller and the observer are co-located), we wish to minimize the average stage LQG cost after the total number of observed samples $T$:
$\bar{J}_T = \frac{1}{T}\, \mathbb{E}\!\left[ \sum_{t=1}^{T} \left( Q x_t^2 + R u_t^2 \right) + F x_{T+1}^2 \right]$,   (4)
for some non-negative constants $Q$, $R$, and $F$, by designing appropriate operations at the observer [which also plays the role of the transmitter over (3)] and the controller [which also serves as the receiver of (3)]. The infinite-horizon cost is defined as
$\bar{J}_\infty = \limsup_{T \to \infty} \bar{J}_T$.   (5)
To that end, we recall next known results from information theory for joint source–channel coding design with low delay.
III Low-Delay Joint Source–Channel Coding
In this section, we review known results from information theory and communications for transmitting an i.i.d. zero-mean Gaussian source $s$ of power $\sigma_s^2$ over the AWGN channel (3).
The number of source samples generated per time instant is not necessarily equal to the number of channel uses of (3) over the same time. In general, consider the case where $K$ channel uses of (3) are available for every $M$ source samples of $s$.
The goal of the transmitter is to convey the source to the receiver with the minimal possible average distortion, where the appropriate distortion measure for our case of interest is the mean-square error distortion.
To that end, the transmitter applies a mapping $f: \mathbb{R}^M \to \mathbb{R}^K$ that transforms every $M$ source samples $\boldsymbol{s} = (s_1, \ldots, s_M)$ into $K$ channel inputs:
$\boldsymbol{a} = f(\boldsymbol{s})$,   (6)
such that the input power constraint is satisfied:
$\frac{1}{K}\, \mathbb{E}\!\left[ \|\boldsymbol{a}\|^2 \right] \le 1$.   (7)
The receiver, upon receiving the $K$ channel outputs $\boldsymbol{b}$ of (3) corresponding to the transmitted channel inputs $\boldsymbol{a}$, applies a mapping $g: \mathbb{R}^K \to \mathbb{R}^M$ to these measured outputs to recover estimates of the source samples:
$\hat{\boldsymbol{s}} = g(\boldsymbol{b})$.   (8)
The resulting average distortion of this scheme is
$D = \frac{1}{M}\, \mathbb{E}\!\left[ \|\boldsymbol{s} - \hat{\boldsymbol{s}}\|^2 \right]$,   (9)
and the corresponding (source) signal-to-distortion ratio (SDR) is defined as
$\mathrm{SDR} = \sigma_s^2 / D$.   (10)
Our results here are more easily presented in terms of unbiased errors, as these can be regarded as uncorrelated additive noise in the sequel (when used as part of the developed control scheme). Therefore, we consider the use of (sample-wise) correlation-sense unbiased estimators (CUBE), namely, estimators whose errors are uncorrelated with the source:
$\mathbb{E}\!\left[ (\hat{s}_i - s_i)\, s_i \right] = 0$.   (11)
We note that any estimator $\tilde{s}_i$ can be transformed into a CUBE by multiplying it by a suitable constant:
$\hat{s}_i = \frac{\mathbb{E}[s_i^2]}{\mathbb{E}[s_i \tilde{s}_i]}\, \tilde{s}_i$;   (12)
for a further discussion of such estimators and their use in communications, the reader is referred to [10].
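The CUBE transformation can be illustrated with a short sketch. The toy setup below (a unit-variance Gaussian source observed through a single AWGN channel use at an assumed SNR of 4) rescales the biased MMSE estimate by the constant above, and checks that the resulting error is indeed uncorrelated with the source:

```python
import numpy as np

rng = np.random.default_rng(1)
snr = 4.0                                    # illustrative SNR
s = rng.normal(0.0, 1.0, 200_000)            # unit-power Gaussian source
b = s + rng.normal(0.0, np.sqrt(1.0 / snr), s.size)

s_mmse = (snr / (1.0 + snr)) * b             # MMSE estimate (biased toward zero)
c = np.mean(s * s) / np.mean(s * s_mmse)     # CUBE correction constant (12)
s_cube = c * s_mmse

err = s_cube - s
print(np.mean(err * s))                      # ~0: error uncorrelated with source
print(np.mean(s * s) / np.mean(err ** 2))    # unbiased SDR, ~SNR for this scalar channel
```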
Shannon’s celebrated result [11] states that the minimal achievable distortion, using any transmitter–receiver scheme, is dictated, in the case of a Gaussian source, by
$M\, R(D) \le K\, C$,   (13)
where $R(D) = \frac{1}{2} \log\left( 1 + \sigma_s^2 / D \right)$ is the rate–distortion function of the source (written here in terms of the unbiased SDR, in contrast to the more common biased-SDR expression $\frac{1}{2} \log\left( \sigma_s^2 / D \right)$) and $C = \frac{1}{2} \log\left( 1 + \mathrm{SNR} \right)$ is the channel capacity. Thus, the optimal SDR, commonly referred to as the optimum performance theoretically achievable (OPTA) SDR, is given by
$\mathrm{SDR}_{\mathrm{OPTA}} = (1 + \mathrm{SNR})^{K/M} - 1$.   (14)
Shannon’s proof (for the more general case of a not-necessarily-Gaussian source or channel) is based upon the separation principle, according to which the source samples are partitioned into blocks and quantized together, resulting in (approximately) uniform independent bits. These bits are then partitioned again into blocks and encoded together to form the channel inputs. At the receiver, first the coded bits are recovered, followed by the reconstruction of the source samples from these bits.
However, this compression–coding separation-based technique is optimal only in the limit where the blocklengths $M$ and $K$ grow to infinity for a fixed ratio $K/M$ between the two, which implies, in turn, very large delays.
For finite blocklengths, (14) cannot be exactly attained, except in specific cases in which the source and the distortion measure are probabilistically matched to the channel [12], and strictly tighter outer bounds on the distortion can be derived [13, 14, 15]. One eminent case where such a matching occurs is that of a Gaussian source and a Gaussian channel with a matching number of samples/uses, $M = K$. In this case, sending each source sample as is, up to a possible power adjustment, proves optimal and achieves (14) for $M = K = 1$ (and hence also for any other positive integer $M = K$). Unfortunately, this breaks down when $K \ne M$, which consequently led to the proposal and study of various techniques for low-delay JSCC. (The term JSCC is somewhat misleading, as many of these schemes make no use of digital components, let alone coding; this includes the Shannon–Kotel’nikov (SK) maps, which are described in detail and used in the sequel.)
We next concentrate on the simple case of $M = 1$ and $K = 2$. That is, the case in which one source sample is conveyed over two channel uses.
A naïve approach is to send the source as is over both channel uses, up to a power adjustment. The corresponding unbiased SDR in this case is
$\mathrm{SDR} = 2\, \mathrm{SNR}$,   (15)
a linear improvement rather than an exponential one as in (14). This scheme approaches (14) for very low SNRs, but suffers great losses at high SNRs. We note that the linear factor of 2 comes from the fact that the total power available over the two channel uses has doubled; the same performance can be attained by allocating all of the available power to the first channel use and remaining silent during the second channel use.
This suggests that better mappings, which truly exploit the extra channel use, can be constructed. The first to propose an improvement for the 1:2 case were Shannon [8] and Kotel’nikov [9], in the late 1940s. In their works, the source sample is viewed as a point on a one-dimensional line, whereas the two channel uses correspond to a two-dimensional space. In these terms, the linear scheme corresponds to mapping the one-dimensional source line to a straight line in the two-dimensional channel space (see Fig. 2), and hence clearly cannot provide any improvement (since AWGN is invariant to rotations). However, by mapping the one-dimensional source line into a two-dimensional curve that better fills the space, a great boost in performance can be attained. Specifically, consider the Archimedean bi-spiral, which was considered in several works [17, 18, 19, 20] (depicted in Fig. 2):
$\boldsymbol{a}(s) = c \left( |s| \cos(\omega |s| + \phi_s),\ |s| \sin(\omega |s| + \phi_s) \right)$, with $\phi_s = 0$ for $s \ge 0$ and $\phi_s = \pi$ for $s < 0$,   (16)
where $\omega$ determines the rotation frequency, the factor $c$ is chosen to satisfy the power constraint, and the phase term $\phi_s$ is needed to avoid overlap of the curve for positive and negative values of $s$ (to each of which now corresponds a distinct spiral, the two meeting only at the origin). This spiral effectively improves the resolution w.r.t. small noise values, since the one-dimensional source line is effectively stretched relative to the noise, and hence the noise magnitude shrinks when the source curve is mapped (contracted) back. However, for large noise values, a jump to a different branch, referred to as a threshold effect, may occur, incurring a large distortion. Thus, $\omega$ needs to be chosen as large as possible, to allow maximal stretching of the curve for the same given power, while maintaining a low threshold-event probability. The SDRs for different values of $\omega$ are depicted in Fig. 3(a).
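The stretching gain of the bi-spiral can be illustrated with a short simulation. The sketch below uses one particular parametrization of the bi-spiral, consistent with the description above though the exact map, the choice $\omega = 4$, and the 20 dB SNR are illustrative rather than the optimized values, together with approximate ML decoding via a dense grid search over the curve; the measured SDR noticeably exceeds the $2\,\mathrm{SNR}$ of repetition:

```python
import numpy as np

def spiral_map(s, omega, c):
    """Map source samples onto an Archimedean bi-spiral (two channel uses).

    The negative half-line is mapped to the point-reflection of the
    positive-branch spiral, so the two branches meet only at the origin.
    """
    s = np.asarray(s, dtype=float)
    r = c * np.abs(s)
    return np.stack([np.sign(s) * r * np.cos(omega * np.abs(s)),
                     np.sign(s) * r * np.sin(omega * np.abs(s))])

rng = np.random.default_rng(2)
omega = 4.0                    # rotation frequency (illustrative design value)
c = np.sqrt(2.0)               # E[c^2 s^2] = 2: unit power per channel use
snr = 100.0                    # 20 dB

s = rng.normal(0.0, 1.0, 2_000)
a = spiral_map(s, omega, c)                                  # 2 x n channel inputs
b = a + rng.normal(0.0, np.sqrt(1.0 / snr), a.shape)

# Approximate ML decoding: nearest point on a finely sampled curve
grid = np.linspace(-5.0, 5.0, 20_001)
g = spiral_map(grid, omega, c)
s_hat = np.array([grid[np.argmin((b[0, i] - g[0]) ** 2 + (b[1, i] - g[1]) ** 2)]
                  for i in range(s.size)])

sdr = np.mean(s ** 2) / np.mean((s - s_hat) ** 2)
print(10 * np.log10(sdr), "dB; repetition gives", 10 * np.log10(2 * snr), "dB")
```

Increasing `omega` further stretches the curve and raises the small-noise SDR, but shrinks the spacing between branches until threshold events start dominating the distortion, which is the trade-off described above.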
Another ingredient that is used in conjunction with (16) is stretching $s$ prior to mapping it to a bi-spiral using (16):
$\tilde{s} = \varphi(s)$,   (17)
for a suitable invertible stretching function $\varphi$. A proper choice of $\varphi$ promises a great boost in performance in the region of high SNRs, as is seen in Fig. 3(b). We further note that although the optimal decoder is the minimum mean square error (MMSE) estimator $\hat{s} = \mathbb{E}[s \mid \boldsymbol{b}]$, in this case the maximum-likelihood (ML) decoder achieves similar performance for moderate and high SNRs. A joint optimization of $\omega$ and $\varphi$ for each SNR, for both ML and MMSE decoding, was carried out in [19] and is depicted in Fig. 3.
A desirable property of linear JSCC schemes is that their SDR improves proportionally with the channel SNR (“SNR universality”). Such an improvement is not afforded by the separation-based technique, which fails when the actual SNR is lower than the design SNR, and does not promise any improvement for SNRs above it. This has motivated much work on designing JSCC schemes whose performance improves with the SNR, even for the case of large blocklengths [21, 22, 23]. The schemes in these works achieve the optimal performance (14) at a specific design SNR, and improve linearly for higher SNRs. Similar behavior is observed in Fig. 3, where the optimal value of $\omega$ varies with the (design) SNR and closely mimics the quadratic growth in the SDR. Above the design SNR, linear growth is achieved for a particular choice of $\omega$.
We further note that the distortion incurred when a threshold event happens grows with the source magnitude. To avoid this behavior, instead of increasing the magnitude proportionally to the phase as in (17), we increase the magnitude slightly faster, at a pace that guarantees that the incurred distortion does not grow with the source magnitude; denote this modified mapping by (18). This has only a slight effect on the resulting SDRs, as is illustrated in Fig. 3.
Finally, we note that in no way do we claim that the spiral-based Shannon–Kotel’nikov (SK) scheme is optimal. Various other techniques exist, most of which use a hybrid of digital and analog components [24, 25, 26] and outperform the spiral-based scheme for various parameters. Nevertheless, this scheme was the earliest technique to be considered, and it provides performance boosts that suffice for our demonstration.
IV Control via Low-Delay JSCC
In this section we construct a Kalman-filter-like solution by employing JSCC schemes. We note that the additional complication here is due to the communication channel (3) and its inherent input power constraint.
Denote by $\hat{x}^{\mathrm{r}}_{t|t'}$ the estimate of $x_t$ at the receiver given $b_1, \ldots, b_{t'}$, where ‘r’ stands for ‘receiver’, and by $\hat{x}^{\mathrm{t}}_{t|t'}$ the estimate of $x_t$ given $y_1, \ldots, y_{t'}$, where ‘t’ stands for ‘transmitter’. Denote further their mean square errors (MSEs) by $P^{\mathrm{r}}_{t|t'} = \mathbb{E}\big[ (x_t - \hat{x}^{\mathrm{r}}_{t|t'})^2 \big]$ and $P^{\mathrm{t}}_{t|t'} = \mathbb{E}\big[ (x_t - \hat{x}^{\mathrm{t}}_{t|t'})^2 \big]$.
Then, the scheme works as follows. At time instant $t$, the controller constructs an estimate $\hat{x}^{\mathrm{r}}_{t|t}$ of $x_t$. It then applies the control signal $u_t = -k_t\, \hat{x}^{\mathrm{r}}_{t|t}$ to the plant, for a pre-determined gain $k_t$. Note that, since both the controller and the observer know the previously applied control signals $u_1, \ldots, u_{t-1}$, they both also know the receiver-side estimates $\hat{x}^{\mathrm{r}}_{t-1|t-1}$ and $\hat{x}^{\mathrm{r}}_{t|t-1}$.
Hence, in order to describe $x_t$, the observer aims to convey its best estimate of the state, $\hat{x}^{\mathrm{t}}_{t|t}$. To that end, it can save transmit power by transmitting the error signal $\hat{x}^{\mathrm{t}}_{t|t} - \hat{x}^{\mathrm{r}}_{t|t-1}$ instead of $\hat{x}^{\mathrm{t}}_{t|t}$ itself. The controller can then add $\hat{x}^{\mathrm{r}}_{t|t-1}$ back to the received signal to construct $\hat{x}^{\mathrm{r}}_{t|t}$.
Note that even in the case of a fully observable state, i.e., when $N = 0$, the state is corrupted by the channel noise $z_t$ when conveyed over the AWGN channel (3) to the controller. The performance of the transmission and estimation processes applied by the observer and the controller, respectively, determines, in turn, the total effective observation noise.
The general scheme used throughout this work is detailed below.
Observer/Transmitter: At time $t$:
Generates the desired error signal
$e_t = \hat{x}^{\mathrm{t}}_{t|t} - \hat{x}^{\mathrm{r}}_{t|t-1}$   (19)
of average power $P_{e;t}$ (determined in the sequel).
Since the channel input is subject to a unit power constraint, $e_t$ is normalized:
$s_t = e_t / \sqrt{P_{e;t}}$.   (20)
Constructs $K$ channel inputs corresponding to $s_t$, using a bounded-distortion JSCC scheme of choice of rate ratio $K : 1$, with (maximal, given any input) average distortion $D$ for the given channel SNR:
$\boldsymbol{a}_t = f(s_t)$.   (21)
Sends the channel inputs $\boldsymbol{a}_t$ over the channel (3).
Controller/Receiver: At time $t$:
Receives the $K$ channel outputs $\boldsymbol{b}_t$ corresponding to time sample $t$.
Recovers a CUBE of the source signal $s_t$:
$\hat{s}_t = g(\boldsymbol{b}_t) = s_t + z_{\mathrm{eff};t}$,   (22)
where $z_{\mathrm{eff};t}$ is an additive noise of power (at most) $D$.
Unnormalizes $\hat{s}_t$ to construct an estimate of $e_t$:
$\hat{e}_t = \sqrt{P_{e;t}}\, \hat{s}_t = e_t + \sqrt{P_{e;t}}\, z_{\mathrm{eff};t}$.   (23)
Constructs an estimate of $x_t$ from all received channel outputs until and including time $t$. Since $\hat{e}_t = e_t + \sqrt{P_{e;t}}\, z_{\mathrm{eff};t}$ with $z_{\mathrm{eff};t}$ uncorrelated with $e_t$, the linear MMSE estimate amounts to
$\hat{x}^{\mathrm{r}}_{t|t} = \hat{x}^{\mathrm{r}}_{t|t-1} + \frac{1}{1 + D}\, \hat{e}_t$,   (24)
with an MSE of $P^{\mathrm{r}}_{t|t}$. (If the resulting effective noise is not an AWGN whose power is independent of the channel input, then a better estimator than that in (24) may be constructed.)
Generates the control signal (the gain $k_t$ is given next):
$u_t = -k_t\, \hat{x}^{\mathrm{r}}_{t|t}$,   (26)
and the receiver prediction of the next system state:
$\hat{x}^{\mathrm{r}}_{t+1|t} = \alpha\, \hat{x}^{\mathrm{r}}_{t|t} + u_t$.   (27)
The control (LQG) signal gain $k_t$ is given by the standard LQR backward recursion (see, e.g., [27]).
The transmitter-side estimates $\hat{x}^{\mathrm{t}}_{t|t}$ can be generated via Kalman filtering (see, e.g., [27]), where the Kalman filter coefficients are generated via the standard recursion.
Theorem 1 (Achievable)
where $\bar{J}^{\mathrm{o}}$ is the average stage cost achievable at the observer (alternatively, this is the cost in the corresponding limit), and $P^{\mathrm{r}}$ and $P^{\mathrm{t}}$ are the infinite-horizon values of $P^{\mathrm{r}}_{t|t}$ and $P^{\mathrm{t}}_{t|t}$, respectively, given as the positive solutions of the corresponding fixed-point equations.
The following theorem is an adaptation of the lower bound in [28] to our setting of interest.
Theorem 2 (Lower bound)
It is interesting to note that in this case, in stark contrast to the classical LQG setting, in which the system is stabilizable for any values of the system parameters, low values of the SDR render the system unstable. Hence, the result provides, among other things, the minimal required transmit power for the system to remain stable. The difference from the classical LQG case stems from the additional input power constraint, which effectively couples the power of the observation noise with that of the estimation error; this was previously observed in, e.g., [29, 30, 28, 7] for the fully observed setting.
IV-A Source–Channel Rate Match
In this subsection we treat the case of $M = K = 1$, namely, where the sample rate of the control system and the signaling rate of the communication channel match.
As we saw in Section III, analog linear transmission of a Gaussian source over an AWGN channel achieves the optimal performance (even when infinite delay is allowed), namely the OPTA SDR (14), for any input value. Thus, the JSCC scheme that we use in this case is linear transmission: the source is transmitted as is, up to a power adjustment [recall (20) and (21)].
The optimal average stage LQG cost is illustrated in Fig. 4, where the time-normalized LQG cost is evaluated for a system with $\alpha = 2$ and two SNRs, 2 and 4. $\mathrm{SNR} = 4$ satisfies the stabilizability condition $1 + \mathrm{SNR} > \alpha^2$, whereas $\mathrm{SNR} = 2$ fails to do so. Unit LQG penalty coefficients and unit driving-noise and observation-noise powers are used.
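The stabilizability threshold can be reproduced with a short simulation of the rate-matched linear scheme. The sketch below is a simplified variant of the scheme in this section (noiseless observation, deadbeat rather than LQG control, and state cost only, all simplifying assumptions), tracking the innovation power analytically for the transmit normalization; with $\alpha = 2$ the average stage cost stays bounded for $\mathrm{SNR} = 4$ and diverges for $\mathrm{SNR} = 2$:

```python
import numpy as np

def simulate(alpha, snr, T=50_000, seed=0):
    """Scalar plant x_{t+1} = alpha*x_t + w_t + u_t controlled over an AWGN
    channel with linear (analog) transmission of the innovation.
    Simplified sketch: noiseless observation, deadbeat control, unit W.
    Returns the average stage state cost (1/T) * sum x_t^2."""
    rng = np.random.default_rng(seed)
    x, x_hat = 0.0, 0.0
    P = 1.0                          # innovation power E[(x_t - prediction)^2]
    cost = 0.0
    for _ in range(T):
        e = x - x_hat                # innovation, known to the transmitter
        a = e / np.sqrt(P)           # normalize to unit transmit power
        b = a + rng.normal(0.0, np.sqrt(1.0 / snr))
        e_hat = np.sqrt(P) * (snr / (1.0 + snr)) * b   # MMSE estimate
        x_est = x_hat + e_hat        # receiver's state estimate
        u = -alpha * x_est           # deadbeat control
        cost += x * x
        x = alpha * x + rng.normal() + u
        x_hat = alpha * x_est + u    # receiver prediction of the next state
        P = alpha ** 2 * P / (1.0 + snr) + 1.0         # innovation recursion
    return cost / T

print(simulate(2.0, 4.0))            # 1 + SNR > alpha^2: bounded cost
print(simulate(2.0, 2.0, T=200))     # 1 + SNR < alpha^2: cost blows up
```

The innovation-power recursion `P = alpha**2 * P / (1 + snr) + 1` converges exactly when $\alpha^2 / (1 + \mathrm{SNR}) < 1$, which is the stabilizability condition $1 + \mathrm{SNR} > \alpha^2$ quoted above.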
IV-B Source–Channel Rate Mismatch
We now consider the case of $K = 2$ channel uses per sample. As we saw in Section III, linear schemes are suboptimal outside the low-SNR region. Instead, by using non-linear maps, e.g., the (modified) Archimedean-spiral-based SK maps (18), better performance can be achieved.
We note that the improvement in the SDR of the JSCC scheme is most substantial when the SDR is of the order of the stability threshold. That is, when the SDR of the linear scheme is close to the minimal SDR required for stability, using an improved scheme with a better SDR improves the LQG cost substantially. Unfortunately, the spiral-based SK schemes do not promise any improvement for SNRs below 5 dB under maximum-likelihood (ML) decoding.
By replacing the ML decoder with an MMSE one, strictly better performance can be achieved over the linear scheme for all SNR values.
The resulting effective noise at the output of the JSCC receiver is not necessarily Gaussian, and hence the resulting system states are not necessarily Gaussian either. Nevertheless, for the bounded-distortion scheme (18), this has no effect on the resulting performance.
V Discussion and Future Research
In this paper we considered the simplest case of scalar systems, with $M = 1$ and $K = 2$. Clearly, an (exponentially) larger gain in performance can be achieved for $K > 2$.
Interestingly, for the case of vector observations, states, and control signals, even if the signaling rate of the channel and the sample rate of the observer are equal (the rate-matched case), conveying several analog observations over a single channel input may be of the essence. This is achieved by a compressing JSCC scheme, e.g., by reversing the roles of the source and the channel inputs in the SK spiral-based scheme, and similarly promises gains that grow exponentially with the SNR and the dimension; see [8, 9, 18, 19, 17, 26, 31].
In this work, we assumed that the observer knows all past control signals. This case can be viewed as a two-sided side-information scenario. Although this is a common situation in practice, there are scenarios in which the observer is oblivious of the applied control signal or has only a noisy measurement of the control signal generated by the controller. Such settings can be regarded as a JSCC problem with side information at the receiver (only), and can be treated using JSCC techniques designed for this case, some of which combine naturally with the JSCC schemes for rate mismatch [32, 25, 26]. In fact, this idea was recently applied to the related problem of communication over an AWGN channel with AWGN feedback in [33].
Finally, note that for the case of bounded noise (even worst-case/arbitrary bounded noise), using Hilbert space-filling curves can provide a desirable solution, one that extends to arbitrary rate ratios $K : M$.
-  J. P. Hespanha, P. Naghshtabrizi, and Y. Xu, “A survey of recent results in networked control systems,” Proc. IEEE, vol. 95, no. 1, pp. 138–162, Jan. 2007.
-  V. Gupta, A. F. Dana, J. P. Hespanha, R. M. Murray, and B. Hassibi, “Data transmission over networks for estimation and control,” IEEE Trans. Auto. Control, vol. 54, no. 8, pp. 1807–1819, Aug. 2009.
-  L. Schenato, B. Sinopoli, M. Franceschetti, K. Poolla, and S. S. Sastry, “Foundations of control and estimation over lossy networks,” Proc. IEEE, vol. 95, no. 1, pp. 163–187, Jan. 2007.
-  A. Sahai and S. K. Mitter, “The necessity and sufficiency of anytime capacity for stabilization of a linear system over a noisy communication link—part I: Scalar systems,” IEEE Trans. Inf. Theory, vol. 52, no. 8, pp. 3369–3395, Aug. 2006.
-  R. T. Sukhavasi and B. Hassibi, “Error correcting codes for distributed control,” IEEE Trans. Auto. Control, accepted, Jan. 2016.
-  A. Khina, W. Halbawi, and B. Hassibi, “(Almost) practical tree codes,” in Proc. IEEE Int. Symp. on Inf. Theory (ISIT), Barcelona, Spain, July 2016.
-  S. Tatikonda, A. Sahai, and S. K. Mitter, “Stochastic linear control over a communication channel,” IEEE Trans. Auto. Control, vol. 49, no. 9, pp. 1549–1561, Sep. 2004.
-  C. E. Shannon, “Communication in the presence of noise,” Proc. IRE, vol. 37, no. 1, pp. 10–21, Jan. 1949.
-  V. A. Kotel’nikov, The Theory of Optimum Noise Immunity. New York: McGraw-Hill, 1959.
-  Y. Kochman, A. Khina, U. Erez, and R. Zamir, “Rematch-and-forward: Joint source/channel coding for parallel relaying with spectral mismatch,” IEEE Trans. Inf. Theory, vol. 60, no. 1, pp. 605–622, 2014.
-  C. E. Shannon, “A mathematical theory of communication,” Bell Sys. Tech. Jour., vol. 27, pp. 379–423, July 1948.
-  M. Gastpar, B. Rimoldi, and M. Vetterli, “To code, or not to code: Lossy source–channel communication revisited,” IEEE Trans. Inf. Theory, vol. 49, no. 5, pp. 1147–1158, May 2003.
-  J. Ziv and M. Zakai, “On functionals satisfying a data-processing theorem,” IEEE Trans. Inf. Theory, vol. 19, no. 3, pp. 275–283, 1973.
-  A. Ingber, I. Leibowitz, R. Zamir, and M. Feder, “Distortion lower bounds for finite dimensional joint source–channel coding,” in Proc. IEEE Int. Symp. on Inf. Theory (ISIT), Toronto, Canada, July 2008.
-  S. Tridenski, R. Zamir, and A. Ingber, “The Ziv–Zakai–Rényi bound for joint source–channel coding,” IEEE Trans. Inf. Theory, vol. 61, no. 8, pp. 4293–4315, Aug. 2015.
-  T. Goblick, “Theoretical limitations on the transmission of data from analog sources,” IEEE Trans. Inf. Theory, vol. 11, pp. 558–567, 1965.
-  S. Y. Chung, “On the construction of some capacity-approaching coding schemes,” Ph.D. dissertation, Dept. EECS, Massachusetts Institute of Technology, Cambridge, MA, USA, 2000.
-  F. Hekland, P. A. Floor, and T. A. Ramstad, “Shannon–Kotel’nikov mappings in joint source–channel coding,” IEEE Trans. Comm., vol. 57, no. 1, pp. 94–105, Jan. 2009.
-  Y. Hu, J. Garcia-Frias, and M. Lamarca, “Analog joint source–channel coding using non-linear curves and MMSE decoding,” IEEE Trans. Comm., vol. 59, no. 11, pp. 3016–3026, Nov. 2011.
-  I. Kvecher and D. Rephaeli, “An analog modulation using a spiral mapping,” in Proc. IEEE Conv. Electrical and Electron. Engineers in Israel (IEEEI), Eilat, Israel, Nov. 2006.
-  U. Mittal and N. Phamdo, “Hybrid digital-analog (HDA) joint source–channel codes for broadcasting and robust communications,” IEEE Trans. Inf. Theory, vol. 48, no. 5, pp. 1082–1102, May 2002.
-  Z. Reznic, M. Feder, and R. Zamir, “Distortion bounds for broadcasting with bandwidth expansion,” IEEE Trans. Inf. Theory, vol. 52, no. 8, pp. 3778–3788, Aug. 2006.
-  Y. Kochman and R. Zamir, “Analog matching of colored sources to colored channels,” IEEE Trans. Inf. Theory, vol. 57, no. 6, pp. 3180–3195, June 2011.
-  M. Kleiner and B. Rimoldi, “Asymptotically optimal joint source–channel coding with minimal delay,” in Proc. IEEE Globecom, Honolulu, HI, USA, Nov./Dec. 2009.
-  X. Chen and E. Tuncel, “Zero-delay joint source–channel coding using hybrid digital–analog schemes in the Wyner–Ziv setting,” IEEE Trans. Comm., vol. 62, no. 2, pp. 726–735, Feb. 2014.
-  E. Akyol, K. B. Viswanatha, K. Rose, and T. A. Ramstad, “On zero-delay source–channel coding,” IEEE Trans. Inf. Theory, vol. 60, no. 12, pp. 7473–7489, Dec. 2014.
-  D. P. Bertsekas, Dynamic Programming and Optimal Control, 2nd ed. Belmont, MA, USA: Athena Scientific, 2000, vol. I.
-  V. Kostina and B. Hassibi, “Rate–cost tradeoffs in control,” in Proc. Annual Allerton Conf. on Comm., Control, and Comput., Monticello, IL, USA, Sep. 2016.
-  J. S. Freudenberg, R. H. Middleton, and V. Solo, “Stabilization and disturbance attenuation over a Gaussian communication channel,” IEEE Trans. Auto. Control, vol. 55, no. 3, pp. 795–799, Mar. 2010.
-  J. S. Freudenberg, R. H. Middleton, and J. H. Braslavsky, “Stabilization and disturbance attenuation over a Gaussian communication channel,” in Proc. IEEE Conf. Decision and Control (CDC), New Orleans, LA, USA, Dec. 2007, pp. 3958–3963.
-  A. Ingber and M. Feder, “Power preserving 2:1 bandwidth reduction mappings,” in Proc. of the Data Comp. Conf., Snowbird, UT, USA, Mar. 1997.
-  Y. Kochman and R. Zamir, “Joint Wyner-Ziv/dirty-paper coding by modulo-lattice modulation,” IEEE Trans. Inf. Theory, vol. 55, pp. 4878–4899, Nov. 2009.
-  A. Ben-Yishai and O. Shayevitz, “The Gaussian channel with noisy feedback: Near-capacity performance via simple interaction,” in Proc. Annual Allerton Conf. on Comm., Control, and Comput., Monticello, IL, USA, Oct. 2014, pp. 152–159.