Protocol Coding through Reordering of User Resources, Part II: Practical Coding Strategies

Petar Popovski, Zoran Utkovski, and K. F. Trillingsgaard
Department of Electronic Systems, Aalborg University, Denmark
Institute of Information Technology, University of Ulm, Germany
Email: petarp@es.aau.dk, zoran.utkovski@uni-ulm.de
Abstract

We use the term protocol coding to denote communication strategies in which information is encoded through the actions taken by a certain communication protocol. In this work we investigate strategies for protocol coding via combinatorial ordering of the labelled user resources (packets, channels) in an existing, primary system. This introduces a new, secondary communication channel in the existing system, which has been considered in prior work exclusively in a steganographic context. Instead, we focus on the use of the secondary channel for reliable communication with newly introduced secondary devices, which are low-complexity versions of the primary devices, capable only of decoding the robustly encoded header information in the primary signals. In Part I of the work we have characterized the capacity of the secondary channel through information-theoretic analysis. In this paper we consider practical strategies for protocol coding inspired by that analysis. It turns out that the insights from Part I are instrumental for devising a superior design of error-control codes. This is demonstrated by comparing the error performance with a “naïve” strategy, which is presumably what would be used without the analysis of Part I. These results clearly outline both the conceptual novelty behind the discussed concept of a secondary channel and its practical applicability.

I Introduction

A way to introduce new features and define a new communication channel over an existing wireless system, without introducing hardware/physical-layer changes, is to encode information in the actions taken by the protocol of the existing (primary) communication system. We denote such a class of communication strategies by the term protocol coding. In this work we consider a particular type of protocol coding, in which information is encoded in the ordering of labelled resources (packets, channels) of the primary (legacy) users. For example, if in a given scheduling frame the primary system decides to send a certain number of packets to Alice and the remaining packets to Bob, then the secondary transmitter gets the right to encode additional information by rearranging these packets. Since the rearrangement can be done in a number of different ways, the secondary transmitter can send additional bits in that scheduling frame. In this example, a scheduling frame contains a fixed number of packets and the primary decides the state of the frame (how many packets go to Alice and to Bob, respectively). Each state allows a different number of possible rearrangements, and the problem is that the state is not controlled by the secondary, which means that the amount of information that the secondary can send is variable and unpredictable.
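To make the counting argument concrete, the following sketch (our own illustration, not part of the primary protocol) computes how many secondary bits a single frame can carry; the frame length and the number of packets addressed to Alice are hypothetical values chosen purely for illustration.

```python
from math import comb, log2

def secondary_bits_per_frame(frame_len: int, packets_to_alice: int) -> float:
    """Bits that can be encoded by rearranging the labelled packets of one frame.

    With two labels (Alice/Bob), a frame with a given number of packets for Alice
    admits comb(frame_len, packets_to_alice) distinct orderings, i.e.
    log2 of that many bits of secondary information.
    """
    return log2(comb(frame_len, packets_to_alice))

# Hypothetical example: a frame of 4 packets, 2 of them addressed to Alice.
print(secondary_bits_per_frame(4, 2))   # log2(6), roughly 2.58 bits
```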

The restriction that stems from the operation of the primary is the key feature of the communication model. In Part I [1] we have elaborated on the capacity of the model by using a framework in which the secondary communication channel is represented as a cascade of channels. Such a framework is an alternative to the rather standard representation using Shannon's model of channels with causal channel state information at the transmitter (CSIT), but it is very potent in our case, as it has allowed us to compute the secondary capacity under quite general error-model assumptions. The capacity calculation has led to the concept of a multisymbol, which is a set of actual packet rearrangements that represent the same secondary input symbol.

In practice, a secondary channel can be defined over virtually any existing wireless system, and it is of interest to find coding strategies that are suited to a certain primary system. We can ask the following question: if we did not have the capacity derivation and the multisymbol framework that has emerged from it, described in [1], what would be the “naïve” way to design a code in order to communicate over the secondary channel? Conversely, what does the information-theoretic strategy developed in Part I of the paper teach us about designing good coding (signaling) strategies for this channel?

In the absence of the multisymbol framework, we can perform the encoding in the traditional way, by taking any usual error-correcting code and an interleaver. However, a problem arises when we attempt to send a sequence of bits over the secondary channel, since each frame is in a given state which we do not control. If the sequence of bits we need to transmit is not one of the symbols feasible in the current state, we should pick (e.g. randomly) one of the feasible symbols which are at the minimal Hamming distance from the bit sequence. For example, for a frame of length four we take four of the coded and interleaved bits and look at the current state of the channel (how many 1s we can transmit in the next frame). Then we pick (e.g. randomly) one of the feasible frames, obtained by permuting the packets, that has the minimal possible Hamming distance from these bits. However, this leads to ambiguity, since the receiver may falsely interpret the transmitted frame as representing a different bit sequence than the intended one. Hence, even when the channel does not introduce errors, there will be decoding errors.

Therefore, we have to look at other strategies which are applicable in the context of protocol coding. Besides its role in the capacity calculation, it turns out that the multisymbol framework can be used in the construction of error-correcting codes for secondary communication, since it gives insight into coding strategies that approach the capacity.

The rest of the paper is organized as follows. In Section II we present transmit strategies for the errorless case, i.e. the case when the probability of error for secondary reception is 0. In Section III we address the case of secondary communication when errors are present. First we investigate a naïve strategy for communication, which does not account for the specifics of secondary communication channels. Then we propose a coding strategy which is inspired by the capacity results derived in Part I of the paper. In Section IV we present a trellis coding scheme in which the trellis code is based on the multisymbol framework and evaluate its performance. Section V presents some distinguishing features of secondary channels and potential applications, and provides a discussion of the limitations of the presented model for protocol coding. Finally, Section VI concludes the paper and gives directions for future work.

II Coding for Errorless Channels

II-A Motivation and System Model

In this section we introduce practical encoding strategies when the secondary channel is assumed errorless, i.e. the headers (labels) of the primary packets are perfectly received at the secondary receiver. This case is interesting because using fixed-length codes necessarily leads to a nonzero probability of error, despite the fact that there are no channel-induced errors on the packet headers, which in turn carry the packet labels used for secondary communication. To see why this is the case, consider a primary system whose scheduling frame consists of several packets and assume that one wants to encode secondary information by using a fixed number of consecutive frames. If the primary system decides that in all of these frames all the packets are addressed to Alice, then no secondary information can be encoded, which leads to an error.

We consider a simplified model with only two possible packet labels, i.e. each packet in a frame is addressed to one of two users. This setup is sufficient to illustrate the main strategies for protocol coding with resource ordering. We recall that the set of packets that are scheduled in a frame is decided by the primary system, i.e. the secondary communication is restricted and can only rearrange the set of packets selected by the primary. The state s of the frame is the number of packets, out of the F packets in the frame, that are addressed to the first user, and it occurs with probability μ_s. In the errorless case, the state is known to both the transmitter and the receiver. Thus, each state is associated with one communication sub-channel with a corresponding capacity, as elaborated in Part I. In a given frame, the primary chooses the state independently of the states in the previous frames.

Let us consider N frames, where each frame is a secondary channel use, and let the state sequence describe the random outcome from observing these N frames. As N goes to infinity, the state sequence becomes typical, such that the number of frames in which the state takes the value s is approximately Nμ_s. The capacity of the errorless channel can be calculated [2]:

\[ C_{\mathrm{E}} = \sum_{s=0}^{F} \mu_s \log_2 \binom{F}{s} \tag{1} \]
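As a small numerical sketch of (1), assume for illustration that the primary addresses each packet independently to the first user with probability q, so that the state is binomially distributed; the values F = 4 and q = 0.5 below are hypothetical.

```python
from math import comb, log2

def errorless_capacity(frame_len: int, q: float) -> float:
    """Evaluate (1) when the state s is Binomial(frame_len, q).

    Each state s contributes mu_s * log2(comb(frame_len, s)) bits per frame;
    the trivial states s = 0 and s = frame_len contribute nothing.
    """
    capacity = 0.0
    for s in range(1, frame_len):
        mu_s = comb(frame_len, s) * q**s * (1 - q)**(frame_len - s)
        capacity += mu_s * log2(comb(frame_len, s))
    return capacity

print(errorless_capacity(4, 0.5))   # roughly 1.97 bits per frame for F = 4, q = 0.5
```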

A scheme that achieves this capacity would work as follows. Consider the transmission of a large message by using a large number N of channel uses. The sender segments the message into sub-messages, one for each state, where each sub-message is sent over the corresponding sub-channel. The sub-message that is to be sent over the sub-channel defined by state s contains approximately Nμ_s log_2(F choose s) bits. If during a channel use the sender observes that the state is s, then it takes the next log_2(F choose s) bits from the corresponding sub-message. Thus, the whole message is sent by time-interleaving of all the available sub-channels.

Nevertheless, in this paper we are focused on practical coding strategies, and this scheme may be ineffective when sending messages of finite, short length. For example, when the number of channel uses is finite, some of the channel states might not appear at all. In this case, the above strategy would fail to send a part of the message, namely the sub-messages associated with those states. What we need is a practical coding scheme which is tailored to the observation that the secondary system does not have control over the state of the channel.

II-B Coding with Variable Radix

Since using a fixed-length secondary coding block introduces errors, we need to send a group of bits by using a variable number of frames, where the number of frames to be used depends on the realization of the sequence of channel states. We introduce the proposed coding strategy through an example.

II-B1 Example

Let us group the input bits into groups of size 10 and let us call such a group of bits an input symbol. Hence, one input symbol is a number between 0 and 1023. This input symbol will be sent by using several frames. Let the input symbol be the binary representation of a particular number and let us consider a given realization of the sequence of channel states over the subsequent frames.

The first state allows a certain number of different packet combinations. We divide the interval of integers [0, 1023] into as many bins (sub-intervals) as there are combinations, such that the bins are of (almost) equal size. The number to be transmitted falls into exactly one of these bins. If we enumerate the possible combinations of packets for the observed state, each bin corresponds to one combination. Hence, Alice arranges the packets in the first frame so as to signal the index of the bin that contains the number. When Bob receives the first frame and sees its state, he knows how many bins were used and thereby learns in which bin the number lies.

Now Alice will use the subsequent frames in order to specify where within this bin the number represented by the 10 bits lies. For the second frame, the observed state allows a certain number of possible packet rearrangements. Therefore, the current bin is divided into that many sub-bins of (almost) equal size, and the packet rearrangement sent in the second frame indicates which sub-bin contains the number. Now Bob knows that the transmitted number lies in a correspondingly smaller bin.

When the third frame arrives, its state allows only a single packet combination (all packets are addressed to the same user), so no information can be sent by using combinatorial reordering.

We proceed in the same way for the rest of the frames. During the fourth, fifth and sixth frame, the current bin is repeatedly divided into as many sub-bins as there are packet combinations for the observed state, and in each frame the rearrangement indicating the sub-bin that contains the number is sent. After the fifth frame only two candidate values are left, while the sixth frame offers more combinations than that, i.e. it is over-dimensioned for the remaining uncertainty. Nevertheless, Alice can use it to inform Bob of the exact value of the number.

Alternatively, Alice can apply an additional optimization. Since the last frame offers more combinations than the number of candidate values that remain, the representation of the numbers in the current bin is overprovisioned. The surplus combinations can be used to jointly represent the remaining candidates of the first input symbol and a coarse bin of the next input symbol: the interval of the next input symbol is divided into bins, and each available combination of the frame is mapped to a pair consisting of a remaining candidate of the first symbol and a bin of the second symbol. For instance, Alice may send the combination (0110) to resolve the first input symbol and, at the same time, to indicate in which bin of its interval the second input symbol lies.

We note that for a different channel state sequence, the number of frames over which the coding is performed would differ. Hence, we speak of a variable-radix scheme, which uses a variable number of frames to represent the secondary information bits. The variable-radix scheme provides an effective mapping of the information bits into symbols, taking into account that the communication takes place over a multiple-state channel. Indeed, without having control of the channel state, we cannot send the sequence of information bits by using a single frame, as the number of different combinations may not be sufficient. On the other hand, by taking multiple frames and allowing the number of frames to vary, this is always possible, as shown in the example. In this sense we can think of this scheme as an efficient modulation scheme, since it primarily provides a mapping of the information bits into symbols, by taking into account the unpredictable channel states.
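The following sketch captures the interval-subdivision logic of the variable-radix scheme under simplifying assumptions of our own: sub-bins are made as equal as possible, the trivial states are skipped, and the optimization that reuses an over-dimensioned frame for the next input symbol is omitted. The 10-bit input value, frame length and state sequence are hypothetical.

```python
from math import comb

def bin_bounds(lo: int, hi: int, k: int, n: int):
    """Bounds of the k-th of n (nearly equal) sub-bins of the integer interval [lo, hi)."""
    base, extra = divmod(hi - lo, n)
    start = lo + k * base + min(k, extra)
    end = start + base + (1 if k < extra else 0)
    return start, end

def encode_variable_radix(value: int, bits: int, states, frame_len: int):
    """Map a `bits`-bit integer onto per-frame combination indices.

    `states` is the (uncontrolled) sequence of channel states; frame t allows
    comb(frame_len, states[t]) rearrangements. Returns the list of combination
    indices actually sent and the number of frames consumed.
    """
    lo, hi = 0, 1 << bits
    sent = []
    for t, s in enumerate(states):
        if hi - lo <= 1:
            return sent, t                  # the value is fully specified
        n = comb(frame_len, s)
        if n <= 1:
            sent.append(0)                  # trivial state: no information can be sent
            continue
        for k in range(n):                  # find the sub-bin containing `value`
            start, end = bin_bounds(lo, hi, k, n)
            if start <= value < end:
                sent.append(k)
                lo, hi = start, end
                break
    return sent, len(states)

# Hypothetical run: 10-bit input symbol 618, frame length 4, some state sequence.
print(encode_variable_radix(618, 10, [2, 1, 0, 3, 1, 2], 4))
```

The decoder mirrors the subdivision: knowing the state of each frame, it recomputes the same bins and narrows the interval until a single value remains.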

The example described above should be sufficient for the reader to devise variable-radix schemes for other state sequences. The problem with the variable-radix scheme is that it works only under perfect, errorless conditions; in the next section we discuss practical strategies for protocol coding that can deal with channel-induced errors.

III Coding for Channels with Errors

As already argued, the variable-radix scheme fails in the presence of channel errors and might lead to catastrophic behavior. For example, if the channel is mostly errorless, but one symbol is in error, then the receiver Bob will not know where the current symbol ends and where the next starts, such that the whole sequence of information bits will be in error.

With this argument in mind, in the case with channel errors we prefer to code over a fixed number of frames (and thus allow errors even when there are no channel-induced errors on the packet labels). The question is which coding strategy is applicable in this case. We first look at a naïve coding strategy which does not account for the specifics of the secondary channel. Later, we present a communication scheme which is inspired by the capacity results derived previously.

III-A Naïve Coding Strategy

The naïve strategy works as follows. We take any usual error-correcting code of a given rate and interleave the output of this code, e.g. by using a pseudo-random interleaver. The motivation for using an interleaver is to break the burst bit errors that can occur within one secondary symbol (frame). For example, for a frame of length four we take four of the coded and interleaved bits and look at the current state of the channel (how many 1s we can transmit in the next frame). Then we pick (e.g. randomly) one of the feasible frames, obtained by permuting the packets, that has the minimal possible Hamming distance from these bits. For example, let the coded bits be 0101 and let the state be such that three 1s must be transmitted. Then the Hamming distance of the “true information” 0101 from 0111 and 1101 is 1 (the minimal possible), while it is 3 from 1011 and 1110. Hence, when the system needs to transmit 0101 in this state, it chooses randomly between 0111 and 1101.
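A minimal sketch of this naïve mapping, with a hypothetical frame length of four; it reproduces the example from the text, where the coded bits 0101 are mapped onto one of the two closest feasible frames.

```python
import random
from itertools import combinations

def feasible_frames(frame_len: int, num_ones: int):
    """All binary frames of length `frame_len` containing exactly `num_ones` ones."""
    return [tuple(1 if i in ones else 0 for i in range(frame_len))
            for ones in combinations(range(frame_len), num_ones)]

def naive_map(coded_bits, num_ones):
    """Pick uniformly at random among the feasible frames at minimal Hamming
    distance from the coded bits (the naïve strategy)."""
    candidates = feasible_frames(len(coded_bits), num_ones)
    dist = lambda f: sum(a != b for a, b in zip(f, coded_bits))
    dmin = min(dist(f) for f in candidates)
    return random.choice([f for f in candidates if dist(f) == dmin])

# Example from the text: coded bits 0101, state with three 1s.
print(naive_map((0, 1, 0, 1), 3))   # (0,1,1,1) or (1,1,0,1), each with probability 1/2
```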

The trouble with the described naïve strategy is that, even when the channel does not introduce errors, there will be decoding errors. Additionally, as we will see in Section IV, the naïve strategy has poor performance in channels with transmission errors.

III-B Coding Strategy Inspired by the Information-Theoretic Analysis

Here we propose a coding strategy which is inspired by the capacity results for the secondary communication channel derived in Part I. We start by recalling that we modelled the secondary communication channel as a cascade of two channels, where the input constraints from the primary system were reflected in the way the set of admissible inputs is defined. Then, instead of speaking about which strategies are chosen with non-zero probability, we speak of which representatives to choose for a given secondary input symbol. Accordingly, we introduced the notion of a multisymbol, which is the set of representatives (one per channel state) for a given secondary input symbol. Further, a minimal multisymbol was defined as a multisymbol obtained by permutation of the basic multisymbol, whose representative for each state is obtained from the representative of the previous state by changing a single packet label.

The minimal multisymbols thus satisfy the property that the representatives of two adjacent states are at Hamming distance 1. The main capacity result for the secondary channel obtained in Part I bounds the secondary capacity by the difference of two terms, which we recall next.

The first term is the capacity of the underlying channel, given by (12) in Part I. We recall that this capacity is attained by a particular input distribution and is upper bounded by the capacity of the channel defined by the underlying error model. The second term is the minimal value (a constant) of a penalty term, achieved by choosing the multisymbols as minimal multisymbols. The equality is achieved if and only if there is a pair of distributions that simultaneously attains the maximum and the minimum in the first and the second term, respectively.

This result is quite general and holds for all classes of memoryless channels with binary inputs; among others, it holds for the binary erasure channel and the binary symmetric channel. It has been further shown that, for a uniform distribution over the secondary input symbols, this capacity can be achieved by a set of multisymbols of a certain minimal cardinality. The multisymbols should be minimal, meaning that the Hamming distance between the representatives of two adjacent states is 1.
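The exact representatives used in Part I are not reproduced here; the sketch below merely illustrates the structure, under the assumption (consistent with the description above) that the basic multisymbol takes, for state s, the frame with s ones in the leading positions, and that further minimal multisymbols are obtained by applying a fixed permutation of the packet positions to every representative.

```python
def basic_multisymbol(frame_len: int):
    """One representative per state s = 0..frame_len.

    Assumed placement: the representative for state s has s ones followed by zeros,
    so representatives of adjacent states differ in exactly one position.
    """
    return [tuple(1 if i < s else 0 for i in range(frame_len))
            for s in range(frame_len + 1)]

def permute_multisymbol(multisymbol, perm):
    """Apply one position permutation to every representative; minimality is preserved."""
    return [tuple(rep[p] for p in perm) for rep in multisymbol]

def is_minimal(multisymbol):
    """Check that representatives of adjacent states are at Hamming distance 1."""
    return all(sum(a != b for a, b in zip(multisymbol[s], multisymbol[s + 1])) == 1
               for s in range(len(multisymbol) - 1))

basic = basic_multisymbol(4)
print(basic)                                                  # (0,0,0,0), (1,0,0,0), ..., (1,1,1,1)
print(is_minimal(permute_multisymbol(basic, (2, 0, 3, 1))))   # True
```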

From the viewpoint of capacity, the choice of the multisymbols is irrelevant, as long as they are minimal and the induced distribution fulfills the required condition. However, the choice of the multisymbols does affect the performance of an error-correcting code constructed within the multisymbol framework.

Our aim is to use the multisymbol framework in the construction of practical coding schemes which are better suited to the secondary communication channel than the naïve approach. The question is which criterion, i.e. which distance metric, we are going to use in the selection of the multisymbols. We adopt a heuristic approach and take the expected Hamming distance as the metric of interest for the choice of the multisymbols. The expected Hamming distance between two multisymbols u_i and u_j is defined as follows:

\[ D(u_i, u_j) = \sum_{s=0}^{F} \mu_s \, d_H\big(x_i(s), x_j(s)\big) \tag{2} \]

where d_H(·,·) is the Hamming distance between two vectors and x_i(s) denotes the representative of the multisymbol u_i in state s. Clearly, considering the triviality of the states s = 0 and s = F (each allows a single frame), we can simplify (2) to:

\[ D(u_i, u_j) = \sum_{s=1}^{F-1} \mu_s \, d_H\big(x_i(s), x_j(s)\big) \tag{3} \]

The motivation behind this choice is that the metric incorporates the state of the channel, which cannot be controlled by the secondary system.
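A small sketch of (2) and (3); the two multisymbols, the frame length F = 4 and the binomial state distribution below are hypothetical values used only to exercise the definition.

```python
from math import comb

def expected_hamming_distance(ms_i, ms_j, mu):
    """Expected Hamming distance (2) between two multisymbols.

    ms_i[s] and ms_j[s] are the representatives chosen for state s and mu[s] is the
    probability of state s. The trivial states contribute zero, as noted in (3).
    """
    return sum(mu[s] * sum(a != b for a, b in zip(ms_i[s], ms_j[s]))
               for s in range(len(mu)))

def min_expected_distance(multisymbols, mu):
    """Minimal pairwise expected Hamming distance of a candidate set (design criterion)."""
    return min(expected_hamming_distance(multisymbols[i], multisymbols[j], mu)
               for i in range(len(multisymbols)) for j in range(i + 1, len(multisymbols)))

F, q = 4, 0.5
mu = [comb(F, s) * q**s * (1 - q)**(F - s) for s in range(F + 1)]
ms_a = [(0,0,0,0), (1,0,0,0), (1,1,0,0), (1,1,1,0), (1,1,1,1)]
ms_b = [(0,0,0,0), (0,0,0,1), (0,0,1,1), (0,1,1,1), (1,1,1,1)]
print(expected_hamming_distance(ms_a, ms_b, mu))   # 2.5 for this particular pair
```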

With this in mind, we can construct a convolutional code by using the multisymbol framework and the expected Hamming distance as the design criterion. We define a trellis for the convolutional code with a certain number of states. In the trellis diagram, each state has two outgoing branches, each of them corresponding to one possible input binary symbol. Also, each state has two incoming branches. Each branch in the trellis is associated with an input symbol and an output symbol. In our case, the input symbol is binary and the output symbol is one of the multisymbols. The trellis is chosen on purpose to have as many branches as there are multisymbols, such that each multisymbol is used only once.

Now the question is how to associate the multisymbols with the transitions in the trellis. We use the known rules from trellis coding: the output symbols on the branches exiting the same state should be maximally separated in terms of the expected Hamming distance. The same is valid for the output symbols associated with the two branches that enter the same state. In order to illustrate the code construction, we take the example introduced above, for which the minimal cardinality of the uniform auxiliary variable was derived in Part I.
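The following sketch evaluates this design rule for a toy trellis of our own construction: given a hypothetical assignment of multisymbols to branches, it reports the expected Hamming distance between the two branch labels leaving and entering each state. The two-state trellis, the four multisymbols and the state distribution are illustrative assumptions, not the construction used for the simulations.

```python
from math import comb

def branch_design_metrics(branches, multisymbols, mu):
    """Per-state separation of branch labels in terms of expected Hamming distance.

    `branches` is a list of (from_state, to_state, input_bit, multisymbol_index),
    with exactly two branches leaving and two branches entering each state.
    """
    def ehd(i, j):
        return sum(mu[s] * sum(a != b for a, b in zip(multisymbols[i][s], multisymbols[j][s]))
                   for s in range(len(mu)))

    leaving, entering = {}, {}
    for frm, to, _, m in branches:
        leaving.setdefault(frm, []).append(m)
        entering.setdefault(to, []).append(m)
    return ({st: ehd(*ms) for st, ms in leaving.items()},
            {st: ehd(*ms) for st, ms in entering.items()})

F = 4
mu = [comb(F, s) / 2**F for s in range(F + 1)]   # uniform packet labels, for illustration
msyms = [
    [(0,0,0,0), (1,0,0,0), (1,1,0,0), (1,1,1,0), (1,1,1,1)],
    [(0,0,0,0), (0,0,0,1), (0,0,1,1), (0,1,1,1), (1,1,1,1)],
    [(0,0,0,0), (0,1,0,0), (0,1,1,0), (1,1,1,0), (1,1,1,1)],
    [(0,0,0,0), (0,0,1,0), (1,0,1,0), (1,0,1,1), (1,1,1,1)],
]
# Toy two-state trellis; each multisymbol labels exactly one branch.
branches = [(0, 0, 0, 0), (0, 1, 1, 1), (1, 0, 0, 2), (1, 1, 1, 3)]
print(branch_design_metrics(branches, msyms, mu))
```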

There are multiple ways in which the multisymbols can be chosen, and different sets have different features. We can get useful insights into the expected Hamming distance spectrum if we use the representation of the multisymbols as paths in a directed graph, as shown in Fig. 1. In order to maximize the expected Hamming distance between multisymbols, the paths corresponding to the multisymbols should be as diverse as possible. To ensure this, we have to choose the multisymbols so as to avoid, as much as possible, multisymbols with common edges. Indeed, for two different multisymbols which share a common edge, the terms in the expected Hamming distance (3) associated with that edge are zero. The necessary condition for avoiding common edges between two adjacent levels of the graph is that the number of multisymbols does not exceed the number of distinct edges between those levels; in other words, the weight of each edge (the number of multisymbol paths that use it) should be at most 1.

We note once again that the relevance of the expected Hamming distance as a performance metric is stated as a conjecture which is not rigorously proved. Moreover, one has to look at the whole distance spectrum in order to be able to predict the performance of the error-correcting code. Since in the general case it is difficult to control the code distance spectrum, we turn to the minimal expected Hamming distance as a simplified indicator of the code performance. However, in this particular case we have to be careful when drawing conclusions about the performance based solely on the minimal distance. As we are going to see in the next section, some of the simulation results indicate that the minimal expected Hamming distance is not the only factor that is decisive for the error performance.

In the following we give three representative examples of sets of multisymbols, created for the frame length considered above.

III-B1 Choice of Multisymbols, Example 1

In the first example, we choose the multisymbols as given in Fig. 1 a). The multisymbols are chosen as permutations of the basic multisymbol and fulfill the required property regarding the induced distribution, hence this choice is capacity achieving; however, we need to investigate its performance in terms of error rate when used to construct a channel code. We use the representation of the multisymbols as paths in the directed graph, as shown in Fig. 1 b). We note that in the graph representation some of the multisymbols have common edges which could have been avoided. The expected Hamming distance profile for the above choice of the set of multisymbols reveals a relatively small minimal distance.

Fig. 1: Selection of the representative sets, Example 1. The selection of the multisymbols is not optimal. (a) Multisymbols for the input symbols. (b) Graph representation of the process for selecting the multisymbols.

III-B2 Choice of Multisymbols, Example 2

In the second example, we choose the multisymbols as given in Fig. 2 a). We observe that no two multisymbols are identical and that the choice of the multisymbols is capacity achieving. The graph used for the selection of the multisymbols is shown in Fig. 2 c). The multisymbols are constructed by using each edge of the graph exactly once, except for a few edges where common edges cannot be avoided. Additionally, common edges are avoided later in the graph by an adequate choice of the paths associated with the multisymbols. The minimal expected Hamming distance for this choice of multisymbols is larger than in Example 1, and we expect that this set will perform better due to the better distance spectrum. This conjecture is confirmed in Section IV, where we present simulation results for the performance of these choices of multisymbols.

Fig. 2: Selection of the representative sets that achieve the capacity, Example 2. (a) Multisymbols for the input symbols, with a larger minimal expected Hamming distance. (b) Multisymbols for the input symbols with minimal expected Hamming distance 0.75. (c) Graph representation of the process for selecting the multisymbols. Both sets are constructed based on the same graph representation; only the paths selected in the graph for the corresponding multisymbols differ between the two sets.

The previous observations lead us to ask whether there is a general strategy that produces, for an arbitrary frame length, a set of multisymbols with the maximal minimal expected Hamming distance. The answer is affirmative. Without giving a detailed proof, we only note that such a set is implicitly constructed in the Appendix of Part I [1], where it is shown that it is always possible to choose the required paths in the directed graph. A careful examination of the argumentation there reveals that a set of multisymbols with the required properties can be constructed by the described procedure.

Surprisingly, we have been able to find a set of multisymbols with a smaller minimal expected Hamming distance of 0.75 which performs better than the above set, as presented in the simulation results in the next section. This set of multisymbols is presented in Fig. 2 b) and is obtained by using the same graph representation of Fig. 2 c), only with different paths chosen in the graph. We suspect that the reason for this behavior is that the minimal distance itself is not decisive for the performance, even if the conjecture that the expected Hamming distance is the relevant metric for error-control coding holds. One probably has to look at both the distance spectrum and the trellis diagram in detail in order to draw the right conclusion about the code performance. Nevertheless, the performance in both cases is superior to the naïve scheme, which makes the case for the relevance of the multisymbol framework in the design of practical error-control schemes.

III-B3 Choice of Multisymbols, Example 3

As a third example, we choose the multisymbols as shown in Fig. 3 a). We notice that, according to this choice, some of the multisymbols are identical, i.e. the same multisymbol is assigned to more than one input symbol. We note that this choice does not violate the conditions for minimal multisymbols and satisfies the target distribution, thus it is capacity achieving.

Fig. 3: Selection of the representative sets, Example 3. The selection of the multisymbols yields a non-uniform distribution of the input symbols. (a) Multisymbols for the input symbols. (b) Graph representation of the process for selecting the multisymbols.

At first sight, this result seems counterintuitive and raises the following question: why do we not lose capacity even though we are not using the highest possible diversification at the input (in this case the same multisymbol is assigned to two input symbols)? We note that the effective cardinality of the input alphabet is thereby reduced. However, the input symbols are then non-uniformly distributed: the repeated multisymbol carries a larger probability mass. Until now, we have constrained ourselves to a uniform distribution over the input symbols. However, it can be shown that if a non-uniform distribution is used over the input symbols, then the capacity can be achieved even with a smaller cardinality; the corresponding non-uniform input distribution and the transition matrix of the resulting channel can be written out explicitly for this example. The general case of a non-uniform distribution over the input symbols and the minimal required cardinality to achieve the capacity is outside the scope of this paper and is a topic of ongoing work.

In the following section we present worked-out examples of trellis codes based on the multisymbol framework.

IV Code Design and Simulation Results

IV-A Code Design

The coding scheme we propose is designed as a concatenation of an outer error-correcting code, an interleaver and an inner encoder, as given in Fig. 4 a). The outer error-correcting code is a convolutional code of a fixed rate that maps the information symbols into coded binary symbols. As already discussed, the inner code is trellis-based: each branch in the trellis is associated with a binary input symbol and an output symbol which is one of the multisymbols. We associate the multisymbols with the transitions in the trellis such that the output symbols on the branches exiting the same state are maximally separated in terms of the expected Hamming distance; the same is valid for the output symbols associated with the two branches that enter the same state. The trellis encoder encodes the incoming binary symbols into multisymbols, which are then impaired by the channel. In this part we assume a binary erasure channel; however, we keep in mind that the capacity results presented in Part I are valid for a wider class of channels, among others the binary symmetric channel.

The symbol errors from a trellis code come in bursts, since ending up in a wrong state implies more than one symbol error. To avoid bursts of errors, an interleaver is used. The interleaver is implemented as a matrix whose dimensions are chosen such that the length of the coded sequence is divisible by the number of rows. In order to illustrate the coding scheme, we take once again the example from Section III. The trellis-based coding scheme defines a trellis in which each multisymbol labels exactly one branch. One option is to consider a trellis with three branches from each state, which implies that the source information is originally encoded in ternary symbols. Another, more practical option is to have a trellis with two branches from each state; the code construction uses this trellis to avoid mapping binary symbols to ternary symbols. This means that one binary symbol is transmitted for each multisymbol. The trellis designs for the two sets of multisymbols introduced in Section III are presented in Fig. 4. We recall that the first set was described by a directed graph in which some of the multisymbols share common edges that could have been avoided; its trellis is given in Fig. 4 b). The second set is chosen according to the criterion which maximizes the minimal expected Hamming distance; its trellis is illustrated in Fig. 4 c). In both cases, the multisymbols associated with the transitions in the trellis are chosen such that the output symbols on the branches exiting the same state are maximally separated in terms of the expected Hamming distance, and this separation equals the maximal distance in the distance spectrum for all states. The same holds for the output symbols associated with the two branches that enter the same state.
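The interleaver can be realized, for instance, as a simple block (matrix) interleaver; the sketch below is a generic illustration and the dimensions are hypothetical, subject only to the divisibility condition mentioned above.

```python
def block_interleave(symbols, num_rows):
    """Matrix interleaver: write the sequence row by row, read it column by column.

    Assumes len(symbols) is divisible by num_rows. At the receiver, the inverse
    operation spreads the bursts produced by the inner trellis decoder.
    """
    num_cols = len(symbols) // num_rows
    rows = [symbols[r * num_cols:(r + 1) * num_cols] for r in range(num_rows)]
    return [rows[r][c] for c in range(num_cols) for r in range(num_rows)]

def block_deinterleave(symbols, num_rows):
    """Inverse of block_interleave."""
    num_cols = len(symbols) // num_rows
    cols = [symbols[c * num_rows:(c + 1) * num_rows] for c in range(num_cols)]
    return [cols[c][r] for r in range(num_rows) for c in range(num_cols)]

seq = list(range(12))
assert block_deinterleave(block_interleave(seq, 3), 3) == seq
```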

Fig. 4: Code design. (a) Block diagram of the code. (b) Trellis construction for the set of multisymbols of Example 1. (c) Trellis construction for the set of multisymbols of Example 2.

IV-B Simulation Results

The simulations have been performed for several packet lengths and the results are averaged over 10000 iterations. The packet lengths are chosen such that the length of the coded sequence (two tail bits are added by the outer convolutional code) is divisible by the interleaver dimension.

First, we compare the performance of the coding scheme inspired by the multisymbol framework and the naïve coding scheme, which does not account for the specifics of the secondary channel. The simulation results for the packet error rate (PER) for different erasure probabilities are shown in Fig. 5. These results present clear evidence that the information-theoretic analysis carries practical significance for secondary communication channels.

We also perform simulations for the two different choices of the sets of multisymbols presented in Section III, which have different minimal expected Hamming distances. As already commented, although the choice of the set with the larger minimal distance performs better than the set from Example 1 (Fig. 6 a)), we were able to find another set, with minimal distance 0.75, which outperforms both sets (Fig. 6 b)). This result indicates that the minimal expected Hamming distance is not the only criterion that decides the code performance; other factors matter as well, notably the distance spectrum and the choice of the trellis transitions.


Fig. 5: Performance comparison between the naïve coding scheme and the scheme motivated by the multisymbol framework.
Fig. 6: Performance of the error-correcting coding schemes. (a) Comparison between the sets of multisymbols of Examples 1 and 2. (b) Comparison between the set of Example 2 and the set with minimal expected Hamming distance 0.75.

V Discussion

V-A Some Features of the Secondary Channel

In Part I [1] we have described a generic application of this type of protocol coding and the associated secondary channel: communication with newly introduced devices with limited functionality in an area that is larger than the original coverage area. Several distinctive features can be noted for this type of secondary communication. First, for delay-constrained systems, the secondary data rate is relatively low, so it is hard to argue that this method brings a significant rate advantage. Second, the secondary rate depends on the current load (traffic, number of users) in the primary system. For example, in a cellular system where protocol coding is done by encoding information in the way the users are allocated to different channels, the best secondary rate is obtained when each channel can be allocated to a different user, since this maximizes the number of possible rearrangements. Third, the new secondary devices have a limited implementation of the primary protocol stack, which brings an opportunity for low-complexity, low-power reception on the secondary channel. In the extreme case, protocol coding reckons only with two transmission states, packet transmitted and idle slot, which would require only power detection at the secondary device.

Header compression [3] may appear as a competitor, as it works in a somewhat opposite way: it tries to compress the overhead whenever the actual communication scenario allows it. However, header compression does not always cancel the opportunity for secondary communication, and vice versa: for example, the MAC-layer identifiers may be compressed, but in the end all the users have to be differentiated, and the secondary channel arises from reordering those identifiers. An interesting dividend is that the secondary capacity can be used to assess the performance margin of a certain primary protocol/system. Intuitively, if in a given scenario the secondary capacity is non-zero, then the operation of the primary system is not optimal.

The secondary channel, as defined here, can have several different applications. A generic application of the secondary communication is the sending of additional control data. The first usage of such control data can be as expanded “future use” bits: many standardized protocols contain unspecified, free bits for future use, and protocol coding practically unleashes “hidden” future-use bits in the protocol, which may become indispensable during the evolution of the system.

Another usage can be signaling for efficient spectrum sharing. The main concern in cognitive radio is the interference that the cognitive (secondary) users cause to the incumbent (primary) user. Hence, a secondary user should sense whether the spectrum resource is available for communication. Spectrum sensing is facilitated by a Cognitive Pilot Channel (CPC) [4], which conveys the necessary information about the status of the radio spectrum to the terminals. Protocol coding inherently introduces the possibility to define an in-band CPC. For example, assume that, besides the module that can decode the secondary channel, the secondary devices have an additional cognitive radio interface to communicate with each other. Then the primary BS can dynamically send information about the available resources for cognitive radio. For example, if the primary system is a digital TV broadcaster, then the secondary channel can be defined by reordering of the TV packets, which empowers the TV broadcast tower to dynamically control the spectrum usage. To the best of our knowledge, such a possibility to turn the TV broadcasters from victims into spectrum controllers has not been observed before.

In the emerging machine-to-machine (M2M) communication [5], cellular networks embrace a large number of low-cost, low-power devices that have different traffic/behavior from the usual cellular users. Such a device is mostly in a low-power “sleep” mode. We conjecture that, due to the simple codebooks used to send the primary control information, the secondary channel can be decoded with low power. A sleeping device may be tuned to receive on the secondary channel and, upon receiving a downlink trigger from the BS, it can wake up another radio interface to send information. Thus, protocol coding offers an opportunity to introduce universal wake-up beacons.

V-B Protocol Coding in WiMAX: A Brief Case Study

We now illustrate the application of protocol coding with reordering of user resources in the WiMAX system. In WiMAX [6], the downlink and uplink control information is transmitted at the beginning of each frame, which includes the preamble, the frame control header (FCH) and the MAP message. The MAP message indicates the resource allocation for downlink and uplink data and control signal transmission. The Base Station (BS) translates the QoS requirements of the Subscriber Stations (SSs) into the appropriate number of allocated slots. The BS informs all SSs about the scheduling by using the DL_MAP (Downlink Medium Access Protocol) and UL_MAP (Uplink Medium Access Protocol) messages at the beginning of each frame [7].

Protocol coding is implemented by reordering the slots allocated in a frame. The secondary users for which this information is intended only have to read the broadcast DL_MAP and UL_MAP messages. For example, when the numbers of slots reserved for the SSs are 6, 9, 2, 10, 7, 6, 10, 15, 15 and 20, respectively, several hundred secondary bits can be sent per frame by reordering of the resources. Assuming a typical WiMAX frame duration of a few milliseconds, this translates into tens of kbps of additional information, which is carried in the frame headers that are robustly protected [8]. In order to get an idea of the distance at which the MAP message is “detectable” compared to the information data, we resort to the propagation model in [8], in which the total path loss is a function of the distance in kilometers. The MAP message is protected with repetition coding and BPSK is used for both the MAP message and the data, which results in a considerably larger distance at which the header is detectable compared to the distance for the user data.
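As a back-of-the-envelope check, and assuming that the reordering is of the constant-composition type (only the mapping of slots to SSs matters), the number of secondary bits per frame for the slot allocation above can be computed from the multinomial coefficient; the 5 ms frame duration used for the rate estimate is a typical value and an assumption on our part.

```python
from math import factorial, log2

def reordering_bits(slot_counts):
    """Secondary bits per frame under the constant-composition reordering model.

    The number of distinguishable orderings is the multinomial coefficient
    (sum n_i)! / (n_1! ... n_K!), and its log2 gives the number of bits.
    """
    orderings = factorial(sum(slot_counts))
    for n in slot_counts:
        orderings //= factorial(n)
    return log2(orderings)

slots = [6, 9, 2, 10, 7, 6, 10, 15, 15, 20]      # slot allocation from the text
bits = reordering_bits(slots)                    # roughly 289 bits per frame
print(bits, bits / 0.005 / 1000)                 # roughly 58 kbps for an assumed 5 ms frame
```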

V-C Further Considerations

We used a simplified model, in which the set of packets sent in a given frame is independent from the other frames. In practice this is rarely satisfied, since buffering at the primary scheduler and/or packet retransmission due to errors creates dependencies between consecutive frames. In such a case, Shannon’s result is not directly applicable and instead we need to use a more general model in which the sequence of frame states is not memoryless (see Section 6 in [9]).

Another aspect is the freedom in reordering user resources. For example, if in the case of WiMAX the scheduler puts each user on a channel where she can achieve a high data rate, then the freedom to permute users across channels becomes restricted. It would be incorrect to say that protocol coding is not applicable once such restrictions are imposed by the primary system; rather, it should be observed that the secondary capacity is decreased. This reiterates the observation that protocol coding can be used as a measure of how optimally a given primary system operates.

We have mainly discussed the case of the combinatorial model with two possible packet labels. In general, the number of possible packet labels in a frame can be larger, and the special case in which all labels in a frame are distinct corresponds to the permutation model. As coding strategies, it is relevant to consider the permutation codes used in power line communications (PLC) [10]. The main idea there is to send information by using M-ary Frequency Shift Keying (FSK): there are M available orthogonal frequencies and transmission is done by creating a time sequence by which the frequencies are activated. This corresponds to a permutation without repetitions, and the corresponding codes can be used as coding strategies in our permutation model. The main idea is to define a Hamming distance between two permutations, and only permutations that are sufficiently distant are eligible for transmission, in order to minimize the probability of error. A more general case is the one where the transmission symbol (frame) uses the available frequencies with repetitions, such that each frequency appears a prescribed number of times. The codes used in that case are termed constant composition codes [11]. However, the main design constraint for permutation codes in PLC scenarios is not to create a disturbance to the electric power. This implies that there is freedom to choose the set of codewords as long as the power constraint is satisfied; on the contrary, in our model the secondary transmitter must reckon with the set provided by the primary communication system. Nevertheless, the code design from frequency permutation arrays in PLC may be used to select the components of the multisymbols for protocol coding.
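To illustrate the distance-based selection of permutation codewords mentioned above, the sketch below greedily collects permutations of four hypothetical frequencies (equivalently, four distinct packet labels) whose pairwise Hamming distance is at least three; the threshold and alphabet size are arbitrary illustrative choices.

```python
from itertools import permutations

def hamming_distance(p, q):
    """Number of positions in which two codewords (here: permutations) differ."""
    return sum(a != b for a, b in zip(p, q))

def min_distance(code):
    """Minimum pairwise Hamming distance of a permutation code."""
    return min(hamming_distance(p, q)
               for i, p in enumerate(code) for q in code[i + 1:])

# Greedy construction: keep a permutation only if it is far enough from all kept ones.
code = []
for perm in permutations(range(4)):
    if all(hamming_distance(perm, c) >= 3 for c in code):
        code.append(perm)
print(code, min_distance(code))
```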

VI Conclusion and Future Work

We have investigated practical strategies for protocol coding via combinatorial ordering of the user resources (packets, channels) in the primary system. Exploiting the specific structure of our model, we have used the alternative framework that was introduced to compute the capacity in order to develop coding strategies that approach the capacity. The developed coding strategies are superior to the naïve strategy, which does not account for the specifics of the secondary communication channels. The code design thus gives practical relevance to the framework developed for the capacity characterization of secondary channels and paves the way for practical implementation. Besides the construction of the coding schemes, we have also presented some additional features of protocol coding and pointed out possible applications of the concept in existing wireless systems.

A question for future work is how to compute the capacity and which coding strategies to use when the scheduling process in the primary system is generalized (buffering, retransmission, etc.). Another direction is to compute the capacity under error models different from the ones described here, such as channels with deletions/insertions. In practice, a secondary channel can be defined over virtually any existing wireless system and it is therefore of interest to find the coding strategies that are suited to the actual protocol specification of a certain primary system.

References

[1] P. Popovski and Z. Utkovski, “Protocol Coding through Reordering of User Resources, Part I: Capacity Results,” submitted to IEEE Trans. Communications, 2012.
[2] P. Popovski and O. Simeone, “Protocol Coding for Two-Way Communications with Half-Duplex Constraints,” in Proc. IEEE GLOBECOM, Miami, FL, USA, Dec. 2010.
[3] H. Hannu, L.-E. Jonsson, R. Hakenberg, T. Koren, K. Le, Z. Liu, A. Martensson, A. Miyazaki, K. Svanbro, T. Wiebke, T. Yoshimura, and H. Zheng, “RObust Header Compression (ROHC): Framework and four profiles: RTP, UDP, ESP, and uncompressed,” RFC 3095, Jul. 2001.
[4] J. Perez-Romero, O. Sallent, R. Agusti, and L. Giupponi, “A Novel On-Demand Cognitive Pilot Channel Enabling Dynamic Spectrum Allocation,” in Proc. IEEE DySPAN, Dublin, Ireland, Apr. 2007.
[5] S.-Y. Lien, K.-C. Chen, and Y. Lin, “Toward Ubiquitous Massive Accesses in 3GPP Machine-to-Machine Communications,” IEEE Communications Magazine, vol. 49, no. 4, pp. 66–74, Apr. 2011.
[6] J. G. Andrews, A. Ghosh, and R. Muhamed, Fundamentals of WiMAX. Prentice-Hall, 2007.
[7] A. Sayenko, O. Alanen, J. Karhula, and T. Hämäläinen, “WiMAX Overview and System Performance,” in Proc. 9th ACM MSWiM, Torremolinos, Spain, Oct. 2006.
[8] F. Wang, A. Ghosh, C. Sankaran, and P. Fleming, “WiMAX Overview and System Performance,” in Proc. IEEE VTC Fall, Sep. 2006.
[9] G. Keshet, Y. Steinberg, and N. Merhav, Channel Coding in the Presence of Side Information, ser. Foundations and Trends in Communications and Information Theory, vol. 4, no. 6, 2007.
[10] A. J. H. Vinck, “Coded Modulation for Power Line Communications,” AEÜ Journal, pp. 45–49, Jan. 2000.
[11] W. Chu, C. J. Colbourn, and P. Dukes, “Constructions for Permutation Codes in Powerline Communications,” Designs, Codes and Cryptography, vol. 32, pp. 51–64, 2004.