On the Optimality of Simple Schedules for Networks with Multiple Half-Duplex Relays


Martina Cardone, Daniela Tuninetti and Raymond Knopp M. Cardone and R. Knopp are with the Mobile Communications Department at Eurecom, Biot, 06410, France (e-mail: cardone@eurecom.fr; knopp@eurecom.fr). Eurecom’s research is partially supported by its industrial partners: BMW Group Research & Technology, IABG, Monaco Telecom, Orange, SAP, SFR, ST Microelectronics, Swisscom and Symantec. The research at Eurecom leading to these results has received funding from the EU Celtic+ Framework Program Project SHARING and from a 2014 Qualcomm Innovation Fellowship. D. Tuninetti is with the Electrical and Computer Engineering Department of the University of Illinois at Chicago, Chicago, IL 60607 USA (e-mail: danielat@uic.edu). The work of D. Tuninetti was partially funded by NSF under award number 1218635; the contents of this article are solely the responsibility of the author and do not necessarily represent the official views of the NSF. D. Tuninetti would like to acknowledge insightful discussions with Dr. Salim El Rouayheb on submodular functions. The results in this paper were submitted in part to the 2015 IEEE Information Theory Workshop.
Abstract

This paper studies networks with half-duplex relays assisting the communication between a source and a destination. In ISIT’12 Brahma, Özgür and Fragouli conjectured that in Gaussian half-duplex diamond networks with N relays (i.e., without a direct link between the source and the destination, and with non-interfering relays) an approximately optimal relay scheduling policy (i.e., achieving the cut-set upper bound to within a constant gap) has at most N+1 active states (i.e., at most N+1 out of the 2^N possible relay listen-transmit states have a strictly positive probability). Such relay scheduling policies were referred to as simple. In ITW’13 we conjectured that simple approximately optimal relay scheduling policies exist for any Gaussian half-duplex multi-relay network irrespectively of the topology. This paper formally proves this more general version of the conjecture and shows that it holds beyond Gaussian noise networks. In particular, for any memoryless half-duplex N-relay network with independent noises and for which independent inputs are approximately optimal in the cut-set upper bound, an approximately optimal simple relay scheduling policy exists. A convergent iterative polynomial-time algorithm, which alternates between minimizing a submodular function and maximizing a linear program, is proposed to find the approximately optimal simple relay schedule. As an example, for N-relay Gaussian networks with independent noises, where each node is equipped with multiple antennas and where each antenna can be configured to listen or transmit irrespectively of the others, the existence of an approximately optimal simple relay scheduling policy with at most N+1 active states is proved. Through a line-network example it is also shown that independently switching the antennas at each relay can provide a strictly larger multiplexing gain compared to using the antennas for the same purpose.

Approximate capacity, half-duplex networks, linear programming, relay scheduling policies, submodular functions.

I Introduction

Adding relaying stations to today’s cellular infrastructure promises to boost network performance in terms of coverage, network throughput and robustness. Relay nodes, in fact, provide extended coverage in targeted areas, offering a way through which the base station can communicate with cell-edge users. Moreover, the use of relay nodes may offer a cheaper and less energy-hungry alternative to installing new base stations, especially in regions where the deployment of fiber fronthaul solutions is impossible. Depending on the mode of operation, relays are classified into two categories: Full-Duplex (FD) and Half-Duplex (HD). A relay is said to operate in FD mode if it can receive and transmit simultaneously over the same time-frequency-space resource, and in HD mode otherwise. Although higher rates can be attained with FD relays, due to practical restrictions (such as the inability to perfectly cancel the self-interference [1, 2]) currently employed relays operate in HD mode, unless sufficient isolation between the antennas can be achieved.

Motivated by the current practical importance of relaying stations, in this paper we study networks where the communication between a source and a destination is assisted by N HD relays. In particular, each relay is assumed to operate in time division duplexing, i.e., in time it alternates between transmitting and receiving. In such a network there are 2^N possible listen-transmit states whose probability must be optimized. Due to the prohibitively large complexity of this optimization problem (i.e., exponential in the number of relays N) it is critical to identify, if any, structural properties of such networks that can be leveraged in order to find optimal solutions with limited complexity. This paper uses properties of submodular functions and Linear Programs (LP) to show that a class of memoryless HD multi-relay networks has indeed intrinsic structural properties that guarantee the existence of approximately optimal simple relay scheduling policies that can be determined in polynomial time.

I-A Related Work

The different relaying strategies studied in the literature are largely based on the seminal work by Cover and El Gamal [3] on memoryless FD relay channels. In [3] the authors proposed a general outer bound (now known as the max-flow min-cut outer bound, or cut-set for short) and two achievable strategies named Decode-and-Forward (DF) and Compress-and-Forward (CF). In [4], these bounds were extended to networks with multiple FD relays. The capacity of a multi-relay network is open in general. In [5], the authors showed that for Gaussian noise networks with FD relays Quantize-reMap-and-Forward (QMF)—a network generalization of CF—achieves the cut-set upper bound to within a constant gap that depends only on the numbers of transmit and receive antennas of the nodes. For single-antenna nodes, this gap was reduced in [6] by means of a novel transmission strategy named Noisy Network Coding (NNC)—also a network generalization of CF. In [7, 8], the authors showed that for Gaussian FD multi-relay networks with a sparse topology, namely diamond networks without a direct source-destination link and with non-interfering relays, the gap can be reduced further.

Relevant past work on HD multi-relay networks comprises the following papers. By following the approach of [9], in [10] the authors evaluated the cut-set upper bound for Gaussian multi-relay networks and, for the case of single-antenna nodes, they showed that a lattice-code implementation of QMF is optimal to within a constant number of bits per channel use [10, Theorem 2.3]. Recently, in [11] we showed that the gap can be reduced by using NNC. In general, finding the capacity of a single-antenna Gaussian HD multi-relay network is a combinatorial problem since the cut-set upper bound is the minimum of 2^N bounds (one for each possible cut in the network), each of which is a linear combination of 2^N relay states (since each of the N relays can either transmit or receive). Thus, as the number of relays increases, optimizing the cut-set bound becomes prohibitively complex. Identifying structural properties of the cut-set upper bound, or of a constant gap approximation of the cut-set upper bound, is therefore critical for efficient numerical evaluations and can have important practical consequences for the design of simple / reduced-complexity relay scheduling policies.

In [12], the authors analyzed the single-antenna Gaussian HD diamond network with N relays and proved that at most N+1 states, out of the 2^N possible ones, suffice to approximately (to within a constant gap) characterize the capacity. We say that these states are active (have a strictly positive probability) and form an (approximately) optimal simple schedule. In [13], Brahma et al verified through extensive numerical evaluations that single-antenna Gaussian HD diamond networks with a small number of relays have (approximately) optimal simple schedules and conjectured this to be true for any number of relays. In [14], Brahma et al’s conjecture was proved for single-antenna Gaussian HD diamond networks with a limited number of relays; the proof is by contradiction and uses properties of submodular functions and LP duality but requires numerical evaluations; for this reason the authors could only prove the conjecture for small networks, since for larger values of N “the computational burden becomes prohibitive” [14, page 1]. Our numerical experiments in [15] showed that Brahma et al’s conjecture holds for general single-antenna Gaussian HD multi-relay networks (i.e., not necessarily with a diamond topology) with a small number of relays; we conjectured that the same holds for any N. If our more general version of Brahma et al’s conjecture is true, then single-antenna Gaussian HD multi-relay networks have (approximately) optimal simple schedules irrespectively of their topology, i.e., known results for diamond networks are not a consequence of the simplified / sparse network topology. In this work, we formally prove the conjecture for a general Multiple-Input-Multiple-Output (MIMO) Gaussian HD multi-relay network and show that this result holds beyond Gaussian noise networks.

In [11] we also discussed polynomial-time algorithms to determine the (approximately) optimal simple schedule and their extensions beyond relay networks. Other algorithms seeking to determine optimal relay scheduling policies, but not focused on characterizing the minimum number of active states, are available in the literature. The authors of [16] proposed an iterative algorithm to determine the optimal schedule when the relays use DF. In [17] the authors proposed a ‘grouping’ technique to find the relay schedule that maximizes the approximate capacity of certain Gaussian HD relay networks, including for example layered networks; because finding a good node grouping is computationally complex, the authors proposed a heuristic approach based on tree decomposition that results in polynomial-time algorithms; as for diamond networks in [13], the low-complexity algorithm of [17] relies on the ‘simplified’ topology of certain networks. As opposed to these works, we propose a polynomial-time algorithm that determines the (approximately) optimal simple relay policy with a number of active states at most equal to the number of relays plus one for any network topology.

The first step in the derivation of our main result uses [18, Theorem 1] that states that for FD relay networks “under the assumption of independent inputs and noises, the cut-set bound is submodular”; wireless erasure networks, Gaussian networks and their linear deterministic high-SNR approximations are examples for which [18, Theorem 1] holds.

I-B Contributions

In this work we study multi-relay HD networks. In particular, we seek to identify properties of the network that allow for the reduction of the complexity in computing an (approximately) optimal relay scheduling policy. Our main contributions can be summarized as follows:

1. We formally prove Brahma et al’s conjecture beyond the Gaussian noise case. In particular, we prove that for any HD network with N relays, with independent noises and for which independent inputs in the cut-set bound are approximately optimal, an (approximately) optimal relay policy is simple. The key idea is to use the Lovász extension and the greedy algorithm for submodular polyhedra to highlight structural properties of the minimum of a submodular function. Then, by using the saddle-point property of min-max problems and the existence of optimal basic feasible solutions for LPs, the existence of an (approximately) optimal relay policy with the claimed number of active states can be shown.

2. We propose an iterative algorithm to find the (approximately) optimal simple relay schedule, which alternates between minimizing a submodular function and maximizing a LP. The algorithm runs in polynomial-time (in the number of relays N) since the unconstrained minimization of a submodular function can be performed in strongly polynomial-time and a LP maximization can also be performed in polynomial-time.

3. For Gaussian noise networks with multi-antenna nodes, where the antennas at the relays may be switched between transmit and receive modes independently of one another, we prove that NNC is optimal to within a constant number of bits per channel use per antenna, and that an (approximately) optimal schedule has at most N+1 active states (as in the single-antenna case) regardless of the total number of antennas in the system. We also show, through two examples, that switching the antennas at each relay independently achieves in general higher rates than using all of them for the same purpose (either listen or transmit).

I-C Paper Organization

The rest of the paper is organized as follows. Section II describes the general memoryless HD multi-relay network. Section III first summarizes some known results for submodular functions and LPs, then proves the main result of the paper, and finally designs a polynomial-time algorithm to find the (approximately) optimal simple relay schedule. Section IV applies the main result to Gaussian noise networks with multi-antenna nodes. In particular, we first show that NNC achieves the cut-set outer bound to within a constant gap that only depends on the total number of antennas, then we prove that the number of active states only depends on the number of relays (and not on the number of antennas), and we finally show that switching the antennas at each relay independently achieves higher rates than using all of them for the same purpose (either listen or transmit). Section V concludes the paper. Some proofs may be found in the Appendix.

I-D Notation

In the rest of the paper we use the following notation convention. With we indicate the set of integers from to . For an index set we let . For two sets , indicates that is a subset of , represents the union of and , while represents the intersection of and . With we denote the empty set and indicates the cardinality of the set . Lower and upper case letters indicate scalars, boldface lower case letters denote vectors and boldface upper case letters indicate matrices (with the exception of , which denotes a vector of length with components ). denotes the all-zero column vector of length , while is the all-zero matrix of dimension . is a column vector of length of all ones and is the identity matrix of dimension . is the determinant of the matrix and is the trace of the matrix . For a vector we let be a diagonal matrix with the entries of on the main diagonal, i.e., , where is the Kronecker delta function. To indicate the block matrix , we use the Matlab-inspired notation ; for the same block matrix , the notation indicates a submatrix of where only the blocks in the rows indexed by the set and the blocks in the columns indexed by the set are retained. is the absolute value of and is the norm of the vector ; is the complex conjugate of , is the transpose of the vector and is the Hermitian transpose of the vector . indicates that is a proper-complex Gaussian random variable with mean and variance . indicates the expected value; for and .

II System Model

A memoryless relay network has one source (node 0), one destination (node N+1), and N relays indexed from 1 to N. It consists of N+1 input alphabets (here 𝒳_i is the input alphabet of node i, except for the source / node 0 where, for notational convenience, we use 𝒳_{N+1} rather than 𝒳_0), N+1 output alphabets (here 𝒴_i is the output alphabet of node i), and a transition probability P_{Y_{[1:N+1]} | X_{[1:N+1]}}. The source has a message W for the destination, uniformly distributed on its message set; R denotes the transmission rate in bits per channel use (logarithms are in base 2). At time i, the source maps its message W into a channel input symbol X_{N+1,i}(W), and the k-th relay, k ∈ [1:N], maps its past channel observations Y_k^{i−1} into a channel input symbol X_{k,i}(Y_k^{i−1}). The channel is assumed to be memoryless, that is, the following Markov chain holds for all i

 (W, Y^{i−1}_{[1:N+1]}, X^{i−1}_{[1:N+1]}) → X_{[1:N+1],i} → Y_{[1:N+1],i}.

At the end of the transmission, the destination outputs an estimate Ŵ of the message W based on all its channel observations. A rate R is said to be ε-achievable if there exists a sequence of codes indexed by the block length such that the probability of error P[Ŵ ≠ W] is at most ε for sufficiently large block length. The capacity C is the largest non-negative rate that is ε-achievable for every ε > 0.

In this general memoryless framework, each relay can listen and transmit at the same time, i.e., it is a FD node. HD channels are a special case of the memoryless FD framework in the following sense [9]. With a slight abuse of notation compared to the previous paragraph, we let the channel input of the k-th relay, k ∈ [1:N], be the pair (X_k, S_k), where X_k is as before and S_k is the state random variable that indicates whether the k-th relay is in receive-mode or in transmit-mode. In the HD case the transition probability is specified as P_{Y_{[1:N+1]} | X_{[1:N+1]}, S_{[1:N]}}. In particular, when the k-th relay, k ∈ [1:N], is listening, the outputs are independent of its input X_k, while when the k-th relay is transmitting, its output Y_k is independent of all other random variables.

The capacity of the HD multi-relay network is not known in general, but can be upper bounded by the cut-set bound

 C ≤ max_{P_{X_{[1:N+1]}, S_{[1:N]}}} min_{A⊆[1:N]} I^{(rand)}_A,  (1)

where

 I^{(rand)}_A := I(X_{N+1}, X_{A^c}, S_{A^c}; Y_{N+1}, Y_A | X_A, S_A)  (2)
 ≤ H(S_{A^c}) + I^{(fix)}_A,  (3)

for

 I^{(fix)}_A := I(X_{N+1}, X_{A^c}; Y_{N+1}, Y_A | X_A, S_{[1:N]})  (4)
 = ∑_{s∈[0:1]^N} λ_s f_s(A),  (5)

where

 λ_s := P[S_{[1:N]} = s] ∈ [0,1] : ∑_{s∈[0:1]^N} λ_s = 1,  (6)
 f_s(A) := I(X_{N+1}, X_{A^c}; Y_{N+1}, Y_A | X_A, S_{[1:N]} = s), s ∈ [0:1]^N.  (7)

In the following, we use the notation s ∈ [0:1]^N interchangeably to index all possible binary vectors of length N, as well as to indicate the decimal representation of such a binary vector. I^{(rand)}_A in (2) is the mutual information across the network cut A when a random schedule is employed, i.e., information is conveyed from the relays to the destination by switching between listen and transmit modes of operation at random times [9] (see the term H(S_{A^c}) in (3)). I^{(fix)}_A in (4) is the mutual information with a fixed schedule, i.e., the time instants at which a relay transitions between listen and transmit modes of operation are fixed and known to all nodes in the network [9] (see the term S_{[1:N]} in the conditioning in (4)). Note that, by (3), fixed schedules are optimal to within H(S_{A^c}) ≤ N bits.

III Simple schedules for a class of HD multi-relay networks

We next consider networks for which the following holds: there exists a product input distribution
(8a)
for which we can evaluate the set function I^{(fix)}_A in (4) for all A ⊆ [1:N] and bound the capacity as
(8b)
where the two gap constants are non-negative and may depend on N but not on the channel transition probability. In other words, we concentrate on networks for which using independent inputs and a fixed relay schedule in the cut-set bound provides both an upper bound and a lower bound on the capacity, each to within a constant number of bits.
The main result of the paper is:
Theorem 1.

If in addition to the assumptions in (8) it also holds that

1. the “noises are independent,” that is

 P_{Y_{[1:N+1]} | X_{[1:N+1]}, S_{[1:N]}} = ∏_{i∈[1:N+1]} P_{Y_i | X_{[1:N+1]}, S_{[1:N]}},  (8c)
2. and that the functions f_s(A) in (7) do not depend on the schedule probabilities, i.e., they can depend on the state s but not on the λ_s in (6),

then simple relay policies are optimal in (8b), i.e., the optimal probability mass function of S_{[1:N]} has at most N+1 non-zero entries / active states.

We first give some general definitions and summarize some properties of submodular functions and LPs in Section III-A; we then prove Theorem 1 in Sections III-B to III-E, by also illustrating the different steps of the proof for the case N = 2. Finally, in Section III-F we discuss the computational complexity of finding (approximately) optimal simple schedules.

III-A Submodular Functions, LPs and Saddle-point Property

The following are standard results in submodular function optimization [19] and LPs [20].

Definition 1 (Submodular function, Lovász extension and greedy solution for submodular polyhedra).

A set-function f : 2^{[1:N]} → ℝ is submodular if and only if, for all subsets A_1, A_2 ⊆ [1:N], we have f(A_1) + f(A_2) ≥ f(A_1 ∪ A_2) + f(A_1 ∩ A_2).¹ (¹A set-function f is supermodular if and only if −f is submodular, and it is modular if it is both submodular and supermodular.)

Submodular functions are closed under non-negative linear combinations.

For a submodular function f such that f(∅) = 0, the Lovász extension is the function f̂ defined as

 f̂(w) := max_{x∈P(f)} w^T x, ∀w ∈ ℝ^N,  (9)

where P(f) is the submodular polyhedron defined as

 P(f) := {x ∈ ℝ^N : ∑_{i∈A} x_i ≤ f(A), ∀A ⊆ [1:N]}.  (10)

The optimal x in (9) can be found by the greedy algorithm for submodular polyhedra and has components

 x_{π_i} = f({π_1, …, π_i}) − f({π_1, …, π_{i−1}}), ∀i ∈ [1:N],  (11)

where π is a permutation of [1:N] such that the weights are ordered as w_{π_1} ≥ w_{π_2} ≥ … ≥ w_{π_N}, and where by definition {π_1, …, π_0} := ∅.

The Lovász extension is a piecewise linear convex function.

Proposition 2 (Minimum of submodular functions).

Let f be a submodular function such that f(∅) = 0 and let f̂ be its Lovász extension. The minimum of the submodular function satisfies

 min_{A⊆[1:N]} f(A) = min_{w∈[0:1]^N} f̂(w) = min_{w∈[0,1]^N} f̂(w),

i.e., f̂ attains its minimum at a vertex of the cube [0,1]^N.
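As an illustration (ours, not from the paper), the greedy construction in Definition 1 and the vertex property in Proposition 2 can be checked numerically. The sketch below uses the stand-in submodular function f(A) = √|A| (a concave function of the cardinality, normalized so that f(∅) = 0) in place of the cut functions f_s(A); the name `lovasz` and the ground-set size are illustrative choices.

```python
import math
from itertools import combinations, product

N = 3

def f(A):
    # Stand-in submodular function: a concave function of |A| is
    # submodular and satisfies f(empty set) = 0.
    return math.sqrt(len(A))

def lovasz(w):
    # Greedy algorithm for the submodular polyhedron (11): sort the
    # coordinates so that w_{pi_1} >= ... >= w_{pi_N}, then
    # x_{pi_i} = f({pi_1,...,pi_i}) - f({pi_1,...,pi_{i-1}}),
    # and f_hat(w) = w^T x.
    order = sorted(range(N), key=lambda i: -w[i])
    val, prev = 0.0, frozenset()
    for i in order:
        cur = prev | {i}
        val += w[i] * (f(cur) - f(prev))
        prev = cur
    return val

# Proposition 2: the minimum of f over subsets equals the minimum of
# the Lovasz extension over the vertices of the cube [0,1]^N.
min_f = min(f(frozenset(A)) for r in range(N + 1)
            for A in combinations(range(N), r))
min_vertex = min(lovasz(w) for w in product([0.0, 1.0], repeat=N))
```

At a vertex w that is the indicator vector of a set B, the telescoping sum collapses to f(B), which is exactly why the two minima coincide.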

Definition 2 (Basic feasible solution).

Consider the LP

 maximize c^T x subject to A x ≤ b, x ≥ 0,

where x is the vector of the n unknowns, c and b are vectors of known coefficients, and A is a known m × n matrix of coefficients. If m < n, a solution for the LP with at most m non-zero values is called a basic feasible solution.

Proposition 3 (Optimality of basic feasible solutions).

If a LP is feasible, then an optimal solution is at a vertex of the (non-empty and convex) feasible set . Moreover, if there is an optimal solution, then an optimal basic feasible solution exists as well.

Proposition 4 (Saddle-point property).

Let φ(x, y) be a function of two vector variables x ∈ X and y ∈ Y. By the minimax inequality we have

 max_{y∈Y} min_{x∈X} φ(x, y) ≤ min_{x∈X} max_{y∈Y} φ(x, y)

and equality holds if the following three conditions hold: (i) X and Y are both convex and one of them is compact, (ii) φ(x, y) is convex in x and concave in y, and (iii) φ(x, y) is continuous.

III-B Overview of the Proof of Theorem 1

The objective is to show that simple relay policies are optimal in (8b). The proof consists of the following steps:

1. We first show that the function I^{(fix)}_A defined in (4) is submodular under the assumptions in (8).

2. By using Proposition 2, we show that the problem in (8b) can be recast into an equivalent max-min problem.

3. With Proposition 4 we show that the max-min problem is equivalent to solving a min-max problem. The min-max problem is then shown to be equivalent to solving N! max-min problems (one for each permutation of [1:N]), for each of which we obtain, by Proposition 3, an optimal basic feasible solution with the claimed maximum number of non-zero entries.

We now give the details for each step in a separate subsection.

III-C Proof Step 1

We show that I^{(fix)}_A in (4) is submodular. The result in [18, Theorem 1] showed that f_s(A) in (7) is submodular for each relay state s under the assumption of independent inputs and independent noises (the same work provides an example of a diamond network with correlated inputs for which the cut-set bound is neither submodular nor supermodular). Since submodular functions are closed under non-negative linear combinations (see Definition 1), this implies that I^{(fix)}_A in (5) is submodular under the assumptions of Theorem 1. For completeness, we provide the proof of this result in Appendix A, where we use Definition 1 as opposed to the “diminishing marginal returns” property of a submodular function used in [18].

Example for N=2

In this setting we have 2^2 = 4 possible cuts, each of which is a linear combination of 2^2 = 4 possible listen/transmit configuration states. In particular, from (5) we have

 A = ∅: I^{(fix)}_∅ := λ_0 f_0(∅) + λ_1 f_1(∅) + λ_2 f_2(∅) + λ_3 f_3(∅),
 A = {1}: I^{(fix)}_{{1}} := λ_0 f_0({1}) + λ_1 f_1({1}) + λ_2 f_2({1}) + λ_3 f_3({1}),
 A = {2}: I^{(fix)}_{{2}} := λ_0 f_0({2}) + λ_1 f_1({2}) + λ_2 f_2({2}) + λ_3 f_3({2}),
 A = {1,2}: I^{(fix)}_{{1,2}} := λ_0 f_0({1,2}) + λ_1 f_1({1,2}) + λ_2 f_2({1,2}) + λ_3 f_3({1,2}),

where, for s ∈ [0:1]^2, the functions f_s in (7) are given by

 f_s(∅) := I(X_3, X_2, X_1; Y_3 | S_{[1:2]} = s),
 f_s({1}) := I(X_3, X_2; Y_3, Y_1 | X_1, S_{[1:2]} = s),
 f_s({2}) := I(X_3, X_1; Y_3, Y_2 | X_2, S_{[1:2]} = s),
 f_s({1,2}) := I(X_3; Y_3, Y_2, Y_1 | X_2, X_1, S_{[1:2]} = s),

and are submodular under the assumptions in (8).

III-D Proof Step 2

Given that I^{(fix)}_A in (4) is submodular, we would like to use Proposition 2 to replace the minimization over the subsets of [1:N] in (8b) with a minimization over the cube [0,1]^N. Since I^{(fix)}_∅ ≠ 0 in general, we define a new submodular function

 g(A) := I^{(fix)}_A − I^{(fix)}_∅  (12)

and proceed as follows

 min_{A⊆[1:N]} I^{(fix)}_A = I^{(fix)}_∅ + min_{A⊆[1:N]} g(A)
 = min_{w∈[0,1]^N} [1 w_{π_1} w_{π_2} … w_{π_N}] [ I^{(fix)}_∅ ; I^{(fix)}_{{π_1}} − I^{(fix)}_∅ ; ⋮ ; I^{(fix)}_{{π_1,…,π_N}} − I^{(fix)}_{{π_1,…,π_{N−1}}} ]  (13)

which implies that the problem in (8b) is equivalent to

 C′ = max_{λ_vect} min_{w∈[0,1]^N} { [1, w^T] H_{π,f} λ_vect },  (14)

where λ_vect is the probability mass function of S_{[1:N]} in (6), and H_{π,f} is defined as

 H_{π,f} := P_π [ 1 0 0 … 0 ; −1 1 0 … 0 ; 0 −1 1 … 0 ; ⋮ ; 0 0 … −1 1 ]_{(N+1)×(N+1)} F_π ∈ ℝ^{(N+1)×2^N},  (15)

where P_π is the permutation matrix that maps [1, w^T] into [1, w_π^T], and F_π is defined as

 F_π := [ f_0(∅) … f_{2^N−1}(∅) ; f_0({π_1}) … f_{2^N−1}({π_1}) ; f_0({π_1,π_2}) … f_{2^N−1}({π_1,π_2}) ; ⋮ ; f_0({π_1,…,π_N}) … f_{2^N−1}({π_1,…,π_N}) ] ∈ ℝ^{(N+1)×2^N},  (16)

with f_s(A) being defined in (7). We have thus expressed our original optimization problem in (8b) as the max-min problem in (14).

Example for N=2

With N = 2, the Lovász extension of g (see Definition 1) is

 (17)

A visual representation of the Lovász extension in (17) on [0,1]^2 is given in Fig. 1, for a particular choice of the values f_s(A).

Let

 i_M := arg max{w_1, w_2} and i_m := arg min{w_1, w_2}.  (18)

The optimization problem in (13) for N = 2 can be written as

 min_{0≤w_{i_m}≤w_{i_M}≤1} { [1 w_{i_M} w_{i_m}] [ 1 0 0 ; −1 1 0 ; 0 −1 1 ] F_π } = min_{0≤w_{i_m}≤w_{i_M}≤1} { [1−w_{i_M}, w_{i_M}−w_{i_m}, w_{i_m}] F_π },  (19)

with

 F_π = [ f_0(∅) f_1(∅) f_2(∅) f_3(∅) ; f_0({i_M}) f_1({i_M}) f_2({i_M}) f_3({i_M}) ; f_0({1,2}) f_1({1,2}) f_2({1,2}) f_3({1,2}) ],  (20)

and finally the optimization problem in (14) is

 C′ = max_{λ_vect} min_{0≤w_{i_m}≤w_{i_M}≤1} { [1−w_{i_M}, w_{i_M}−w_{i_m}, w_{i_m}] F_π [λ_0 ; λ_1 ; λ_2 ; λ_3] }.  (21)

III-E Proof Step 3

In order to solve (14) we would like to reverse the order of the max and the min. We note that the objective function in (14) satisfies the conditions in Proposition 4 (it is continuous; it is convex in w by the convexity of the Lovász extension, and linear (under the assumption in item 2 in Theorem 1), thus concave, in λ_vect; the optimization domain in both variables is compact). Thus, we now focus on the problem

 C′ = min_{w∈[0,1]^N} max_{λ_vect} { [1, w^T] H_{π,f} λ_vect },  (22)

which can be equivalently rewritten as

 C′ = min_{π∈P_N} min_{w_π∈[0,1]^N} max_{λ_vect} { [1, w_π^T] H_{π,f} λ_vect }  (23)
 = min_{π∈P_N} max_{λ_vect} min_{w_π∈[0,1]^N} { [1, w_π^T] H_{π,f} λ_vect },  (24)

where P_N is the set of all the permutations of [1:N]. In (23), for each permutation π, we first find the optimal λ_vect, and then find the optimal w_π. This is equivalent to (24), where again by Proposition 4, for each permutation π, we first find the optimal w_π, and then find the optimal λ_vect.

Let us now consider the inner optimization in (24), that is, the problem

 P1: max_{λ_vect} min_{w_π∈[0,1]^N} { [1, w_π^T] H_{π,f} λ_vect }.  (25)

From Proposition 2 we know that, for a given λ_vect, the optimal w is a vertex of the cube [0,1]^N. For a given π, there are N+1 vertices whose coordinates are ordered according to π. In (25), for each of the feasible vertices of [0,1]^N, it is easy to see that the product [1, w_π^T] H_{π,f} is equal to a row of the matrix F_π. By considering all possible feasible vertices compatible with π we obtain all the N+1 rows of the matrix F_π. Hence, P1 is equivalent to

 P2: maximize τ subject to 1_{(N+1)} τ ≤ F_π λ_vect and 1^T_{2^N} λ_vect = 1, λ_vect ≥ 0_{2^N}, τ ≥ 0.  (26)

The LP P2 in (26) has 2^N + 1 optimization variables (2^N values for λ_vect and one value for τ), N + 2 constraints, and is feasible (consider for example the uniform distribution for λ_vect and the largest τ it allows). Therefore, by Proposition 3, P2 has an optimal basic feasible solution with at most N + 2 non-zero values. Since τ > 0 (otherwise the channel capacity would be zero), it means that λ_vect has at most N + 1 non-zero entries.

Since for each π the optimal λ_vect in (24) has at most N + 1 non-zero values, then also for the optimal permutation the corresponding optimal λ_vect has at most N + 1 non-zero values. This shows that the (approximately) optimal schedule in the original problem in (8b) is simple.

This concludes the proof of Theorem 1.

Example for N=2

For N = 2, we have 2! = 2 possible permutations. From Proposition 2, the optimal w is one of the vertices of [0,1]^2. Let us now focus on the case i_M = 1 and i_m = 2 (a similar reasoning holds for i_M = 2 and i_m = 1 as well). Under this condition, P1 in (25) is the problem in (21). The vertices compatible with this permutation are (w_1, w_2) ∈ {(0,0), (1,0), (1,1)}, which result in the three rows of F_π in (20). This implies that P2 in (26) is

 P2: maximize τ subject to
 τ ≤ f_0(∅) λ_0 + f_1(∅) λ_1 + f_2(∅) λ_2 + f_3(∅) λ_3,
 τ ≤ f_0({1}) λ_0 + f_1({1}) λ_1 + f_2({1}) λ_2 + f_3({1}) λ_3,
 τ ≤ f_0({1,2}) λ_0 + f_1({1,2}) λ_1 + f_2({1,2}) λ_2 + f_3({1,2}) λ_3,
 λ_0 + λ_1 + λ_2 + λ_3 = 1, λ_i ≥ 0, i ∈ [0:3], τ ≥ 0,  (27)

where each of the three inequality constraints corresponds to a different row of F_π multiplied by λ_vect. Therefore, P2 in (27) has four constraints (three from the rows of F_π and one from the normalization of λ_vect) and five unknowns (one value for τ and four entries of λ_vect). Thus, by Proposition 3, P2 has an optimal basic feasible solution with at most four non-zero values, of which one is τ and thus the other (at most) three belong to λ_vect.

By [11, Appendix C], one of the entries of λ_vect can moreover be shown to be zero, thus giving the desired (approximately) optimal simple schedule.
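To make the role of Proposition 3 concrete, the following sketch (ours, not from the paper) solves one instance of the LP P2 in (27) for N = 2 by directly enumerating basic feasible solutions; the entries of F_π are invented numbers, not derived from a real channel.

```python
import numpy as np
from itertools import combinations

# Rows of F_pi: cuts (empty, {1}, {1,2}); columns: the four relay
# states. These values are invented for illustration only.
F = np.array([[1., 3., 2., 4.],
              [3., 1., 4., 2.],
              [2., 2., 2., 2.]])

# LP (27): maximize tau s.t. tau <= (F lam)_i, sum(lam) = 1,
# lam >= 0, tau >= 0, with variables x = (tau, lam_0, ..., lam_3).
rows, rhs = [], []
for i in range(3):                        # tau - F_i . lam <= 0
    rows.append(np.concatenate(([1.], -F[i])))
    rhs.append(0.)
for j in range(5):                        # x_j >= 0
    e = np.zeros(5)
    e[j] = 1.
    rows.append(e)
    rhs.append(0.)
eq = np.concatenate(([0.], np.ones(4)))   # sum(lam) = 1 (always active)

best_tau, best_x = -np.inf, None
# A basic feasible solution activates the equality plus 4 of the 8
# remaining constraints (Proposition 3); enumerate all such bases.
for idx in combinations(range(8), 4):
    A = np.vstack([eq] + [rows[i] for i in idx])
    b = np.array([1.] + [rhs[i] for i in idx])
    if abs(np.linalg.det(A)) < 1e-9:
        continue                          # not a basis
    x = np.linalg.solve(A, b)
    tau, lam = x[0], x[1:]
    feasible = (tau >= -1e-9 and np.all(lam >= -1e-9)
                and np.all(F @ lam >= tau - 1e-9))
    if feasible and tau > best_tau:
        best_tau, best_x = tau, x

active = int(np.sum(best_x[1:] > 1e-9))   # number of active states
```

For this instance the optimum is τ = 2 (the constant last row of F_π caps the objective), and every optimal vertex with τ > 0 activates at most N + 1 = 3 states, matching the count in the text.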

Remark 1.

In order to apply the saddle-point property (see Proposition 4) and hence cast our optimization problem as a LP, the proof of Step 3 requires that the matrix F_π does not depend on λ_vect; this is the reason for our assumption in item 2 in Theorem 1. In our Gaussian noise example (see Section IV), this excludes the possibility of power allocation across the relay states because power allocation makes the optimization problem non-linear in λ_vect.

Remark 2.

As stated in Theorem 1, the assumptions in (8) provide a set of sufficient conditions for the existence of an (approximately) optimal simple schedule. Since those conditions are not necessary, there might exist networks for which the assumptions in (8) are not satisfied, but for which the (approximately) optimal schedule is still simple. Determining necessary conditions for optimality of simple schedules is an interesting challenging open question.

Remark 3.

For FD relays, it was shown in [18] that wireless erasure networks, Gaussian networks with single-antenna nodes and their linear deterministic high-SNR approximations are examples for which the cut-set bound (or an approximation to it) is submodular. Since submodular functions are closed under non-negative linear combinations (see Definition 1), this implies that the cut-set bound (or an approximation to it) is still submodular when evaluated for these same networks with HD relays. As a consequence, Theorem 1 holds for wireless erasure networks, Gaussian networks with single-antenna nodes and their linear deterministic high-SNR approximations with HD relays.

III-F On the complexity of finding the (approximately) optimal simple schedule

Our proof method for Theorem 1 seems to suggest that finding the (approximately) optimal schedule requires the solution of N! different LPs (one for each permutation in P_N). Since N! grows super-exponentially in N, the computational complexity of such an approach would be prohibitive for large N. Next we propose a polynomial-time algorithm in N to determine the (approximately) optimal simple schedule for any network regardless of its connectivity / topology.

The idea is to use an iterative method that alternates between a submodular function minimization over A and a LP maximization over λ_vect. The saddle-point property in Proposition 4, which holds with equality in our setting, ensures that the algorithm converges to the optimal solution. The pseudo-code of the proposed algorithm is given below. The algorithm runs in polynomial-time since:

1. the unconstrained minimization of our submodular function can be solved in strongly polynomial-time in N; in particular, the running time of the algorithm in [21] is polynomial in N and in the time needed to compute f_s(A) in (7) for any subset A and for each state s;

2. by strong duality, the dual of our LP maximization in (14) can be solved in polynomial-time in N; in particular, by means of the ellipsoid method in [22].
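Since the pseudo-code itself does not survive in this version of the manuscript, the following sketch shows one natural instantiation of the alternation as constraint generation; it is our illustration under stated assumptions, not the paper's exact pseudo-code. The per-state values f_s(A) below are invented, the LP step uses `scipy.optimize.linprog`, and brute-force enumeration stands in for the strongly polynomial submodular minimizer of [21].

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

N = 2
S = 2 ** N                         # number of relay states
cuts = [frozenset(A) for r in range(N + 1)
        for A in combinations(range(N), r)]
# Invented per-state cut values f_s(A) (columns: the 2^N states).
f = {frozenset():       np.array([1., 3., 2., 4.]),
     frozenset({0}):    np.array([3., 1., 4., 2.]),
     frozenset({1}):    np.array([4., 2., 3., 1.]),
     frozenset({0, 1}): np.array([2., 2., 2., 2.])}

pool = [frozenset()]               # cut constraints generated so far
tau, lam = 0.0, np.ones(S) / S
for _ in range(2 ** N + 1):
    # LP step: maximize tau s.t. tau <= f[A].lam for A in pool,
    # sum(lam) = 1, lam >= 0; variables x = (tau, lam).
    c = np.zeros(1 + S)
    c[0] = -1.0                    # linprog minimizes, so use -tau
    A_ub = np.array([np.concatenate(([1.], -f[A])) for A in pool])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(pool)),
                  A_eq=np.concatenate(([0.], np.ones(S)))[None, :],
                  b_eq=[1.], bounds=[(0, None)] * (1 + S))
    tau, lam = res.x[0], res.x[1:]
    # Min-cut step: brute force here; a submodular minimizer in general.
    worst = min(cuts, key=lambda A: f[A] @ lam)
    if f[worst] @ lam >= tau - 1e-6:
        break                      # saddle point reached
    pool.append(worst)             # add the violated cut and iterate
```

Each pass adds at most one new cut, so for this toy instance the loop terminates after a handful of iterations with tau equal to the max-min value max_λ min_A Σ_s λ_s f_s(A).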

IV Example: the Gaussian noise case with multi-antenna nodes

In this section we show that Theorem 1 applies to the practically relevant Gaussian noise network where the nodes are equipped with multiple antennas and where the relays operate in HD mode. The complex-valued power-constrained Gaussian MIMO HD relay network has input/output relationship

 y = H_eq x + z ∈ ℂ^{(m_tot + m_{N+1}) × 1},  (28a)
 H_eq := [ I_{m_tot} − S, 0_{m_tot × m_{N+1}} ; 0_{m_{N+1} × m_tot}, I_{m_{N+1}} ] H [ S, 0_{m_tot × m_0} ; 0_{m_0 × m_tot}, I_{m_0} ],  (28b)

where

• m_0 is the number of antennas at the source, m_k is the number of antennas at relay k ∈ [1:N] with m_tot := ∑_{k∈[1:N]} m_k (i.e., m_tot is the total number of antennas at the relays), and m_{N+1} is the number of antennas at the destination.

• y is the vector of the received signals, with y_k being the received signal at node k ∈ [1:N+1].