# Random Access in C-RAN for User Activity Detection with Limited-Capacity Fronthaul

###### Abstract

Cloud-Radio Access Network (C-RAN) is characterized by a hierarchical structure in which the baseband processing functionalities of remote radio heads (RRHs) are implemented by means of cloud computing at a Central Unit (CU). A key limitation of C-RANs is given by the capacity constraints of the fronthaul links connecting RRHs to the CU. In this letter, the impact of this architectural constraint is investigated for the fundamental functions of random access and active User Equipment (UE) identification in the presence of a potentially massive number of UEs. In particular, the standard C-RAN approach based on quantize-and-forward and centralized detection is compared to a scheme based on an alternative CU-RRH functional split that enables local detection. Both techniques leverage Bayesian sparse detection. Numerical results illustrate the relative merits of the two schemes as a function of the system parameters.

## I Introduction

In a Cloud-Radio Access Network (C-RAN) the baseband processing functionality is implemented at a centralized cloud processor or Central Unit (CU) on behalf of multiple distributed Remote Radio Heads (RRHs). This is made possible by the fronthaul links that connect the RRHs to the CU, with possible media that include fiber optic cables, DSL last-mile links or wireless mmWave channels. The capacity limitations of the fronthaul links, along with the associated latency, are understood to pose the most significant challenge to the implementation of C-RANs [1].

A fundamental network function is random access, which is carried out by user equipments (UEs) when first accessing the system. Random access is attracting renewed interest due to the expected increase in the number of UEs in 5G networks, with particular reference to massive access in Internet-of-Things applications (see, e.g., [2]). One of the main goals of the random access procedure is for the network to identify the set of active UEs in order to enable resource allocation. In the context of C-RANs, random access for initial access has a rather novel aspect, as there is no single radio access point to which the terminal is associated.

In this letter, we study user activity detection (UAD) for a C-RAN architecture with the aim of investigating solutions that address the mentioned fronthaul capacity limitations. We assume that the UEs employ non-orthogonal sequences so as to accommodate a potentially massive number of UEs, e.g., machine-type devices, and we do not assume any a priori knowledge of the instantaneous small-scale fading channel realizations. Under the further assumption that the number of active UEs is significantly smaller than the total number of UEs, the signal received at the RRH is sparse with respect to the set of the UE signatures. As a result, the UAD problem becomes one of sparse signal recovery (see, e.g., [3, 4, 5] and references therein). A sparsity-based algorithm for UAD in C-RANs was recently proposed in [6] using a Bayesian formulation under the assumption of ideal, i.e., infinite-capacity, fronthaul links.

With the aim of investigating the impact of fronthaul capacity limitations, in this letter we first study the standard C-RAN implementation, whereby the RRHs quantize the received samples for transmission on the fronthaul links to the CU, which performs centralized baseband processing for UAD (see, e.g., [1]). To the best of our knowledge, the impact of quantization on UAD has not been studied to date. Furthermore, we also consider a baseline scheme that adopts an alternative CU-RRH functional split, in the sense of, e.g., [7], in which part of the baseband processing is carried out at the RRHs. In particular, each of the RRHs performs local UAD, and forwards quantized soft information associated with the local decision in the form of log-likelihood ratios (LLRs) to the CU. The two schemes, which are referred to as Quantize-and-Forward (QF) and Detect-and-Forward (DtF), are compared via numerical results in terms of the trade-off between the fraction of correctly detected active UEs and the fraction of incorrectly detected inactive UEs.

Notation: Uppercase/lowercase boldface letters denote matrices/vectors. Random quantities are represented with standard fonts, while italic is used for deterministic quantities. The superscript stands for Hermitian transposition and stands for Kronecker matrix multiplication. and denote real and circularly symmetric, respectively, Gaussian random variable with expectation and variance . The notation for matrices of suitable sizes represents a matrix that stacks vertically, while stacks horizontally.

## II System Model

We consider a slotted random access system with UEs. The user activity (random) variable equals if UE is active in the given block, which happens with probability , and otherwise. When active, UE transmits over , in general, complex symbols of the time-frequency grid the identification signature

(1) |

The UE signatures are subject to the energy constraint . We assume that the time-frequency grids of different users are aligned, which requires time synchronization. (Our model may be extended to include frame asynchronicity, with symbol-level synchronization intact, by using cyclic-extended signature waveforms based, for example, on Gabor frames or Kerdock codes [4].) Moreover, focusing on a scenario with a potentially massive number of UEs, the signatures are assumed to be non-orthogonal. Assuming a block-fading model with a coherence time-frequency span no smaller than that occupied by the signatures' transmission, the signal received at RRH , , reads

(2) |

where and are the large- and small-scale fading coefficients, respectively, for the link between the -th UE and the -th RRH, and is an additive noise vector. We further assume that the coefficients are independent and identically distributed (i.i.d.) and the elements of the noise vector are i.i.d. . The large-scale fading coefficients are assumed to be known to the RRHs and to the CU as in, e.g., [6], in contrast to the unknown small-scale fading coefficients . We assume that the UEs, the RRHs and the CU know the small-scale fading statistics and the probability of UE activation .

The fronthaul capacity limitations are expressed in terms of the number of bits per received complex sample, available for transmission on the fronthaul link between RRH and the CU. The system model is illustrated in Fig. 1. To elaborate further, we rewrite the received signal (2) as

(3) |

where the columns of represent the UEs’ signatures; is a diagonal matrix with , , on the main diagonal; is a vector of the small-scale fading coefficients; and is a diagonal matrix with vector on the main diagonal. We can further simplify (3) as

(4) |

with the definitions and . Note that the columns of are the signatures of the UEs scaled by the corresponding large-scale fading coefficients of the links between the UEs and the -th RRH, which are assumed to be known to the RRH, while depends on the unknown user activity and small-scale fading variables.

The model (4) can be interpreted within a Bayesian formulation of the sparse detection problem (see, e.g., [8]). In fact, the unknown is a Bernoulli-Gaussian random vector, with entries , being equal to the channel with probability , accounting for the case of an active UE (), or else equal to zero with probability when UE is inactive (). When is small, and in the absence of fronthaul capacity limitations, UAD hence translates into estimating the support of a sparse i.i.d. Bernoulli-Gaussian vector from a Multiple Measurement Vector (MMV) model (see, e.g., [8]), as investigated in [6].
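The Bernoulli-Gaussian MMV model just described can be illustrated with the short sketch below, which draws an i.i.d. activity pattern and generates one sparse observation per RRH with a common support. All names and values here (`K` UEs, `N` signature symbols, `B` RRHs, activity probability `p`, the noise level) are illustrative assumptions, not the letter's notation.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, B, p = 100, 32, 4, 0.1   # UEs, signature length, RRHs, activity probability

# i.i.d. Bernoulli activity pattern: UE k is active with probability p
activity = rng.random(K) < p

# Bernoulli-Gaussian vectors: zero when inactive, CN(0,1) small-scale fading when active
h = (rng.standard_normal((B, K)) + 1j * rng.standard_normal((B, K))) / np.sqrt(2)
X = h * activity               # shape (B, K): one sparse vector per RRH, shared support

# Non-orthogonal random Gaussian signatures (one column per UE), as in Sec. IV
S = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2 * N)

# Noisy linear mixtures at the B RRHs: the MMV observation model
noise = 0.05 * (rng.standard_normal((N, B)) + 1j * rng.standard_normal((N, B)))
Y = S @ X.T + noise            # shape (N, B)
```

UAD then amounts to recovering the common support of the columns of `X` from `Y`, which is what the schemes of Section III do under fronthaul constraints.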

## III Fronthaul and UAD Processing

In this section, we discuss two baseline schemes that account for fronthaul capacity limitations, namely QF and DtF.

### III-A Quantize-and-Forward (QF)

With QF, each RRH quantizes the measurement in (4) with bits per sample, and forwards the quantized samples to the CU. The signal received by the CU on the -th fronthaul can hence be written as

(5) |

where is a quantization function, applied element-wise to the entries of the argument vector, with resolution bits. We assume here that the function amounts to two scalar uniform quantizers with levels applied to the real and imaginary parts of the entries of . The dynamic range is selected so as to capture three standard deviations for both positive and negative values of each real component.
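A minimal sketch of the per-sample quantizer described above follows, applying a mid-rise uniform quantizer over three standard deviations to the real and imaginary parts separately. The function name, the mid-rise level placement, and the clipping rule at the range edges are our assumptions; `bits` is the resolution per real component.

```python
import numpy as np

def uniform_quantize(z, bits, sigma):
    """Uniform mid-rise quantizer with 2**bits levels over [-3*sigma, 3*sigma],
    applied element-wise and separately to real and imaginary parts (a sketch)."""
    levels = 2 ** bits
    delta = 6.0 * sigma / levels                      # step size over the dynamic range

    def q(u):
        # Cell index, clipped so out-of-range samples saturate at the edge cells
        idx = np.clip(np.floor(u / delta), -levels // 2, levels // 2 - 1)
        return (idx + 0.5) * delta                    # reconstruct at the cell midpoint

    return q(np.real(z)) + 1j * q(np.imag(z))
```

For a fronthaul budget of C bits per complex sample, one would call this with `bits = C // 2` per real dimension, with `sigma` the per-component standard deviation of the received samples.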

The overall signal retrieved by the CU from the fronthaul links can then be expressed in compact form by defining as the diagonal matrix with the vector of large-scale fading coefficients on the main diagonal, along with the unknown vector . In particular, we can write

(6) |

where we have , , and is to be understood as being the same as whenever it is applied to a component of coming from .

The CU performs UAD based on the received signal (6) by implementing a sparsity-based reconstruction algorithm, e.g., in the spirit of compressive sensing (CS), that aims at estimating the support of the Bernoulli-Gaussian vector from the linear mixture , with , as observed after the application of the sample-by-sample non-linearity . In particular, the unknown vector in (6) is characterized by group sparsity, since each subvector equals an all-zero vector if UE is not active, i.e., if , and is generally non-zero when . For the purpose of signal reconstruction, here we adopt the Hybrid Generalized Approximate Message Passing (H-GAMP) method developed in [9], [10], which extends the Generalized AMP (GAMP) scheme [11] and accommodates both nonlinear measurements (i.e., quantization) and group sparsity. Details of the GAMP implementation for de-quantization in compressive sensing may be found in [12].

H-GAMP, like GAMP, is based on a quadratic approximation of the sum-product message passing scheme and operates by exchanging messages on the factor graph that describes the joint distribution . More precisely, since the H-GAMP algorithm operates on real-valued variables, we first redefine the signal model (6) as follows: (i) each entry of the vectors and is substituted by two real entries corresponding to its real and imaginary parts; (ii) each of the entries of matrix is substituted by the submatrix . Note that each subvector is now of size , instead of , but the group sparsity properties of the vector are preserved.

Denoting by and the real-valued vectors introduced above, the joint distribution of , and over which the H-GAMP algorithm operates factors as

(7) |

where denotes the index of the UE that corresponds to entry ; , with being the -th row of ; and . In (7), we have ; amounts to the probability density function if , and to a Kronecker delta function centred at if ; and

(8) |

where is the inverse of the component of the quantization function that applies to the entry and denotes the Gaussian probability density function with mean and variance . Note that the factorization (7) uses the fact that, for a given UE activity pattern , the small-scale channel coefficients in are independent.

The H-GAMP algorithm, which is detailed in [9] and [10], can be directly applied to (7) to output an approximation of the posterior distributions for all UE . From these probabilities, the log-likelihood ratio (LLR) associated with the belief that UE is active is computed as . Based on the LLR , the CU estimates the user activity variable as if for some threshold and otherwise. The details of the H-GAMP implementation are provided in Appendix A.
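The final decision step described above, converting the H-GAMP posterior activity probability into an LLR and thresholding it, can be sketched as follows; the function name and the vectorized form are our assumptions, and the posteriors themselves would come from running H-GAMP.

```python
import numpy as np

def uad_decision(post, threshold=0.0):
    """Threshold test on the activity LLR L_k = log q_k - log(1 - q_k),
    where q_k is the posterior probability that UE k is active (a sketch)."""
    q = np.asarray(post, dtype=float)
    llr = np.log(q) - np.log1p(-q)      # LLR from the posterior activity probability
    return llr, llr > threshold         # per-UE LLRs and activity decisions
```

Sweeping `threshold` is what traces the correct-detection versus false-alarm curves of Section IV.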

Remark 1: In the presence of packetized fronthaul transmission, e.g., via Ethernet, instead of a rate constraint on the fronthaul link it is relevant to consider a constraint on the overall number of bits that the RRH can communicate to the CU. In this case, it is possible to trade the signature length, say , against the number of bits per complex sample, say , under the constraint . In the single measurement vector (SMV) setting, this problem has been studied in the framework of the recovery of sparse signals from -bit measurements in [13], [14], and for quantized measurements with multiple quantization levels in [15].

### III-B Detect-and-Forward (DtF)

With DtF, each of the RRHs performs a local estimate of the UEs' activity pattern and then forwards quantized soft information on these estimates to the CU over the capacity-limited fronthaul links. Specifically, each RRH calculates , which is the LLR associated with the belief of UE being active based on the observation in (4). This can be done by means of H-GAMP, following the same approach discussed above. Each RRH then quantizes each LLR as , where is a scalar quantization function applied to the (real-valued) argument. Given the fronthaul rate , the quantizer has a number of levels equal to , since there are LLRs to quantize with a total of bits. The dynamic range is selected based on preliminary Monte Carlo simulations to capture a 95% confidence interval.

UAD is finally performed at the CU by summing the LLRs obtained from all the RRHs. Specifically, for UE , the CU computes and then applies a threshold test on the resulting LLR . We observe that this test is optimal, in the case of unquantized LLRs, when the observations of the RRHs are conditionally i.i.d. given the transmitted signatures; this is, in general, not the case here due to the different large-scale fading coefficients . Generalized rules that capture asymmetries in the quality of the observations at the RRHs can be devised, but they will not be further investigated here.
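The DtF pipeline, per-RRH LLR quantization followed by CU-side summation and thresholding, can be sketched as below. Function names are ours, and the dynamic range is taken as an input rather than derived from a Monte Carlo confidence interval as in the letter.

```python
import numpy as np

def quantize_llr(llr, bits, dyn_range):
    """Uniform scalar quantization of real-valued LLRs over [-dyn_range, dyn_range]
    with 2**bits levels (a sketch; mid-rise placement is our assumption)."""
    levels = 2 ** bits
    delta = 2.0 * dyn_range / levels
    idx = np.clip(np.floor(np.asarray(llr, float) / delta),
                  -levels // 2, levels // 2 - 1)
    return (idx + 0.5) * delta

def dtf_fuse(llr_per_rrh, threshold=0.0):
    """CU-side fusion: sum the (quantized) per-RRH LLRs for each UE and apply a
    threshold test. llr_per_rrh has shape (B, K) for B RRHs and K UEs; the sum
    rule is optimal only for conditionally i.i.d. observations, as noted above."""
    return np.sum(llr_per_rrh, axis=0) > threshold
```

The equal-weight sum is the baseline rule studied here; the generalized rules mentioned in the text would replace it with a weighted combination reflecting per-RRH observation quality.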

Remark 2: In this letter, we consider scalar quantization for both QF and DtF. It should be mentioned that improved performance could be obtained by leveraging compression techniques in lieu of scalar quantization. For instance, for DtF, the RRH could first perform an estimate of the Bernoulli vector associated with the UE activity pattern and then compress it losslessly using the knowledge of the probability (see [16] for related discussion on the compression of sparse sources).

## IV Numerical Results

In this section, we perform a numerical analysis with the aim of capturing the relative merits of the standard C-RAN implementation with centralized baseband processing and of the alternative RRH-CU functional split in which part of the baseband processing is performed at the RRHs, under fronthaul capacity constraints. In particular, we compare the UAD performance of QF and DtF as a function of the key system parameters given by the number of UEs , the probability of UE activation , the signature length , the number of RRHs , and the fronthaul capacity .

For simplicity, in the following we assume a dense network with large-scale fading coefficients for all pairs of UEs and RRHs. The average signal-to-noise ratio (per system user) is defined as . In the following, we assume the UE signature vectors to be random, with i.i.d. circularly-symmetric complex Gaussian entries, for which the convergence of approximate message passing schemes has been studied rigorously [17], [11, 9]. We note that the described schemes are not tied to this specific choice of signature sequences, and other constructions, e.g., based on Gabor frames or Kerdock/Reed-Muller codes, may be used. In that case, however, the convergence of H-GAMP, and of approximate message passing in general, has to be addressed accordingly (see [18] for a related discussion).

Fig. 2 investigates the effect of the number of RRHs on the UAD performance of the QF scheme. Specifically, we plot the correct detection ratio versus the false alarm ratio, where the former is the ratio of the number of correctly detected users over the number of active users, and the latter is the ratio of the inactive UEs that are detected as being active over the number of active users. Note that these performance metrics reflect the classical trade-off in hypothesis testing between the two types of probability of error, accounting for incorrect detection and false alarm events. Furthermore, the curves are obtained by varying the thresholds in the tests introduced in the text above. The fronthaul rate is fixed to bits per (complex) symbol for , so that the total number of fronthaul bits per packet is (see Remark 1). We also consider a shorter signature of , in which case we set in order to keep the same total number of fronthaul bits.
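The two performance metrics just defined can be computed as in the sketch below. Note that, per the definition above, the false alarm count is normalized by the number of *active* UEs (not by the number of inactive ones, as in the usual false alarm probability); the function name is our assumption.

```python
import numpy as np

def uad_metrics(detected, active):
    """Correct detection ratio and false alarm ratio, both normalized by the
    number of active UEs as defined in the text (a sketch)."""
    detected = np.asarray(detected, dtype=bool)
    active = np.asarray(active, dtype=bool)
    n_active = max(int(active.sum()), 1)   # guard against an all-inactive block
    correct = np.logical_and(detected, active).sum() / n_active
    false_alarm = np.logical_and(detected, ~active).sum() / n_active
    return correct, false_alarm
```

Evaluating these metrics over a sweep of the decision threshold yields the trade-off curves plotted in Fig. 2.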

From Fig. 2, we first observe the expected trade-off between the correct detection ratio and the false alarm ratio. More interestingly, we note the sharp improvement in performance as the number of RRHs increases, an effect similar to the one observed with the rank-aware CS reconstruction algorithms in [19]. Finally, we note that trading the signature length for a larger quantization depth deteriorates the performance, demonstrating the advantage of using a longer signature over a more refined quantization.

Fig. 3 aims at investigating the impact of fronthaul limitations on the performance of QF and DtF.

The comparison between the two schemes is performed here under a fixed average false alarm ratio of (see Fig. 2 for an illustration). The signature length is and we consider different values for the number of RRHs. We observe that DtF outperforms QF for stringent fronthaul capacity constraints. This is due to the fact that the performance of QF under a small fronthaul bit budget is hampered by the coarseness of the fronthaul quantization (see [20] for a related discussion). Instead, DtF benefits from the local UAD processing done at the RRHs to reduce the amount of information that needs to be transmitted to the CU (see also [16]). In the complementary regime in which the fronthaul capacity is sufficiently large, QF outperforms DtF. In fact, when the quantized signals are sufficiently accurate, QF benefits from the joint processing capabilities of the CU to perform UAD on the signals received by the RRHs. This is unlike DtF, in which local UAD processing prevents the CU from having direct access to the measurements of the RRHs. Finally, it is observed that the performance of UAD saturates at (relatively) moderate values of , demonstrating that in this regime the performance is limited by the signature length rather than by the fronthaul capacity constraints.

## V Conclusions

We investigated the impact of fronthaul capacity limitations in a C-RAN architecture on the functions of random access and active UE identification in the presence of a potentially massive number of UEs. In particular, we studied the performance of two baseline algorithmic solutions leveraging Bayesian sparse detection: a standard C-RAN approach based on quantize-and-forward (QF) and an alternative scheme based on detect-and-forward (DtF). Numerical results illustrate the relative merits of the two schemes. While we have concentrated here on the function of user activity detection, our framework could also be extended to integrate data transmission by assigning a subset of sequences to serve as codewords for each of the system users. Interesting future work includes the analysis of the impact of more sophisticated compression techniques for fronthaul transfer on the performance of random access.

## Appendix A Details on the H-GAMP Algorithm

### A-A Overview

Under the graphical model (7), Appendix C in [10] shows that the sum-product version of the Hybrid-GAMP algorithm reduces to the GAMP procedure in [11] run in parallel with updates of the sparsity levels. Specifically, each iteration has two stages. The first half of the iteration, labeled the "basic GAMP update", is identical to the standard updates of the basic GAMP algorithm [11], treating the components as independent with sparsity level . The second half of the iteration, labeled the "sparsity update", updates the sparsity levels based on the estimates from the basic GAMP half of the algorithm. The quantity is an estimate of the probability that the component belongs to an active group, i.e., that the UE with index is active, as we are dealing with nonoverlapping groups. In the following, we will use to denote the group (set) of indices of the vector associated with UE , . The details of the algorithm may be obtained directly from [9], [10] (the main H-GAMP references) and [12] (which includes details of the GAMP implementation in the context of de-quantization in compressive sensing), but are summarized below for reference.

### A-B GAMP Update

Due to the assumed independence of the components during the GAMP update in iteration , the GAMP algorithm operates on the factor graph described by

(9) |

with the aim of estimating from the observation vector . In (9) the symbol denotes identity after normalization to unity. For the prior of , which is updated in each iteration , we have that with probability , and with probability .

By reserving the indices for the variable nodes, and for the factor nodes, in a belief propagation (BP) setting, the following messages are passed along the edges of the graph (9)

(10) | |||||

(11) |

In (11) the integration is over all elements of except .

From (11) the approximate marginal distribution is computed as

(12) |

Finally, the component of the estimate is computed as

(13) |

GAMP relies on a Gaussian approximation to overcome the complexity of the BP message passing scheme. Details of the implementation of the GAMP algorithm for a generalized linear mixing problem with quantized outputs are presented in [12]. The only difference here is that the distribution of the components of is updated in each iteration during the sparsity level update. Following [12], here we simply summarize the GAMP equations, with the discussed sparsity level adaptation in mind.

#### A-B1 Initialization

We start by setting/evaluating

(14) | |||||

(15) | |||||

(16) |

where the expectation and the variance are with respect to the (Bernoulli-Gaussian) prior . The initial values of the sparsity levels are set to , , where is the probability of a UE being active.

#### A-B2 Factor Update

In the factor update we first compute the linear step

(17) | |||||

(18) |

where denotes the Hadamard product (component-wise multiplication). Then, we evaluate the nonlinear step

(19) | |||||

(20) |

where is the all-ones vector.

The scalar functions and , applied over the components of , are defined as

(21) | |||||

(22) |

where the expectation and the variance are evaluated with respect to .

#### A-B3 Variable Update

In the variable update we first compute the linear step

(23) | |||||

(24) |

where denotes component-wise exponentiation. Then, we evaluate the nonlinear step

(25) | |||||

(26) |

where the superscript in is used to denote that the distribution of is updated in each iteration . The scalar functions and are applied component-wise and are given by

(27) | |||||

(28) |

The expected value and the variance are evaluated with respect to

(29) |

The above equations may be interpreted as a denoising process operating on the scalar variables and , where: with probability , and with probability ; is an AWGN-corrupted version of , namely

(30) |
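The scalar denoising interpretation above, estimating a Bernoulli-Gaussian scalar from an AWGN-corrupted observation, can be sketched as follows. This computes the posterior mean and variance used in the nonlinear variable-update step; the function name, the unit-variance active prior, and the real-valued formulation are our assumptions.

```python
import numpy as np

def bg_denoiser(r, tau, rho, var_x=1.0):
    """Posterior mean/variance of a Bernoulli-Gaussian scalar x
    (x = 0 w.p. 1-rho; x ~ N(0, var_x) w.p. rho) from r = x + v, v ~ N(0, tau).
    A sketch of the scalar denoising step; not the letter's exact notation."""
    s = var_x + tau
    # Unnormalized likelihoods of r under the active / inactive hypotheses
    num = rho * np.exp(-r**2 / (2.0 * s)) / np.sqrt(s)
    den = (1.0 - rho) * np.exp(-r**2 / (2.0 * tau)) / np.sqrt(tau)
    pi = num / (num + den)            # posterior probability that x is active
    m1 = (var_x / s) * r              # active-branch posterior mean
    v1 = var_x * tau / s              # active-branch posterior variance
    mean = pi * m1
    var = pi * (v1 + m1**2) - mean**2 # law of total variance over the two branches
    return mean, var
```

For `rho = 1` this reduces to the familiar linear MMSE shrinkage `r * var_x / (var_x + tau)`, while for small `rho` it strongly shrinks weak observations toward zero, which is the source of the sparsity-promoting behavior.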

### A-C Sparsity Level Update

Based on the output of the GAMP part of the iteration, the sparsity update stage updates the quantity which, we recall, represents an estimate for the probability that the UE with index is active.

We start by defining the message

(31) |

As we are dealing with nonoverlapping groups, the message (31) is the ratio of likelihood of the output given that belongs to an active group (i.e. ) to the likelihood given that does not belong to an active group (i.e. ). From (30) we have

(32) |

where denotes the pdf of a Gaussian scalar random variable with mean and variance .

Defined in this way, may be understood as a (local) estimate of the log-likelihood ratio

(33) |

The estimate is updated by "collecting" the messages corresponding to all indices from the group (except )

(34) |

Finally, the procedure returns the estimate

(35) |

which is used in the next iteration of the GAMP update part of the algorithm.
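The sparsity-level update can be sketched as below for one UE group: per-entry log-likelihood ratios of the active (Gaussian) versus inactive (zero) hypotheses under the AWGN model (30) are summed with the prior LLR and mapped back to a probability. For simplicity this sketch combines all of the group's entries into the final estimate rather than forming the leave-one-out messages of (34); names and the unit-variance prior are our assumptions.

```python
import numpy as np

def sparsity_update(r, tau, rho, var_x=1.0):
    """Updated activity probability of one UE group from the GAMP outputs r
    (denoiser inputs for the group's entries) and effective noise level tau.
    A sketch of the likelihood-ratio combining of (31)-(35)."""
    r = np.asarray(r, dtype=float)
    s = var_x + tau
    # Per-entry LLR: log N(r; 0, var_x + tau) - log N(r; 0, tau)
    log_lr = 0.5 * (np.log(tau / s) + r**2 * (1.0 / tau - 1.0 / s))
    # Combine the group's messages with the prior LLR, then apply a sigmoid
    total = np.log(rho / (1.0 - rho)) + log_lr.sum()
    return 1.0 / (1.0 + np.exp(-total))
```

Large observations in the group push the estimate toward one, while near-zero observations push it below the prior activity level, matching the intended behavior of the sparsity update.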

## References

- [1] A. Checko, H. L. Christiansen, Y. Yan, L. Scolari, G. Kardaras, M. S. Berger, and L. Dittmann, “Cloud RAN for mobile networks – a technology overview,” IEEE Communications Surveys Tutorials, vol. 17, no. 1, pp. 405–426, 2015.
- [2] A. Laya, L. Alonso, and J. Alonso-Zarate, “Is the random access channel of LTE and LTE-A suitable for M2M communications? A survey of alternatives,” IEEE Communications Surveys Tutorials, vol. 16, no. 1, pp. 4–16, 2014.
- [3] A. K. Fletcher, S. Rangan, and V. K. Goyal, “On-off random access channels: A compressed sensing framework,” arXiv preprint arXiv:0903.1022, 2009.
- [4] Y. Xie, Y. Chi, L. Applebaum, and R. Calderbank, “Compressive demodulation of mutually interfering signals,” in Proc. IEEE Statistical Signal Processing Workshop (SSP), Aug 2012, pp. 592–595.
- [5] G. Wunder, P. Jung, and W. Chen, “Compressive Random Access for Post-LTE Systems,” in Proc. IEEE ICC’14 Workshop on Massive Uncoordinated Access Protocols (MASSAP), Sydney, Australia, Jun. 2014.
- [6] X. Xu, X. Rao, and V. K. N. Lau, “Active user detection and channel estimation in uplink C-RAN systems,” in Proc. IEEE International Conference on Communications (ICC), June 2015, pp. 2727–2732.
- [7] U. Dötsch, M. Doll, H.-P. Mayer, F. Schaich, J. Segel, and P. Sehier, “Quantitative analysis of split base station processing and determination of advantageous architectures for LTE,” Bell Labs Technical Journal, vol. 18, no. 1, pp. 105–128, 2013.
- [8] S. Foucart and H. Rauhut, A mathematical introduction to compressive sensing. Springer, 2013.
- [9] S. Rangan, A. Fletcher, V. Goyal, and P. Schniter, “Hybrid generalized approximate message passing with applications to structured sparsity,” in Proc. IEEE Int. Symp. Inform. Theory (ISIT), July 2012, pp. 1236–1240.
- [10] S. Rangan, A. K. Fletcher, V. K. Goyal, and P. Schniter, “Hybrid approximate message passing with applications to structured sparsity,” arXiv:1111.2581v2 [cs.IT], 2011.
- [11] S. Rangan, “Generalized approximate message passing for estimation with random linear mixing,” in Proc. IEEE Int. Symp. Inform. Theory (ISIT), July 2011, pp. 2168–2172.
- [12] U. S. Kamilov, V. K. Goyal, and S. Rangan, “Message-passing de-quantization with applications to compressed sensing,” IEEE Transactions on Signal Processing, vol. 60, no. 12, pp. 6270–6281, Dec 2012.
- [13] L. Jacques, J. N. Laska, P. T. Boufounos, and R. G. Baraniuk, “Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors,” IEEE Trans. Inform. Theory, vol. 59, no. 4, pp. 2082–2102, April 2013.
- [14] Y. Plan and R. Vershynin, “Dimension reduction by random hyperplane tessellations,” Discrete and Computational Geometry, vol. 51, no. 2, pp. 438–461, 2014. [Online]. Available: http://dx.doi.org/10.1007/s00454-013-9561-6
- [15] Y. Mroueh and L. Rosasco, “q-ary compressive sensing,” arXiv:1302.5168v1, 2013.
- [16] C. Weidmann and M. Vetterli, “Rate distortion behavior of sparse sources,” IEEE Trans. Inform. Theory, vol. 58, no. 8, pp. 4969–4992, Aug 2012.
- [17] D. Donoho, A. Maleki, and A. Montanari, “Message-passing algorithms for compressed sensing,” Proceedings of the National Academy of Science, 2009.
- [18] F. Caltagirone, L. Zdeborova, and F. Krzakala, “On convergence of approximate message passing,” in Proc. IEEE Int. Symp. Inform. Theory, June 2014, pp. 1812–1816.
- [19] M. E. Davies and Y. C. Eldar, “Rank awareness in joint sparse recovery,” IEEE Trans. Inform. Theory, vol. 58, no. 2, pp. 1135–1146, Feb 2012.
- [20] V. K. Goyal, A. K. Fletcher, and S. Rangan, “Compressive sampling and lossy compression,” IEEE Signal Processing Mag., vol. 25, no. 2, pp. 48–56, March 2008.