Random Access for Massive Machine-Type Communications
A thesis submitted to the Graduate Research School of
The University of New South Wales
in partial fulfillment of the requirements for the degree of
Doctor of Philosophy
School of Electrical Engineering and Telecommunications
Faculty of Engineering
The University of New South Wales
As a key enabler for the Internet-of-Things (IoT), machine-type communications (MTC) has emerged as an essential part of future communications. In MTC, a number of machines (called users or devices) ubiquitously communicate with a base station or among themselves with no or minimal human intervention. It is envisioned that the number of connected machines will reach tens of billions in the near future. To accommodate such massive connectivity, random access schemes are deemed a natural and efficient solution. Different from the scheduled multiple access schemes in existing cellular networks, users with transmission demands access the channel in an uncoordinated manner via random access schemes, which can substantially reduce the signalling overhead. However, this reduction in signalling overhead may sacrifice system reliability and efficiency, due to the unknown user activity and the inevitable interference from contending users. This seriously hinders the application of random access schemes in MTC. Therefore, this thesis is dedicated to studying methods to improve the efficiency of random access schemes and to facilitate their deployment in MTC.
In the first part of this thesis, we design a joint user activity identification and channel estimation scheme for grant-free random access systems. We first propose a decentralized transmission control scheme by exploiting the channel state information (CSI). With the proposed transmission control scheme, we design a compressed sensing (CS) based user activity identification and channel estimation scheme. We analyze the packet delay and throughput of the proposed scheme and optimize the transmission control scheme to maximize the system throughput.
The second part of this thesis focuses on the design and analysis of a random access scheme, namely the coded slotted ALOHA (CSA) scheme, in the presence of channel erasures, to improve the system throughput. First, we design the code probability distributions for CSA schemes with repetition codes and maximum distance separable (MDS) codes to maximize the expected traffic load, under both packet erasure channels and slot erasure channels. In particular, we derive the extrinsic information transfer (EXIT) functions of CSA schemes over the two erasure channels. By optimizing the convergence behavior of the derived EXIT functions, we obtain the code probability distributions that maximize the expected traffic load. Then, we derive the asymptotic throughput of CSA schemes over erasure channels for an infinite frame length, which is verified to well approximate the throughput of CSA schemes with a practical frame length. Numerical results demonstrate that the designed code distributions can maximize the expected traffic load and improve the throughput for CSA schemes over erasure channels.
In the third part of this thesis, we concentrate on designing efficient data decoding algorithms for the CSA scheme, to further improve the system efficiency. First, we present a low-complexity physical-layer network coding (PNC) method to obtain linear combinations of collided packets. Then, we design an enhanced message-level successive interference cancellation (SIC) algorithm to exploit the obtained linear combinations to recover more users’ packets. In addition, we propose an analytical framework for the PNC-based decoding scheme and derive a tight approximation of the system throughput for the proposed scheme. Furthermore, we optimize the CSA scheme to maximize the system throughput and energy efficiency, respectively.
This work would not have been possible without the encouragement and support from the many people I have met during my Ph.D. journey.
First and foremost, I would like to thank my supervisor Prof. Jinhong Yuan, who gave me the opportunity to pursue this degree in the first place and has always supported me throughout these years. He guided me towards good research topics, helped me formulate research problems, and generously shared his time to resolve my doubts and confusions. His rigor and conciseness shaped my style and improved my skills in presenting complex technical ideas in papers. It is my pleasure and honor to be his student.
Second, I would like to thank my co-supervisor Dr. Lei Yang, for his guidance, constructive suggestions, and help with my research. Lei is so kind that he often offered help before I even asked. His hard work and dedication also impressed me deeply.
I am also grateful to have worked with Dr. Derrick Wing Kwan Ng. Derrick is gifted and hard-working at the same time. His remarkable insight into research and his efficient working style have been invaluable.
I would also like to thank Dr. Tao Yang, who supported me in my first year of Ph.D. study. Special thanks to Dr. Yixuan Xie for helping me gain a better understanding of coding theory. Many thanks to Prof. Marco Chiani at the University of Bologna, Italy, for his valuable contributions to my second research work.
I would also like to thank all my friends and colleagues at the Wireless Communication Lab at UNSW, for many discussions we had leading to a better understanding of wireless communication theories, and the great pleasure we have working together.
Finally, I would like to give my deepest appreciation to my beloved parents for everything they have done for me. I cannot thank my husband enough for his understanding and unconditional support. I would also like to express special thanks to my daughter, who brings me so many happy moments. None of this would have happened without them. I dedicate this thesis to my parents, my husband, and my daughter.
List of Publications
Z. Sun, Y. Xie, J. Yuan and T. Yang, “Coded Slotted ALOHA for Erasure Channels: Design and Throughput Analysis,” IEEE Trans. Commun., vol. 65, no. 11, pp. 4817-4830, Nov. 2017.
Z. Sun, L. Yang, J. Yuan, and D. W. K. Ng, “Physical-Layer Network Coding Based Decoding Scheme for Random Access,” IEEE Trans. Veh. Technol., vol. 68, no. 4, pp. 3550-3564, Apr. 2019.
Z. Sun, Z. Wei, L. Yang, J. Yuan, X. Cheng, and W. Lei, “Exploiting Transmission Control for Joint User Identification and Channel Estimation in Massive Connectivity,” IEEE Trans. Commun., accepted, May 2019.
Z. Sun, Y. Xie, J. Yuan, and T. Yang, “Coded slotted ALOHA schemes for erasure channels,” in Proc. IEEE Int. Conf. Commun. (ICC), Kuala Lumpur, May 2016, pp. 1-6.
Z. Sun, L. Yang, J. Yuan, and M. Chiani, “A novel detection algorithm for random multiple access based on physical-layer network coding,” in Proc. IEEE Int. Conf. Commun. (ICC) Workshops, Kuala Lumpur, May 2016, pp. 608-613.
Z. Sun, Z. Wei, L. Yang, J. Yuan, X. Cheng, and L. Wan, “Joint User Identification and Channel Estimation in Massive Connectivity with Transmission Control,” in Proc. IEEE Int. Symp. Turbo Codes Iterative Inf. Process. (ISTC), Hong Kong, Dec. 2018, pp. 1-5.
|AMP||Approximate message passing|
|APP||A posteriori probability|
|AWGN||Additive white Gaussian noise|
|BEC||Binary erasure channel|
|BER||Bit error rate|
|BPDN||Basis pursuit denoising|
|BPSK||Binary phase-shift keying|
|CDF||Cumulative distribution function|
|CRDSA||Contention resolution diversity slotted ALOHA|
|CSA||Coded slotted ALOHA|
|CSI||Channel state information|
|CSMA||Carrier-sense multiple access|
|DMC||Discrete memoryless channel|
|ECC||Error control coding|
|EXIT||Extrinsic information transfer|
|FDMA||Frequency division multiple access|
|IHT||Iterative hard thresholding|
|i.i.d.||Independent and identically distributed|
|IRSA||Irregular repetition slotted ALOHA|
|IST||Iterative soft thresholding|
|JUICE||Joint user identification and channel estimation|
|LASSO||Least absolute shrinkage and selection operator|
|MAC||Media access control|
|MDS||Maximum distance separable|
|MIMO||Multiple-input multiple-output|
|MMSE||Minimum mean square error|
|MSE||Mean squared error|
|mMTC||Massive machine-type communications|
|NMSE||Normalized mean square error|
|NOMA||Non-orthogonal multiple access|
|OMA||Orthogonal multiple access|
|OMP||Orthogonal matching pursuit|
|PAM||Pulse amplitude modulation|
|PDF||Probability density function|
|PEC||Packet erasure channel|
|PLR||Packet loss rate|
|PNC||Physical-layer network coding|
|QPSK||Quadrature phase-shift keying|
|SBL||Sparse Bayesian learning|
|SIC||Successive interference cancellation|
|TDD||Time division duplex|
List of Notations
Lowercase letters denote scalars, boldface lowercase letters denote vectors, and boldface uppercase letters denote matrices.
|A column vector|
|Conjugate transpose of|
|The element in the row and the column of|
|Absolute value (modulus) of the scalar|
|The set of all matrices with real-valued entries|
|The set of all matrices with complex-valued entries|
|-norm of a vector or a matrix,|
|A zero matrix|
|Real part of a complex number|
|Imaginary part of a complex number|
|Statistical expectation with respect to random variable|
|A complex Gaussian random variable with mean and variance|
|The computation complexity is order operations|
|The sign of|
|A function equaling if and otherwise equaling zero|
|Multiplication in Galois field|
|Rank of a matrix|
|Modulo operation of and|
- List of Publications
- List of Notations
- 1.1 Introduction and Challenges
- 1.2 Literature Review
- 1.3 Thesis Outline and Contributions
- 2.1 Bayes’ Theorem and Compressed Sensing
- 2.2 Modern Coding Techniques
- 2.3 Physical-layer Network Coding
- 2.4 Introduction to Wireless Communication
- 2.5 Chapter Summary
3 Joint User Activity Identification and Channel Estimation in Random Access Systems for mMTC
- 3.1 Introduction
- 3.2 System Model
- 3.3 The Proposed Transmission Control Scheme
- 3.4 AMP based User Identification and Channel Estimation
- 3.5 Performance Analysis and Optimal Control Threshold Design
- 3.6 Numerical Results
- 3.7 Chapter Summary
- 3.8 Appendix
4 Design and Analysis of Coded Slotted ALOHA for mMTC under Erasure Channels
- 4.1 Introduction
- 4.2 System Model and Problem Formulation
- 4.3 Bipartite Graph and EXIT Chart Analysis
- 4.4 Design of Coded Slotted ALOHA for Erasure Channels
- 4.5 Performance Analysis of CSA Schemes Over Erasure Channels
- 4.6 Numerical Results
- 4.7 Chapter Summary
- 4.8 Appendix
5 Physical-layer Network Coding based Decoding Scheme in Random Access Systems for mMTC
- 5.1 Introduction
- 5.2 System Model and Problem Formulation
- 5.3 Physical-layer Network Coding based Decoding Scheme
- 5.4 Performance Analysis
- 5.5 Numerical Results
- 5.6 Chapter Summary
6 Conclusions and Future Research Topics
List of Figures
- 2.1 Convolutional encoder with rate .
- 2.2 The Tanner graph.
- 2.3 VN and CN decoders.
- 2.4 A PNC strategy for two-way relay channels.
- 2.5 Binary erasure channel.
- 2.6 Multiple access system with users and a common receiver.
- 2.7 ALOHA scheme with three transmitters.
- 2.8 Slotted ALOHA scheme with three transmitters.
- 3.1 The proposed transmission control scheme for user , .
- 3.2 Comparison of the proposed denoiser, the conventional MMSE denoiser, and the soft denoiser, where , dB, , , and . The soft denoiser is given by , where is the indicator function.
- 3.3 Simulated state evolution for conventional MMSE denoiser and proposed MMSE denoiser with , , , , and dB.
- 3.4 Optimal control threshold for and dB.
- 3.5 Simulated and analytical missed detection probability for and .
- 3.6 Comparison of NMSE for the conventional scheme without transmission control and the proposed scheme with , , and .
- 3.7 Simulated and analytical CDFs of packet delay for the proposed scheme and the conventional scheme with , , , , and .
- 3.8 Simulated and analytical average packet delay for , , and .
- 3.9 Simulated and analytical network throughput for and .
- 4.1 CSA system model with erasure channels.
- 4.2 EXIT chart for repetition codes with distribution , packet erasure probability , and .
- 4.3 EXIT chart for MDS codes with distribution , packet erasure probability , and .
- 4.4 EXIT chart for repetition codes with distribution , short erasure probability , and .
- 4.5 EXIT chart for MDS codes with distribution , short erasure probability , and .
- 4.6 Expected traffic load versus packet erasure rate for MDS codes with and different .
- 4.7 Expected traffic load versus packet erasure rate for MDS codes with and different .
- 4.8 Expected traffic load versus packet erasure rate for MDS codes with , , and different .
- 4.9 Throughput versus traffic load for , , , and packet erasure rate .
- 4.10 Packet loss rate versus traffic load for , , and packet erasure rate .
- 4.11 Simulated and asymptotic for packet erasure channels with various .
- 4.12 Throughput versus traffic load for , , , and slot erasure rate .
- 4.13 Simulated and asymptotic for erasure channels with various .
- 5.1 A graph representation of the enhanced message-level SIC algorithm.
- 5.2 Throughput of the enhanced message-level SIC algorithm by canceling the NCNs with different degrees for .
- 5.3 An example of the proposed decoding algorithm with users and time slots.
- 5.4 CDF of for and dB.
- 5.5 PDF of for and dB.
- 5.6 CDF of for and dB.
- 5.7 PDF of for and dB.
- 5.8 The simulated and analytical number of obtained NC messages per time slot for users.
- 5.9 The simulated and analytical throughput for SNR dB, , , and .
- 5.10 Optimal number of replicas versus offered traffic load for SNR dB, , and .
- 5.11 The comparison of the number of obtained NC messages per time slot for users.
- 5.12 Throughput versus offered traffic load for SNR dB, , and .
- 5.13 Simulated energy efficiency for SNR dB and different .
List of Tables
- List of Notations
- 4.1 The UN Degree Distributions For Repetition Codes With Packet Erasure , , , .
- 4.2 The UN Degree Distributions For MDS Codes With Packet Erasure , .
- 4.3 The UN Degree Distributions For Repetition Codes With Slot Erasure , , , .
- 4.4 The UN Degree Distributions for MDS Codes with Slot Erasure , .
- 4.5 The UN Degree Distributions For Repetition Codes With Packet Erasure , , , .
- 4.6 The UN Degree Distributions For MDS Codes With Packet Erasure , , , , .
- 4.7 The UN Degree Distributions For Repetition Codes With Slot Erasure , , , .
- 4.8 The UN Degree Distributions For MDS Codes With Slot Erasure , , , , .
Chapter 1 Overview
1.1 Introduction and Challenges
Driven by the proliferation of new applications in the paradigm of the Internet-of-Things (IoT), e.g., smart home, autonomous driving, and smart industry, machine-type communications (MTC) has emerged as an essential part of future communications [1, 2, 3]. In MTC, a number of machine-type devices or users need to communicate with a base station (BS) or among themselves with no or minimal human intervention [4, 5, 6, 7, 8, 9, 10, 11], which will undoubtedly improve our quality of life and bring great business opportunities.
One important type of MTC is massive MTC (mMTC), where a massive number of users sporadically transmit short packets to the BS. Compared to mature human-centric communications, mMTC possesses many distinctive features [12, 1, 13, 14], including the massive connectivity requirement, the sporadic traffic pattern, and the small size of transmitted packets. These features render existing protocols designed for human-centric communications significantly inefficient and call for radical changes in the communication protocol. For example, in human-centric cellular systems, the commonly adopted user access approach is the grant-based communication protocol [15, 16, 17]. In particular, each user with an access demand selects and transmits a pilot sequence to request an access grant from the BS, prior to the data transmission. Once a grant is issued, the BS allocates resource blocks to the granted users for their subsequent data transmission. Due to the small size of transmitted packets in mMTC, such a signalling overhead from requesting access makes grant-based protocols very inefficient. In addition, as the users select pilot sequences without coordination, two or more users may select the same sequence simultaneously and thus a collision occurs. In this case, the collided users cannot obtain the access grant and must reattempt to send the grant request after waiting for a random duration. As the number of users increases, the collision probability becomes high and a large number of users need to request the access grant two or more times. This results in an intolerable access delay for the massive connectivity system [18, 19, 20, 21]. Therefore, designing efficient user access approaches is highly desirable for mMTC.
A new communication protocol, called the grant-free random access scheme, was proposed and has achieved industrial and academic consensus on its applicability to mMTC [18, 22, 19, 23, 24]. In contrast to grant-based schemes, each user directly transmits its pilot and payload data in one shot, once it has a transmission demand. This implies that no access requesting procedure is required in grant-free random access schemes. As a result, the signalling overhead from requesting access is eliminated and the access latency is significantly reduced, which stimulates the application of grant-free random access schemes to mMTC.
While grant-free communication has attractive features, some key issues remain to be addressed. Firstly, without the access grant procedure, the BS needs to identify the user activity from received pilot sequences. Unfortunately, due to the massive number of users in the system and the limited channel coherence time, it is impossible to allocate orthogonal pilot sequences to all users [19, 25]. Hence, the user activity cannot be simply identified by exploiting the orthogonality among pilot sequences, which imposes a challenge for mMTC. In addition, the use of non-orthogonal pilots causes severe inter-user interference for channel estimation, and conventional pilot-based channel estimation techniques [26, 27] are not applicable to mMTC when non-orthogonal pilots are invoked. Therefore, accurate channel estimation is another difficulty for mMTC.
Furthermore, in the grant-free random access system, users transmit their payload data in an uncoordinated way, which results in inevitable data collisions and dramatically deteriorates the system efficiency. In conventional random access systems, e.g., ALOHA, the collided data packets are directly discarded and the corresponding users keep retransmitting until their packets are successfully recovered by the receiver. However, the ultra-high connectivity density of mMTC significantly increases the collision probability, which causes frequent retransmissions and hence an intolerable delay. Therefore, instead of disregarding the collided packets, it is desirable to wisely exploit the packet collisions to retrieve more packets and to improve the system efficiency in mMTC [29, 30, 31, 32, 33, 34, 35]. Unfortunately, the tremendous number of users leads to large collision sizes, which makes efficient collision resolution and data decoding very challenging.
This thesis is devoted to tackling the aforementioned challenges for mMTC in future wireless networks, including the user activity identification, the channel estimation, and the design of efficient data detection schemes.
In the first part of this thesis, we focus on the design of a joint user activity identification and channel estimation scheme for the grant-free random access system. In particular, we propose a decentralized transmission control scheme by exploiting the channel state information (CSI) at the user side. By characterizing the impact of the proposed transmission control scheme on the distribution of received signals, we design a compressed sensing (CS) based joint user activity identification and channel estimation scheme. We analyze the user activity identification performance via a state evolution technique. Additionally, the system performance in terms of the packet delay and the network throughput is analyzed for the proposed scheme. Based on the analysis, we optimize the introduced transmission control strategy to maximize the network throughput.
In the second part of this thesis, the focus is on designing the transmission scheme for the random access system, particularly the coded slotted ALOHA (CSA) scheme, in order to facilitate collision resolution and data decoding in mMTC. In particular, we design the code probability distributions for CSA schemes with repetition codes and maximum distance separable (MDS) codes over packet erasure channels and slot erasure channels. To characterize the impact of channel erasures on the design of code probability distributions, we first derive the extrinsic information transfer (EXIT) functions for the CSA schemes over the two erasure channels, respectively. By optimizing the convergence behavior of the derived EXIT functions, we optimize the code probability distributions to achieve the maximum expected traffic load. Finally, we derive the asymptotic throughput of the CSA scheme over erasure channels by considering an infinite frame length, to theoretically evaluate the system performance of CSA schemes with the designed code probability distributions.
The third part of this thesis is mainly dedicated to proposing an efficient and low-complexity data decoding scheme to further improve the system efficiency of CSA systems. We first propose a low-complexity physical-layer network coding (PNC) based data decoding scheme to obtain linear combinations of collided packets. Then, we design an enhanced message-level successive interference cancellation (SIC) algorithm to wisely exploit the obtained linear combinations and to improve the system throughput. Moreover, we propose an analytical framework for the PNC-based decoding scheme in the CSA system and derive an accurate approximation for the system throughput of the proposed scheme. By employing the proposed data decoding scheme, we optimize the transmission scheme to further improve the system throughput and energy efficiency of the CSA system, respectively.
1.2 Literature Review
In this section, we provide an extensive review of existing works that deal with the challenging issues of mMTC mentioned in the last section.
1.2.1 User Identification and Channel Estimation
In the grant-free random access system, the employment of non-orthogonal pilot sequences among users makes accurate user identification very difficult [37, 38]. Fortunately, the sporadic transmission of mMTC, i.e., the fact that the number of active users at any given time is much smaller than the number of potential users, makes it possible to deal with this difficulty by exploiting compressed sensing (CS) techniques [39, 40, 36]. In particular, if all users’ signals are collected as a signal vector and the inactive users’ signals are treated as zeros, identifying the user activity is equivalent to detecting the support set of the received signal vector. Due to the sporadic transmission, the received signal vector is sparse and its support set can be detected by adopting CS techniques. In the literature, CS techniques have been intensively exploited to identify active users in grant-free random access systems. In [41, 42, 43, 44, 45], CS algorithms were employed to jointly identify users’ activity and detect their data by assuming perfect users’ CSI at the receiver, which demonstrated the great potential of CS techniques for sparse user activity identification. However, perfect receiver-side CSI is an impractical assumption in mMTC. In practice, the users’ CSI has to be estimated by the receiver.
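To make the support-detection viewpoint concrete, the following minimal Python/NumPy sketch recovers a sparse activity pattern with orthogonal matching pursuit (OMP), one of the greedy CS algorithms discussed in this section. All dimensions, pilot sequences, and channel gains here are illustrative assumptions rather than the system model of this thesis.

```python
import numpy as np

def omp_support(Phi, y, k):
    """Orthogonal matching pursuit: greedily estimate the support of a
    k-sparse vector x from the superimposed measurements y = Phi @ x."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Choose the pilot column most correlated with the current residual.
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(idx)
        # Least-squares re-fit on the chosen columns, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    return set(support)

rng = np.random.default_rng(0)
N, M, K = 100, 40, 3                              # potential users, pilot length, active users
Phi = rng.standard_normal((M, N)) / np.sqrt(M)    # non-orthogonal pilot sequences
x = np.zeros(N)
active = rng.choice(N, size=K, replace=False)     # sporadic activity: K-sparse vector
x[active] = [2.0, -1.5, 3.0]                      # illustrative channel gains
y = Phi @ x                                       # noiseless received superposition
recovered = omp_support(Phi, y, K)
print(recovered == set(active))
```

With far fewer measurements than potential users (M = 40 versus N = 100), the sparsity of the activity vector is what makes the identification possible at all.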
In conjunction with the user activity identification, channel estimation for grant-free random access systems has been studied in [46, 47, 48, 49, 50, 51, 52, 53]. In , the author derived an upper bound on the overall transmission rate for the multiple access channel with massive connectivity and presented a practical two-phase scheme to approach the upper bound. In this proposed scheme, user identification and channel estimation are jointly performed by using CS techniques in the first phase. Data detection is executed in the second phase by using conventional multiuser detection (MUD) techniques. Note that the data detection performance relies heavily on the accuracy of the user identification and channel estimation in the first phase. Therefore, improving the joint user identification and channel estimation (JUICE) performance is essential to increase the efficiency of grant-free random access systems. To improve the JUICE performance, several algorithms were proposed in [47, 48, 49, 50, 51]. In , the authors designed a greedy CS algorithm based on orthogonal matching pursuit. By exploiting the statistical CSI, the Bayesian CS method was modified and applied to JUICE in cloud radio access networks . The authors in  introduced a one-shot random access procedure and analyzed the achievable rate by using standard basis pursuit denoising. The computationally efficient approximate message passing (AMP) algorithm was employed to identify the user activity and to estimate channels in [50, 51]. The paper  demonstrated the great benefits of the AMP algorithm combined with the massive multiple-input multiple-output (MIMO) technique for enhancing the JUICE performance in mMTC. In , a novel transmission scheme for MTC was introduced, where the information bits are embedded into the pilot sequences. As a result, data detection can be achieved while performing the JUICE.
The CS algorithms developed in [47, 48, 49, 50, 51, 52, 53] demonstrated that the JUICE problem can be effectively solved by CS methods. Note that both the sparsity of user activity and the strength of received signals affect the performance of CS algorithms . These two factors can be controlled by designing novel transmission schemes. However, most works in the literature focus on the design of CS algorithms at the receiver side to improve the system performance of mMTC. Designing transmission schemes at the user side for CS-based user activity identification and channel estimation has not been addressed, yet it could potentially provide a significant improvement in the system performance of mMTC.
1.2.2 ALOHA based Random Access Schemes
In the classical ALOHA random access schemes [28, 55, 56], the collided packets are directly abandoned and the corresponding users keep retransmitting until their packets are successfully recovered. It is obvious that such an access mechanism seriously limits the system performance and results in an intolerable delay, particularly for the massive connectivity of mMTC. Therefore, collision resolution mechanisms are highly desirable to enhance the system efficiency.
From a collision resolution point of view, several variants of ALOHA have been proposed over the last few years. Among them, the contention resolution diversity slotted ALOHA (CRDSA) scheme introduces the SIC technique to resolve packet collisions. In this scheme, each packet is transmitted twice in two random slots within one frame, and each replica carries a pointer to the location of its twin copy. When one copy is received in a collision-free slot and recovered successfully, the pointer is extracted and the interference generated by its twin replica can be removed from the corresponding slot. The process of recovering packets is performed iteratively, until no more collision-free slots exist or all packets are recovered successfully. This iterative process results in an improved throughput, compared to the ALOHA and slotted ALOHA (SA) schemes.
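As an illustration of this iterative replica cancellation, here is a toy Python simulation of the peeling process. The frame size, user count, and random seed are illustrative assumptions, not parameters from this thesis.

```python
import random

def sic_decode(num_slots, placements):
    """Iterative SIC: repeatedly recover users from collision-free slots
    and cancel their replicas from the other slots they occupy."""
    slots = [set() for _ in range(num_slots)]
    for user, reps in enumerate(placements):
        for s in reps:
            slots[s].add(user)
    recovered = set()
    progress = True
    while progress:
        progress = False
        for s in range(num_slots):
            if len(slots[s]) == 1:              # collision-free slot found
                user = next(iter(slots[s]))
                recovered.add(user)
                for t in placements[user]:      # pointer to the replica slots
                    slots[t].discard(user)      # interference cancellation
                progress = True
    return recovered

random.seed(1)
S, U = 100, 40                                  # slots per frame, active users
placements = [random.sample(range(S), 2) for _ in range(U)]  # two replicas each
rec = sic_decode(S, placements)
print(f"{len(rec)} of {U} users recovered")
```

At this moderate load (0.4 packets per slot) the peeling process typically recovers most users; as the load grows past the scheme's traffic load threshold, the iteration stalls with many unresolved collisions.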
Recently, the CRDSA scheme has been further enhanced by transmitting a variable number of replicas of a packet in one frame, named irregular repetition slotted ALOHA (IRSA). For the IRSA scheme, the SIC process of the CRDSA scheme is represented by a bipartite graph, and the threshold behavior of the iterative packet recovery process can be analyzed. In particular, for a given IRSA scheme, there exists a traffic load threshold, which is the largest traffic load such that all but a vanishingly small fraction of users’ packets can be recovered successfully for large frame sizes. Moreover, compared to the CRDSA scheme, a higher throughput is achieved by designing a repetition code probability distribution in the IRSA scheme. Based on that, the CSA scheme was proposed as a further generalization of the IRSA scheme. Before the transmission, the packet from each user is partitioned and encoded into multiple packets via local packet-oriented codes at the media access control (MAC) layer. At the receiver side, the SIC process is combined with the decoding of packet-oriented codes to recover collided packets. Compared to the IRSA scheme, the CSA scheme achieves a much higher peak throughput for medium code rates. Most existing CSA designs in the literature [57, 32] are based on the assumption of collision channels without erasures or noise, where only the effect of collisions is considered for transmitted packets. This assumption is impractical, since in practice both channel fading and external interference exist and can corrupt the transmitted packets. Therefore, it is crucial to design the CSA scheme over more practical channels. For packet erasure channels, the error floor of the packet loss rate was analyzed for the IRSA scheme with finite frame lengths in [59, 60].
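For intuition, the threshold behavior mentioned above is commonly analyzed through a density-evolution recursion on the bipartite graph. A standard form for IRSA over the erasure-free collision channel, written here in generic notation ($G$ for the offered load and $\Lambda(x)=\sum_d \Lambda_d x^d$ for the repetition-degree distribution, which may differ from the symbols used later in this thesis), is

```latex
q_i = \lambda(p_{i-1}) = \sum_{d} \lambda_d \, p_{i-1}^{\,d-1},
\qquad
p_i = 1 - \exp\!\bigl(-G \,\Lambda'(1)\, q_i\bigr), \qquad p_0 = 1,
```

where $\lambda_d = d\Lambda_d/\Lambda'(1)$ is the edge-perspective degree distribution and $q_i$ (resp. $p_i$) is the probability that an edge remains unresolved at the user (resp. slot) side in iteration $i$. The traffic load threshold $G^\ast$ is the largest load for which $q_i \to 0$. The EXIT analysis in Chapter 4 extends this type of recursion to packet and slot erasure channels.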
However, the design of CSA schemes over other practical channels, such as fading channels and slot erasure channels, to improve the throughput remains an open yet important research problem.
1.2.3 Data Detection for Random Access
For ALOHA-based random access schemes, a signal-level SIC in each time slot is adopted to exploit the capture effect in [61, 62, 63, 64, 65, 66], so that more users’ packets are recovered from the collisions. Here, the capture effect means that a packet with higher power can be successfully recovered from the received superposition of multiple packets, when the received powers of the different packets are imbalanced. This scheme can efficiently improve the network throughput when the power imbalance is significant. In particular, the paper  adopted the signal-level SIC technique to resolve collision slots in ALOHA, and  proposed a channel-aware SA scheme by designing the transmission control. By employing the signal-level SIC at the receiver, a decentralized random power transmission strategy was proposed to maximize the system throughput in , and this strategy was further extended to multiple time slots in . Moreover, a modified message passing algorithm was exploited to design the cross-layer random access scheme in [65, 66].
In , the authors proposed to obtain multiple linear combinations of collided packets in each collided time slot through an exhaustive decoding of all possible linear combinations, where the exhaustive decoding is achieved by employing PNC techniques [68, 69, 70]. The users’ packets are then recovered from all the decoded linear combinations in a MAC frame via matrix manipulations. In [71, 72], the joint utilization of MUD and PNC decoding was proposed to decode individual native packets and network-coded packets in each time slot. The decoded packets in all time slots are then exploited by a MAC-layer bridging and decoding scheme to recover users’ packets. Although the schemes in  and  provide excellent throughput performance, they suffer from a very high decoding complexity, which is unfavorable for applications that require simple machine-type devices. Therefore, low-complexity yet efficient data detection schemes need to be developed for random access systems in mMTC.
1.3 Thesis Outline and Contributions
Chapter 1 presents the motivation of this thesis. Chapter 2 provides an overview of some basic concepts that will be used extensively in this thesis. Chapters 3-5 present the novel research results of this thesis on random access for mMTC, which are detailed in the following subsections. In Chapter 6, the conclusion and future research topics are presented.
1.3.1 Contributions of Chapter 3
The work on the user activity identification and channel estimation for grant-free communications in mMTC is presented in Chapter 3. As discussed in Section 1.2.1, most works in the literature [47, 48, 49, 50, 51, 52, 53] focused on the design of CS algorithms for the user activity identification and channel estimation at the receiver side. In fact, the design of transmission schemes can adjust the sparsity of the received signal vector and the strength of the received signals to achieve an enhanced performance for mMTC. Therefore, this thesis studies how to design the transmission scheme to improve the performance of CS algorithms and the system performance. In particular, we propose a transmission control scheme for the AMP based joint user identification and channel estimation in massive connectivity networks. In the proposed transmission control scheme, a transmission control function is designed to determine a user’s transmission probability when it has a transmission demand. By employing a step transmission control function for the proposed scheme, we derive the channel distribution experienced by the receiver to describe the effect of transmission control on the design of the AMP algorithm. Based on that, we modify the AMP algorithm by designing a minimum mean squared error (MMSE) denoiser, to jointly identify the active users and estimate their channels. We further derive the false alarm and missed detection probabilities to characterize the user identification performance of the proposed scheme. Closed-form expressions of the average packet delay and the network throughput are obtained. Furthermore, we optimize the transmission control function to maximize the network throughput. We demonstrate that the proposed scheme can significantly improve the user identification and channel estimation performance, reduce the packet delay, and boost the throughput, compared to the conventional scheme without transmission control.
These results have been published in one conference paper and one journal paper.
Z. Sun, Z. Wei, L. Yang, J. Yuan, X. Cheng, W. Lei, “Joint User Identification and Channel Estimation in Massive Connectivity with Transmission Control,” in Proc. IEEE Intern. Symposium on Turbo Codes and Iterative Inform. Processing (ISTC), Hong Kong, Dec. 2018, pp. 1-5.
Z. Sun, Z. Wei, L. Yang, J. Yuan, X. Cheng, W. Lei, “Exploiting Transmission Control for Joint User Identification and Channel Estimation in Massive Connectivity,” IEEE Trans. Commun., accepted, May 2019.
1.3.2 Contributions of Chapter 4
The work on the design of transmission schemes for the CSA system over erasure channels is presented in Chapter 4. Most existing works in the literature [34, 30, 57] focused on designing CSA schemes for the ideal collision channel without channel fading or noise, in order to improve the system throughput. To capture the effects of practical channels, this thesis proposes a new transmission design for the CSA scheme over erasure channels. In particular, we consider both packet erasure channels and slot erasure channels. We first design the code probability distributions for CSA schemes with repetition codes and MDS codes to maximize the expected traffic load. We then derive the extrinsic information transfer (EXIT) functions of CSA schemes over erasure channels. By optimizing the convergence behavior of the derived EXIT functions, the code probability distributions that achieve the maximum expected traffic load are obtained. Then, we derive the asymptotic throughput of CSA schemes over erasure channels. In addition, we validate that the asymptotic throughput gives a good approximation to the throughput of CSA schemes over erasure channels.
These results have been published in one conference paper and one journal paper.
Z. Sun, Y. Xie, J. Yuan and T. Yang, “Coded slotted ALOHA schemes for erasure channels,” in Proc. IEEE Intern. Commun. Conf. (ICC), Kuala Lumpur, May 2016, pp. 1-6.
Z. Sun, Y. Xie, J. Yuan and T. Yang, “Coded Slotted ALOHA for Erasure Channels: Design and Throughput Analysis,” in IEEE Trans. on Commun., vol. 65, no. 11, pp. 4817-4830, Nov. 2017.
1.3.3 Contributions of Chapter 5
The work on the design of the data decoding algorithm for the CSA system is presented in Chapter 5. As discussed in Section 1.2.3, an efficient data decoding algorithm is essential to enhance the system throughput of random access systems. By exploiting the characteristics of packet transmission in the CSA scheme, this thesis proposes an effective and low-complexity data decoding algorithm for the CSA. In particular, we first propose an enhanced low-complexity binary PNC-based decoding scheme for random access systems with binary phase-shift keying (BPSK) modulation to improve the system throughput. In the proposed scheme, the linear combinations of users’ packets in each time slot are first obtained by exploiting a low-complexity PNC decoding scheme. Based on the decoded linear combinations within a MAC frame, we then propose an enhanced message-level SIC algorithm to recover more users’ packets. An analytical framework for the PNC-based decoding scheme is proposed and a tight approximation of the system throughput is derived for the proposed scheme. Subsequently, we optimize the transmission schemes of CSA systems, i.e., the number of replicas transmitted by each user, to further improve the system throughput and energy efficiency, respectively. Interestingly, the optimization results show that the optimal number of replicas for maximizing the energy efficiency is a constant for all offered loads. On the other hand, the optimal number of replicas that maximizes the system throughput decreases as the offered load increases. Numerical results show that the derived analytical results closely match the simulation results. Furthermore, the proposed scheme achieves a considerable throughput improvement, compared to the CRDSA scheme with more than two replicas.
These results have been published in one conference paper and one journal paper.
Z. Sun, L. Yang, J. Yuan and M. Chiani, “A novel detection algorithm for random multiple access based on physical-layer network coding,” in Proc. IEEE Intern. Commun. Conf. (ICC) Workshops, Kuala Lumpur, May 2016, pp. 608-613.
Z. Sun, L. Yang, J. Yuan and D. W. K. Ng, “Physical-Layer Network Coding Based Decoding Scheme for Random Access,” in IEEE Trans. Veh. Technol., vol. 68, no. 4, pp. 3550-3564, Apr. 2019.
Chapter 2 Background
This chapter presents some essential background knowledge required to understand the materials presented in the subsequent chapters. In particular, the first three sections present the basic knowledge for Chapter 3, Chapter 4, and Chapter 5, respectively, which include Bayes’ theorem and compressed sensing techniques, modern coding techniques, and physical-layer network coding (PNC). The fourth section introduces some fundamentals of wireless communications.
2.1 Bayes’ Theorem and Compressed Sensing
In this section, we present Bayes’ theorem and an overview of classical compressed sensing algorithms, which provide the basis for understanding the proposed joint user identification and channel estimation scheme in Chapter 3.
2.1.1 Bayes’ Theorem
Bayes’ theorem provides a mathematical framework for performing inference and reasoning from a probability point of view . In particular, Bayes’ theorem describes the posterior probability of an event, based on the prior knowledge that is related to the event . Mathematically, Bayes’ theorem can be stated as
$$P(A|B) = \frac{P(B|A)P(A)}{P(B)},$$
where $A$ and $B$ are events and $P(B) \neq 0$. The term $P(A|B)$ is the likelihood of event $A$ occurring given that $B$ is true, which is called the posterior probability of event $A$. The term $P(B|A)$ is the likelihood of event $B$ occurring given that $A$ is true, and $P(A)$ and $P(B)$ are the probabilities of observing $A$ and $B$ independently of each other, respectively. Here, $P(A)$ usually refers to the prior probability of event $A$, which reflects the original knowledge of $A$ before having any knowledge of $B$, and $P(B)$ can be regarded as a normalizing parameter, due to its independence of $A$. These four basic terms constitute Bayes’ theorem, which will be exploited for the design of CS algorithms.
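As a quick numerical illustration of the four terms above, the following Python snippet computes a posterior from a prior and two likelihoods; the probabilities are hypothetical values chosen purely for illustration:

```python
# Numerical illustration of Bayes' theorem with hypothetical numbers:
# a test detects a condition with likelihood P(B|A) = 0.99,
# false-positive rate P(B|not A) = 0.05, and prior P(A) = 0.01.
p_b_given_a = 0.99
p_b_given_not_a = 0.05
p_a = 0.01

# Total probability of observing B: P(B) = P(B|A)P(A) + P(B|~A)P(~A),
# which acts as the normalizing parameter.
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Posterior P(A|B) = P(B|A) P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b
print(p_a_given_b)  # 1/6 ~ 0.1667: a positive test still leaves P(A|B) small
```

Note how a strong likelihood combined with a small prior still yields a modest posterior, which is exactly the interplay of the four terms above.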
2.1.2 Compressed Sensing Algorithms
Compressed sensing (CS, also known as compressive sensing or compressive sampling) is an extensively developed signal processing technique for efficiently reconstructing a signal from under-sampled measurements [40, 36, 75]. For conventional signal reconstruction, the Nyquist sampling theorem provides a lower bound on the sampling rate to completely recover a signal . On the other hand, in a large number of practical applications, e.g. imaging and video processing, the reconstructed signals are sparse, and CS is able to provide accurate recovery of high-dimensional signals from a much smaller number of sampling measurements. It implies that exploiting the sparsity of signals can indeed reduce the number of sampling measurements and still ensure an exact recovery of a signal. In view of this, CS algorithms, in particular efficient sparse signal reconstruction algorithms, have drawn much attention from the research community [36, 77].
The sparse signal reconstruction can be expressed as the recovery of a sparse signal from linear combinations of its elements, given by
$$\mathbf{y} = \mathbf{A}\mathbf{x},$$
where $\mathbf{y} \in \mathbb{R}^{M}$ is the measurement vector, $\mathbf{x} \in \mathbb{R}^{N}$ is the signal vector, and $\mathbf{A} \in \mathbb{R}^{M \times N}$ is the measurement matrix. When the signal vector $\mathbf{x}$ has $k$ or fewer non-zero elements such that $\|\mathbf{x}\|_0 \leq k$, the vector $\mathbf{x}$ is said to be $k$-sparse.
One of the theoretically best approaches to recover such a signal vector $\mathbf{x}$ from the measurement vector $\mathbf{y}$ is to solve the $\ell_0$-minimization problem :
$$\min_{\mathbf{x}} \|\mathbf{x}\|_0 \quad \text{subject to} \quad \mathbf{y} = \mathbf{A}\mathbf{x}.$$
Here, $\|\cdot\|_0$ is the $\ell_0$-pseudo norm of a vector, which counts the number of non-zero elements in the input vector. The sufficient and necessary condition for the existence of a unique $k$-sparse solution of Eq. (2.1.2) is that any $2k$ columns of the measurement matrix $\mathbf{A}$ are linearly independent. A simple proof is as follows. Assume that $\mathbf{x}_1$ and $\mathbf{x}_2$ are both $k$-sparse solutions of Eq. (2.1.2). Due to $\mathbf{y} = \mathbf{A}\mathbf{x}_1$ and $\mathbf{y} = \mathbf{A}\mathbf{x}_2$, we have $\mathbf{A}(\mathbf{x}_1 - \mathbf{x}_2) = \mathbf{0}$. Since $\mathbf{x}_1 - \mathbf{x}_2$ is at most $2k$-sparse and any $2k$ columns of $\mathbf{A}$ are linearly independent, we obtain $\mathbf{x}_1 = \mathbf{x}_2$. In fact, the $\ell_0$-minimization problem is generally intractable. In particular, this problem has been proved to be an NP-complete problem , and finding its optimal solution relies on a combinatorial search.
Fortunately, several numerically feasible suboptimal alternatives [36, 81, 82, 83, 84, 85, 75] to this NP-complete problem have been developed as the pioneering work on CS algorithms in the past few years. Among them, the cornerstone algorithms include $\ell_1$-norm minimization (also called the Basis Pursuit (BP) algorithm) [36, 81, 82], greedy algorithms , statistical sparse recovery techniques , and iterative algorithms [85, 75].
Starting from the $\ell_1$-norm minimization, we will briefly introduce all the cornerstone algorithms in the following. Note that the presentations of all algorithms are based on the model in Eq. (2.1.2).
The $\ell_1$-norm minimization (BP) was proposed in [36, 81, 82], which relaxes the $\ell_0$-minimization problem. In particular, the non-convex $\ell_0$-norm is replaced by a convex $\ell_1$-norm and the problem can be reformulated as
$$\min_{\mathbf{x}} \|\mathbf{x}\|_1 \quad \text{subject to} \quad \mathbf{y} = \mathbf{A}\mathbf{x}.$$
By using standard linear programming  to solve Eq. (2.1.2), the signal can be successfully reconstructed [36, 81]. In addition, when the measurement is corrupted by noise, the system model is written as
$$\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{n},$$
where $\mathbf{n}$ is the additive measurement noise. Then, the $\ell_1$-minimization problem can be formulated as
$$\min_{\mathbf{x}} \|\mathbf{x}\|_1 \quad \text{subject to} \quad \|\mathbf{y} - \mathbf{A}\mathbf{x}\|_2 \leq \epsilon,$$
where $\epsilon$ is a pre-determined noise level of the system. This type of problem, called basis pursuit denoising (BPDN), has been well studied in the convex optimization field [82, 86] and can be solved by many effective approaches, e.g. the interior-point method. When the noise information is not known, the Lagrangian unconstrained form is exploited to obtain an alternative problem formulation as 
$$\min_{\mathbf{x}} \; \frac{1}{2}\|\mathbf{y} - \mathbf{A}\mathbf{x}\|_2^2 + \lambda \|\mathbf{x}\|_1.$$
This is known as the least absolute shrinkage and selection operator (LASSO) problem . The fixed regularization parameter $\lambda$ is used to control the sparsity level of the solution, via tuning the weight between the least squared error term and the sparsity term. Since the solution of Eq. (2.1.2) is sensitive to the value of $\lambda$, the least angle regression stagewise (LARS) algorithm  is used to simultaneously optimize the parameter $\lambda$ and find the solution of Eq. (2.1.2).
While the $\ell_1$-minimization (BP) algorithm can effectively reconstruct the sparse signal vector via the linear programming technique, it requires a substantial computational cost, in particular for large-scale applications. For example, the solver based on the interior-point method has an associated computational complexity order of $\mathcal{O}(N^3)$, where $N$ is the signal dimension. Such a computational cost is burdensome for some real-time systems, e.g. wireless communication systems.
Faced with this, greedy algorithms were proposed and drew much attention, due to their lower computational overhead than the BP algorithm . In a greedy algorithm, a subset of the signal support, i.e., the index set of non-zero entries, is iteratively updated until a good estimate is obtained. When the support is accurately estimated, the underdetermined system can be converted into an overdetermined one by removing the columns of the measurement matrix corresponding to zero elements. Then, the elements of the support can be estimated by using conventional estimation techniques, e.g. the least squares (LS) estimator. The most popular greedy algorithm is orthogonal matching pursuit (OMP) . It iteratively updates the estimate of the signal support by choosing the column of the measurement matrix that has the largest correlation with the residual. Consider the model $\mathbf{y} = \mathbf{A}\mathbf{x}$, where each column of $\mathbf{A}$ is normalized and the sparsity of the signal $\mathbf{x}$, i.e., $k = \|\mathbf{x}\|_0$, is known. The OMP algorithm starts from the initial estimate $\mathbf{x}^0 = \mathbf{0}$ and the residual $\mathbf{r}^0 = \mathbf{y}$. The support set of the initial estimate is $\Lambda^0 = \emptyset$. In the $i$-th iteration, the column $\mathbf{a}_{\lambda^i}$ that has the largest correlation with the residual is chosen, i.e.,
$$\lambda^i = \arg\max_{j} \left|\langle \mathbf{a}_j, \mathbf{r}^{i-1} \rangle\right|,$$
and its index is added into the support set, i.e., $\Lambda^i = \Lambda^{i-1} \cup \{\lambda^i\}$. Then, the estimate and the residual of this iteration are updated via
$$\mathbf{x}^i = \arg\min_{\mathbf{x}:\, \mathrm{supp}(\mathbf{x}) \subseteq \Lambda^i} \|\mathbf{y} - \mathbf{A}\mathbf{x}\|_2, \qquad \mathbf{r}^i = \mathbf{y} - \mathbf{A}\mathbf{x}^i.$$
The iteration continues until the size of the estimated support set reaches $k$. Based on the OMP algorithm, where only one column is selected in each iteration, many variants of OMP have been proposed, e.g. generalized OMP (gOMP) , compressive sampling matching pursuit (CoSaMP) , subspace pursuit (SP) , and multipath matching pursuit (MMP) . For these variants, multiple promising columns are selected in each iteration and the support set is then refined by adding the indices of the selected columns, which can outperform the OMP algorithm at the cost of a higher computational complexity.
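The OMP iterations above can be sketched in a few lines of Python/NumPy. This is a minimal sketch, not an implementation used in this thesis; the sanity check deliberately uses an orthonormal matrix, for which OMP provably selects the true support (in the underdetermined CS setting, recovery instead holds under coherence or RIP conditions on the measurement matrix):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily estimate a k-sparse x from y = Ax.

    Assumes the columns of A are normalized, as in the text.
    """
    N = A.shape[1]
    residual = y.astype(float).copy()
    support = []
    x_hat = np.zeros(N)
    for _ in range(k):
        # Choose the column with the largest correlation with the residual.
        correlations = np.abs(A.T @ residual)
        correlations[support] = 0.0          # never re-select a column
        support.append(int(np.argmax(correlations)))
        # Least-squares estimate restricted to the current support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x_hat = np.zeros(N)
        x_hat[support] = coef
        residual = y - A @ x_hat
    return x_hat

# Sanity check with an orthonormal A (A^T A = I): the correlations equal the
# remaining signal entries, so each iteration picks a true support index.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((16, 16)))
x_true = np.zeros(16)
x_true[[2, 7, 11]] = [1.5, -2.0, 0.7]        # 3-sparse signal
x_rec = omp(Q, Q @ x_true, k=3)
print(np.allclose(x_rec, x_true))            # True
```

The least-squares step on the accumulated support is exactly the conversion from an underdetermined to an overdetermined system mentioned above.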
Statistical Sparse Recovery:
For the model in Eq. (2.1.2), the signal vector $\mathbf{x}$ can be treated as a random vector and inferred by using the Bayesian framework in statistical sparse recovery algorithms. For example, in the maximum-a-posteriori (MAP) approach, an estimate of $\mathbf{x}$ can be expressed as
$$\hat{\mathbf{x}} = \arg\max_{\mathbf{x}}\, p(\mathbf{x}|\mathbf{y}) = \arg\max_{\mathbf{x}}\, p(\mathbf{y}|\mathbf{x})\, p(\mathbf{x}),$$
where $p(\mathbf{x})$ is the prior distribution of the signal $\mathbf{x}$. In order to model the sparsity of the signal vector $\mathbf{x}$, $p(\mathbf{x})$ is designed in such a way that it decreases with increasing magnitude of $\mathbf{x}$. Well-known examples include the independent and identically distributed (i.i.d.) Gaussian and Laplacian distributions. In addition, another widely used statistical sparse recovery algorithm is sparse Bayesian learning (SBL) . In the SBL, the prior distribution of the signal vector is modeled as zero-mean Gaussian with the variance parameterized by a hyper-parameter. Then, the hyper-parameter and the signal vector are estimated simultaneously. It is noteworthy that the hyper-parameter can control the sparsity and the distribution of the signal vector $\mathbf{x}$. With an appropriately chosen hyper-parameter, the SBL algorithm can outperform the $\ell_1$-minimization algorithm .
Iterative Thresholding Algorithm:
For iterative thresholding algorithms, the signal vector is estimated in an iterative way; these algorithms particularly include the iterative hard thresholding (IHT) algorithm , the iterative soft thresholding (IST) algorithm , and the approximate message passing (AMP) algorithm . Based on the model $\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{n}$, the three algorithms are briefly presented in the following. The IHT algorithm can be expressed by
$$\mathbf{x}^{i+1} = H_k\!\left(\mathbf{x}^i + \mathbf{A}^{\mathrm{T}}(\mathbf{y} - \mathbf{A}\mathbf{x}^i)\right),$$
where $H_k(\cdot)$, called the hard thresholding function, is a non-linear operator that sets all but the $k$ largest (in magnitude) elements of the input vector to zero, and $k$ is the known sparsity of the estimated signal vector $\mathbf{x}$. In the $i$-th iteration, the estimate of $\mathbf{x}$ is denoted as $\mathbf{x}^i$. Intuitively, the algorithm makes progress by moving in the direction of the negative gradient of the squared residual $\|\mathbf{y} - \mathbf{A}\mathbf{x}\|_2^2$ and then promotes sparsity by applying the hard thresholding function $H_k(\cdot)$.
The IST algorithm is another iterative thresholding algorithm, which uses a soft thresholding function instead of a hard thresholding function. Similarly to the hard thresholding function, the soft thresholding function has vector input and output, and it operates in an element-wise way. The soft thresholding function associated with the $j$-th element of the input vector $\mathbf{u}$ is given by
$$\eta_{\tau}(u_j) = \mathrm{sign}(u_j)\left(|u_j| - \tau\right)_{+},$$
where $u_j$ is the $j$-th element of the input vector $\mathbf{u}$, $\tau$ is a threshold control parameter, and $(|u_j| - \tau)_{+}$ equals $|u_j| - \tau$ if $|u_j| > \tau$ and equals zero otherwise. Based on the soft thresholding function, the IST algorithm proceeds with the iteration
$$\mathbf{x}^{i+1} = \eta_{\tau}\!\left(\mathbf{x}^i + \mathbf{A}^{\mathrm{T}}(\mathbf{y} - \mathbf{A}\mathbf{x}^i)\right).$$
Since the soft thresholding function is proved to be the proximity operator of the $\ell_1$-norm , the IST algorithm with a properly determined threshold control parameter can be equivalent to the $\ell_1$-minimization problem. For the family of iterative thresholding algorithms, including the IHT algorithm and the IST algorithm, as only the multiplication of a vector by the measurement matrix is required in each iteration, the computational complexity is very small and the storage requirement is low, particularly compared to the $\ell_1$-minimization algorithm and the greedy algorithm. Hence, the iterative thresholding algorithms are efficient for large-scale systems. However, these algorithms fall short in the sparsity-undersampling tradeoff, compared to that of the $\ell_1$-minimization algorithm. Therefore, based on the theory of belief propagation in graphical models, the AMP algorithm was proposed to achieve a satisfactory sparsity-undersampling tradeoff that can match the theoretical tradeoff of the $\ell_1$-minimization algorithm . Besides, the AMP algorithm imposes a much lower computational cost than the $\ell_1$-minimization algorithm. As the AMP algorithm acts as a building block for the work in Chapter 3, the algorithm and its analysis will be briefly presented in the next part.
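The soft thresholding function and the IST iteration can be sketched in Python/NumPy as follows. This is a minimal illustration with arbitrarily chosen sizes and threshold; the sanity check exploits the simple fact that, for an orthonormal measurement matrix ($\mathbf{A}^{\mathrm{T}}\mathbf{A} = \mathbf{I}$), the IST iteration reaches its fixed point $\eta_{\tau}(\mathbf{A}^{\mathrm{T}}\mathbf{y})$ after a single step:

```python
import numpy as np

def soft_threshold(u, tau):
    """Element-wise soft thresholding: sign(u) * max(|u| - tau, 0)."""
    return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

def ist(A, y, tau, n_iter=50):
    """Iterative soft thresholding: x <- eta_tau(x + A^T (y - A x))."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x), tau)
    return x

# The thresholding rule itself: entries shrink toward zero by tau.
print(soft_threshold(np.array([3.0, -0.5, 1.2]), 1.0))  # entries 2.0, 0.0, 0.2

# Sanity check with an orthonormal A: the fixed point is eta_tau(A^T y).
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))
x_true = np.zeros(8)
x_true[[1, 5]] = [2.0, -3.0]
y = Q @ x_true
x_hat = ist(Q, y, tau=0.5)
print(np.allclose(x_hat, soft_threshold(Q.T @ y, 0.5)))  # True
```

Each iteration only requires products with $\mathbf{A}$ and $\mathbf{A}^{\mathrm{T}}$, which is the low-complexity property emphasized above.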
2.1.3 Approximate Message Passing Algorithm
The AMP algorithm was first proposed in  and further developed by exploiting the distribution of the unknown vector as prior information in [96, 98]. The gist of the AMP algorithm is that it exploits an iterative refining process to recover the sparse unknown vector via a Gaussian approximation during message passing. It enjoys a dramatically low computational complexity while achieving performance identical to linear programming in terms of the sparsity-undersampling tradeoff . Based on the system model $\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{n}$, with $\mathbf{x} \in \mathbb{R}^{N}$, $\mathbf{y} \in \mathbb{R}^{M}$, and $\mathbf{A} \in \mathbb{R}^{M \times N}$, the AMP algorithm proceeds with
$$\mathbf{x}^{t+1} = \eta_t\!\left(\mathbf{x}^{t} + \mathbf{A}^{\mathrm{T}}\mathbf{z}^{t}\right),$$
$$\mathbf{z}^{t} = \mathbf{y} - \mathbf{A}\mathbf{x}^{t} + \frac{N}{M}\,\mathbf{z}^{t-1}\left\langle \eta'_{t-1}\!\left(\mathbf{x}^{t-1} + \mathbf{A}^{\mathrm{T}}\mathbf{z}^{t-1}\right)\right\rangle,$$
where $t$ is the index of iteration, the vector $\mathbf{x}^{t}$ is the estimate of $\mathbf{x}$ in the $t$-th iteration, $\mathbf{z}^{t}$ is the residual of the received signal corresponding to the estimate $\mathbf{x}^{t}$, $\eta_t(\cdot)$ is a designed denoiser function applied to each element of the input vector, $\eta'_t(\cdot)$ is the first-order derivative of $\eta_t(\cdot)$, and $\langle \cdot \rangle$ denotes the average of all entries of the input vector. Note that the third term on the right-hand side of the residual update is the correction term, which is known as the “Onsager term” from statistical physics . The AMP algorithm starts from the point $\mathbf{x}^{0} = \mathbf{0}$ and $\mathbf{z}^{0} = \mathbf{y}$. It can be seen from the iteration that, by exploiting the measurement matrix $\mathbf{A}$, a matched filter is first performed on the residual to obtain the variable $\mathbf{u}^{t} = \mathbf{x}^{t} + \mathbf{A}^{\mathrm{T}}\mathbf{z}^{t}$. The denoiser function receives the vector $\mathbf{u}^{t}$ as the input and outputs the estimate of the $t$-th iteration. Here, the denoiser function input can be modeled  as
$$\mathbf{u}^{t} \approx \mathbf{x} + \tau_t \mathbf{w},$$
where each entry of $\mathbf{w}$ follows the standard Gaussian distribution due to the correction term, and $\tau_t$ denotes a state variable that will be analyzed in the following. The residual update is then used to compute the residual corresponding to the estimate in the $t$-th iteration, and the AMP algorithm proceeds to the next iteration.
For the iterative process in the AMP algorithm, a state variable, denoted by $\tau_t$, $t = 0, 1, \ldots$, and its evolution are introduced to characterize the performance of AMP in each iteration . In particular, for a large-scale system, where the measurement length $M$, the length $N$ of the estimated signal vector, and the sparsity $k$ of the estimated signal vector tend to infinity with fixed ratios $\delta = M/N$ and $\rho = k/N$, the state evolution is given by 
$$\tau_{t+1}^{2} = \sigma^{2} + \frac{1}{\delta}\,\mathbb{E}\!\left[\left(\eta_t\!\left(X + \tau_t W\right) - X\right)^{2}\right],$$
where $\tau_t$ is the state variable in the $t$-th iteration, $\sigma^2$ is the noise variance, $X$ is a random variable following the element-wise distribution of $\mathbf{x}$, and $W$ follows the standard Gaussian distribution. The equality is obtained from the Gaussian approximation of the denoiser input in the $t$-th iteration, and the expectation is taken over the random variables $X$ and $W$. The state evolution begins with 
$$\tau_{0}^{2} = \sigma^{2} + \frac{1}{\delta}\,\mathbb{E}\!\left[X^{2}\right].$$
It can be observed that the squared state variable $\tau_t^2$ characterizes the mean squared error (MSE) of each entry of the estimate in the $t$-th iteration. It implies that, from iteration to iteration, the evolution of the performance of the AMP algorithm in terms of MSE can be tracked by exploiting the state variable $\tau_t$. Note that the state variable is also involved in the AMP algorithm through the Gaussian approximation of the denoiser input. However, obtaining $\tau_t$ via the state evolution requires a high computational complexity. Therefore, in the literature , an empirical estimate of $\tau_t^2$, i.e.,
$$\hat{\tau}_t^{2} = \frac{\|\mathbf{z}^{t}\|_2^{2}}{M},$$
is usually adopted during the implementation of the AMP algorithm.
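Putting the pieces together, an AMP iteration with a soft-threshold denoiser and the empirical state estimate $\hat{\tau}_t = \|\mathbf{z}^{t}\|_2/\sqrt{M}$ can be sketched as follows. This is an illustrative sketch only: the threshold multiplier `alpha`, the problem sizes, and the soft-threshold denoiser are arbitrary choices for demonstration, not the denoiser designed in Chapter 3:

```python
import numpy as np

def soft(u, tau):
    """Element-wise soft-threshold denoiser eta(u) = sign(u) * max(|u|-tau, 0)."""
    return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

def amp(A, y, alpha=1.5, n_iter=30):
    """AMP with threshold alpha * tau_t, using tau_t^2 = ||z||^2 / M."""
    M, N = A.shape
    x = np.zeros(N)
    z = y.astype(float).copy()
    for _ in range(n_iter):
        tau = np.linalg.norm(z) / np.sqrt(M)      # empirical state variable
        u = x + A.T @ z                           # matched-filter step
        x_new = soft(u, alpha * tau)              # denoiser
        # Onsager correction: (N/M) * z * average of eta'(u); the derivative
        # of the soft threshold is 1 exactly where |u| exceeds the threshold.
        onsager = (N / M) * z * np.mean(np.abs(u) > alpha * tau)
        z = y - A @ x_new + onsager
        x = x_new
    return x

rng = np.random.default_rng(2)
M, N, k = 120, 240, 10
A = rng.standard_normal((M, N)) / np.sqrt(M)      # i.i.d. entries, variance 1/M
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = 3 * rng.standard_normal(k)
y = A @ x_true                                    # noiseless measurements
x_hat = amp(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The per-iteration cost is dominated by the two matrix-vector products, matching the complexity advantage over $\ell_1$-minimization noted above.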
2.2 Modern Coding Techniques
Some modern coding techniques are presented in this section, which will provide the theoretical tool for our work in Chapter 4.
2.2.1 Distance Metrics and Channel Codes
During data transmission, the transmitted signals are likely to be corrupted by the channel and the noise at the receiver. This results in a received signal with errors, which affects the reliability of reconstructing the original data from the received signal. In order to deal with this problem, error control coding (ECC) has been developed . In ECC, some redundant bits are added to the transmitted data, so that the received errors can be corrected and the original data can be retrieved. Using an ECC can help achieve the same bit error rate (BER) at a lower signal-to-noise ratio (SNR) in a coded system than in a comparable uncoded system . The reduction of the required SNR to achieve the same BER is called the coding gain. For an ECC in digital systems with hard-decision decoding, its error detection and correction capability is determined by the Hamming distance of the ECC. On the other hand, for digital systems with soft-decision decoding, the error performance of an ECC is guided by its Euclidean distance. Therefore, we first introduce two commonly used distance metrics in coding theory and the error detection and correction capability of codes in the following.
There are two important distance metrics widely used for channel coding . One is the Hamming distance, which is defined as the number of differing bits between two codewords. The other is the Euclidean distance, which refers to the straight-line distance between two points in Euclidean space. For a linear block code, the Hamming distance can indicate its error detection and correction capability, particularly when the source emits binary strings over a binary channel. On the other hand, if the source emits the codewords over a Gaussian channel and a soft decoder is exploited at the receiver, the Euclidean distance determines the code error performance. The minimum Hamming (or Euclidean) distance of a code set $\mathcal{C}$ is given by
$$d_{\min} = \min_{\mathbf{c}_1, \mathbf{c}_2 \in \mathcal{C},\, \mathbf{c}_1 \neq \mathbf{c}_2} d(\mathbf{c}_1, \mathbf{c}_2),$$
where $d(\mathbf{c}_1, \mathbf{c}_2)$ denotes the Hamming (or Euclidean) distance of the two codewords $\mathbf{c}_1$ and $\mathbf{c}_2$. For a code set with dimension $k$ and length $n$, its minimum Hamming distance should satisfy
$$d_{\min} \leq n - k + 1,$$
which is the Singleton bound .
Error Detection and Correction:
In coding theory, error detection and correction are a key enabler for the reliable delivery of digital data over unreliable communication channels, where communication channels are subject to channel noise and errors can be introduced during the transmission . In particular, the error detection technique allows detecting errors, and the error correction enables the reconstruction of the original data. For practical communication systems, ECC is an efficient way to perform error detection and correction. For an ECC in digital systems with hard-decision decoding, its error detection and correction capability is determined by its minimum Hamming distance. In particular, an ECC with minimum Hamming distance $d_{\min}$ can detect up to $d_{\min} - 1$ error bits and correct up to $\lfloor (d_{\min} - 1)/2 \rfloor$ error bits .
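The distance-based detection and correction rules above can be checked with a small Python sketch; the binary (3,1) repetition code is used here as a toy example, and the function names are of course hypothetical:

```python
def hamming_distance(c1, c2):
    """Number of differing bits between two equal-length codewords."""
    return sum(b1 != b2 for b1, b2 in zip(c1, c2))

def min_distance(code):
    """Minimum pairwise Hamming distance of a code (a set of codewords)."""
    words = list(code)
    return min(hamming_distance(a, b)
               for i, a in enumerate(words) for b in words[i + 1:])

# The binary (3,1) repetition code {000, 111} has d_min = 3, so it detects
# up to d_min - 1 = 2 error bits and corrects (d_min - 1) // 2 = 1 error bit.
code = ["000", "111"]
d = min_distance(code)
print(d, d - 1, (d - 1) // 2)   # 3 2 1
```

Note that this code also meets the Singleton bound $d_{\min} \leq n - k + 1 = 3$ with equality.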
With the knowledge of distance metrics and the error detection and correction capability of codes, we briefly introduce some widely used channel codes. Generally, there are two structurally different types of channel codes for error control in communication and storage systems, i.e., block codes and convolutional codes. Block codes can be further divided into two categories, i.e., linear and nonlinear block codes. Nonlinear block codes are not widely used in practical applications and have not been widely investigated . Therefore, we mainly focus on linear block codes here.
Linear Block Codes:
A code is linear if the sum of any two codewords, i.e., $\mathbf{c}_1 + \mathbf{c}_2$ for $\mathbf{c}_1, \mathbf{c}_2 \in \mathcal{C}$, is still a codeword in the code set $\mathcal{C}$. We assume that an information source is a sequence of binary symbols over the Galois field of order two, i.e., GF(2). In a block coding system, the information sequence is segmented into message blocks of $k$ information bits, and there are $2^k$ distinct messages. For the channel encoder, each input message $\mathbf{u}$ of $k$ information bits is encoded into a longer sequence $\mathbf{c}$ of $n$ binary digits according to certain encoding rules, where $k$ and $n$ are called the dimension and length of a codeword, respectively, and they satisfy $k < n$. The binary sequence $\mathbf{c}$ is called the codeword of the message $\mathbf{u}$. The classical linear encoding rule can be expressed as 
$$\mathbf{c} = \mathbf{u}\mathbf{G},$$
where the binary matrix $\mathbf{G}$ is called the generator matrix of dimension $k \times n$. A column of $\mathbf{G}$ corresponds to an encoded bit of a codeword and a row corresponds to an information bit of the message. If the message is contained in the codeword in an unaltered way, the encoding mapping is called systematic. Since $2^k$ distinct information messages exist, there are $2^k$ distinct codewords accordingly. This set of codewords is said to form an $(n, k)$ block code set, and each codeword satisfies 
$$\mathbf{c}\mathbf{H}^{\mathrm{T}} = \mathbf{0},$$
where the matrix $\mathbf{H}$ of dimension $(n-k) \times n$ is called the parity-check matrix. The columns of $\mathbf{H}$ correspond to the bits of a codeword and the rows correspond to the parity-check equations fulfilled by a valid codeword. It implies that if a codeword is valid in the code set, it should satisfy the parity-check equation above. The code rate is defined as $R = k/n$, which can be interpreted as the average number of information bits carried by each code bit. For an $(n, k)$ block code, the $n - k$ bits added to each input message by the channel encoder are called redundant bits. These redundant bits carry no new information and their main function is to provide the code with the capability of detecting and correcting transmission errors caused by the channel noise or interference.
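As a concrete instance of the encoding rule $\mathbf{c} = \mathbf{u}\mathbf{G}$ and the parity-check condition $\mathbf{c}\mathbf{H}^{\mathrm{T}} = \mathbf{0}$, the following sketch uses the standard systematic (7,4) Hamming code; this particular code is an illustrative choice, not a code used elsewhere in this thesis:

```python
import numpy as np

# Systematic generator and parity-check matrices of the (7,4) Hamming code:
# G = [I_4 | P] and H = [P^T | I_3], so that G H^T = 0 (mod 2).
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])
H = np.hstack([P.T, np.eye(3, dtype=int)])

u = np.array([1, 0, 1, 1])      # 4-bit message
c = u @ G % 2                   # encoding rule c = uG over GF(2)
print(c)                        # [1 0 1 1 0 1 0] -- first 4 bits equal u
print(c @ H.T % 2)              # [0 0 0] -- all-zero syndrome: valid codeword
```

The all-zero syndrome is exactly the statement that every row of $\mathbf{H}$ encodes one parity-check equation satisfied by a valid codeword; here $R = k/n = 4/7$.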
Classical linear block codes include repetition codes and maximum distance separable (MDS) codes [105, 51]. The repetition code is one of the most basic linear block codes, which repeats the message several times. If the channel corrupts some repetitions, the receiver can detect the occurrence of transmission errors according to the differences among the received copies. Moreover, the receiver can recover the original message by choosing the received value that occurs most often. The implementation of a repetition code is extremely simple, while it has a very low code rate. As a result, the repetition code is often used as a component of other codes, e.g. repeat-accumulate (RA) codes [106, 107, 108] and turbo-like codes , to achieve an excellent error correction performance.
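The repeat-and-majority-vote procedure just described can be sketched in a few lines of Python (the function names and the rate-1/3 choice are illustrative):

```python
from collections import Counter

def rep_encode(bits, n=3):
    """Repeat each bit n times (rate 1/n repetition code)."""
    return [b for b in bits for _ in range(n)]

def rep_decode(coded, n=3):
    """Majority-vote decoding of each block of n repetitions."""
    return [Counter(coded[i:i + n]).most_common(1)[0][0]
            for i in range(0, len(coded), n)]

msg = [1, 0, 1]
tx = rep_encode(msg)            # [1, 1, 1, 0, 0, 0, 1, 1, 1]
tx[1] ^= 1                      # flip one bit in the first block
tx[7] ^= 1                      # and one bit in the last block
print(rep_decode(tx))           # [1, 0, 1] -- one error per block is corrected
```

With $n = 3$ the code has $d_{\min} = 3$, so a single flipped bit per block is corrected, consistent with the $\lfloor (d_{\min}-1)/2 \rfloor$ rule introduced earlier.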
The MDS code is a kind of linear block code which meets the Singleton bound. Since the error detecting and correcting capability is determined by the minimum Hamming distance, it can be seen that, given the code dimension $k$ and code length $n$, the MDS code has the largest minimum Hamming distance and thus the largest error detecting and correcting capability. According to the Singleton bound, the MDS code has a minimum Hamming distance equal to $n - k + 1$. Then, it can detect up to $n - k$ error bits. It implies that as long as any $k$ bits in an MDS codeword are correctly received, the codeword can be successfully decoded.
Convolutional codes, introduced in , refer to codes in which the encoder maps input streams of data into a larger number of output streams of data. These codes are highly structured to allow a simple implementation and a good performance with short block lengths. The encoding is realized by sending the input streams over linear filters. An example of a convolutional code is shown in Fig. 2.1.
The information bits are fed into the linear encoder circuit and this circuit outputs the corresponding codeword. This code construction sets additional constraints on the characteristics of the corresponding matrices $\mathbf{G}$ and $\mathbf{H}$. Note that the filtering operation can be expressed as a convolution, which leads to the name, i.e., convolutional codes. The most popular decoding algorithm for convolutional codes is the Viterbi algorithm , which is an efficient implementation of the optimal maximum likelihood decoder. The gist of the Viterbi algorithm is the sequential computation of the path metric and the tracking of survivor paths in the code trellis. This algorithm was extended in  for generating soft outputs, called the soft-output Viterbi algorithm (SOVA). Alternatively, one can also use the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm as proposed in  to generate the soft outputs for iterative decoding.
Based on these classical channel codes, several modern codes have been developed over the past years, which include polar codes [113, 114, 115], turbo codes [116, 117, 118], and low-density parity-check (LDPC) codes [119, 120, 121]. Polar codes are a class of linear block codes whose encoding construction is based on a multiple recursive concatenation of a short kernel code to transform the physical channel into virtual outer channels. When the number of recursions becomes large, the virtual channels tend to either have high or low reliability (i.e., they polarize), and the data bits are allocated to the most reliable channels. For turbo codes, the encoding is a concatenation of two (or more) convolutional encoders separated by interleavers, and the decoding consists of two (or more) soft-in/soft-out convolutional decoders, which iteratively feed probabilistic information back and forth to each other. LDPC codes are another class of linear block codes on graphs , which are constructed by using a sparse Tanner graph  and can provide near-capacity performance with implementable message-passing decoders. Since we will exploit some key techniques of codes on graphs, e.g. LDPC codes, for the work in Chapter 4, we present more information on LDPC codes in the following.
2.2.2 Codes on Graph and Tanner Graph
The LDPC code is an important class of linear block codes with reasonably low complexity and implementable decoders [122, 123, 124], which can provide near-capacity performance on a large set of data transmission channels and has been adopted in the 5G standard. An LDPC code can be given by the null space of an $M \times N$ parity-check matrix $\mathbf{H}$ that has a low density. A regular LDPC code is a linear code whose parity-check matrix $\mathbf{H}$ has constant column weight $d_v$ and row weight $d_c$, where $M d_c = N d_v$ and $d_v \ll N$. If $\mathbf{H}$ is a low-density matrix but with variable column and row weights, the code is called an irregular LDPC code . It is noteworthy that the density of $\mathbf{H}$ needs to be sufficiently low to permit effective iterative decoding, which is the key innovation behind the invention of LDPC codes. Moreover, LDPC decoding is verifiable in the sense that decoding to a correct codeword is a detectable event.
A graphical representation of an LDPC code, called a Tanner graph, provides a complete representation of the code and aids in the description of its decoding algorithm. A Tanner graph is a bipartite graph, that is, a graph whose nodes can be divided into two disjoint and independent sets, with edges connecting only nodes of different sets. The two sets of nodes in a Tanner graph are called the variable nodes (VNs) and the check nodes (CNs). For a code with a parity-check matrix $\mathbf{H} = [h_{ij}]$ of dimension $m \times n$, its Tanner graph is drawn as follows: CN $i$ is connected to VN $j$ whenever $h_{ij} = 1$. According to this rule, there are $m$ CNs and $n$ VNs in the Tanner graph, which correspond to the $m$ check equations and the $n$ code bits, respectively, as shown in Fig. 2.2.
The number of edges connected to a VN (or a CN) is called the degree of this VN (or CN). Denote the number of VNs of degree $i$ and the number of CNs of degree $j$ as $V_i$ and $C_j$, respectively. Since the edge counts must match, we have $\sum_i i V_i = \sum_j j C_j$. It is convenient to introduce the following compact notation
$$V(x) = \sum_{i=1}^{d_v} V_i x^i, \qquad C(x) = \sum_{j=1}^{d_c} C_j x^j,$$
where $d_v$ and $d_c$ are the maximum degrees of the VNs and the CNs, respectively. Here, $V(x)$ and $C(x)$ are the polynomial representations of the VN degree distribution and the CN degree distribution from a node perspective, respectively. Moreover, the polynomials $V(x)$ and $C(x)$ have non-negative expansions around zero whose integer coefficients are equal to the numbers of nodes of the various degrees. For asymptotic analysis, it is more convenient to introduce the VN and CN degree distributions from an edge perspective, given by
$$\lambda(x) = \sum_{i=1}^{d_v} \lambda_i x^{i-1}, \qquad \rho(x) = \sum_{j=1}^{d_c} \rho_j x^{j-1}, \quad \text{with } \lambda_i = \frac{i V_i}{\sum_k k V_k}, \quad \rho_j = \frac{j C_j}{\sum_k k C_k}.$$
Note that $\lambda(x)$ and $\rho(x)$ are also polynomials with non-negative expansions around zero. In addition, $\lambda_i$ ($\rho_j$) is equal to the fraction of edges that connect to VNs of degree $i$ (CNs of degree $j$). In other words, $\lambda_i$ ($\rho_j$) is the probability that an edge chosen uniformly at random from the graph is connected to a VN of degree $i$ (a CN of degree $j$).
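To make the node-to-edge-perspective conversion concrete, the following minimal Python sketch (not part of the thesis; the function name and node counts are hypothetical) computes $\lambda_i$ and $\rho_j$ from node-perspective counts:

```python
def edge_perspective(node_dist):
    """Convert a node-perspective degree distribution {degree: count}
    into edge-perspective coefficients {degree: fraction of edges}."""
    total_edges = sum(d * n for d, n in node_dist.items())
    return {d: d * n / total_edges for d, n in node_dist.items()}

# Hypothetical irregular ensemble: VNs of degree 2 and 3, CNs of degree 6.
vn_nodes = {2: 300, 3: 200}   # 300*2 + 200*3 = 1200 edge sockets
cn_nodes = {6: 200}           # 200*6 = 1200 edge sockets -- counts match
lam = edge_perspective(vn_nodes)  # lambda_i: edge fraction on degree-i VNs
rho = edge_perspective(cn_nodes)  # rho_j: edge fraction on degree-j CNs
```

Both `lam` and `rho` sum to one, consistent with their interpretation as probabilities over randomly chosen edges.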
2.2.3 Iterative Decoding Algorithm on Code Graph
Iterative decoding is a generic term referring to decoding algorithms that proceed in iterations [117, 121, 115]. An important subclass of iterative algorithms are message-passing algorithms, which obey the rule that an outgoing message along an edge depends only on the incoming messages along all edges other than this edge itself. When the messages are probabilities, or “beliefs”, the algorithm is known as the sum-product algorithm (SPA) and is also called the belief propagation algorithm (BPA). In the SPA, the message passed for a bit $b$ is its log-likelihood ratio (LLR), given by
$$L(b) = \ln \frac{P(b = 0 \mid \mathbf{y})}{P(b = 1 \mid \mathbf{y})},$$
where $P(b = 0 \mid \mathbf{y})$ and $P(b = 1 \mid \mathbf{y})$ are the a posteriori probabilities (APPs) that, given the received word $\mathbf{y}$, the bit $b$ equals $0$ and $1$, respectively. Denote the messages passed from VN $j$ to CN $i$ and from CN $i$ to VN $j$ as $L_{j \to i}$ and $L_{i \to j}$, respectively, which are given by
$$L_{j \to i} = L_j + \sum_{i' \in \mathcal{N}(j) \setminus i} L_{i' \to j}, \qquad L_{i \to j} = 2 \tanh^{-1} \Bigg( \prod_{j' \in \mathcal{N}(i) \setminus j} \tanh \Big( \frac{L_{j' \to i}}{2} \Big) \Bigg),$$
where $L_j$ is the channel LLR of VN $j$, $\mathcal{N}(j)$ denotes the set of CNs neighbouring VN $j$, and $\mathcal{N}(i)$ denotes the set of VNs neighbouring CN $i$. When VN $j$ computes the message sent to CN $i$, i.e., $L_{j \to i}$, it combines the LLR information from all its neighbouring CNs except the recipient CN $i$. Similarly, when CN $i$ computes the message sent to VN $j$, i.e., $L_{i \to j}$, it combines the LLR information from all its neighbouring VNs except the recipient VN $j$. Since the information transmitted to a certain CN or VN does not contain the information received from that node itself, the transmitted information is called the extrinsic information.
At each iteration, all VNs process their inputs and pass extrinsic information up to their neighbouring CNs. All CNs then process their inputs and pass extrinsic information down to their neighbouring VNs. The procedure repeats, starting from the variable nodes. After a preset maximum number of repetitions (or iterations) of this VN/CN decoding round, or after some stopping criterion has been met, the decoder estimates the LLRs, from which decisions on the bits are made. When the code graph has no cycles, or the lengths of the cycles are large, the estimates are very accurate and the decoder has near-optimal MAP performance. It is noteworthy that the derivation of the SPA relies on the assumption that the LLR quantities received at each node from its neighbours are independent. However, this assumption no longer holds once the number of iterations exceeds half of the Tanner graph's girth.
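The standard log-domain VN and CN updates can be sketched in Python as follows (an illustration rather than the thesis's implementation; function names and the message-list convention are hypothetical):

```python
import math

def vn_update(L_ch, incoming, exclude):
    """Extrinsic VN-to-CN message: channel LLR plus all incoming CN
    messages except the one from the recipient CN (index `exclude`)."""
    return L_ch + sum(L for i, L in enumerate(incoming) if i != exclude)

def cn_update(incoming, exclude):
    """Extrinsic CN-to-VN message via the tanh rule, excluding the
    message that arrived from the recipient VN."""
    prod = 1.0
    for i, L in enumerate(incoming):
        if i != exclude:
            prod *= math.tanh(L / 2.0)
    return 2.0 * math.atanh(prod)

# A degree-4 VN with channel LLR 1.0 and three incoming CN messages:
msg_to_cn0 = vn_update(1.0, [0.5, -0.2, 0.3], exclude=0)  # 1.0 - 0.2 + 0.3
# A degree-4 CN with three incoming VN messages of LLR 2.0 each:
msg_to_vn0 = cn_update([2.0, 2.0, 2.0], exclude=0)
```

Note that the CN output magnitude is always smaller than the weakest incoming magnitude, reflecting the reliability loss of combining checks.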
2.2.4 Density Evolution and EXIT Chart
For an iterative decoder with a finite message alphabet, the distribution of the messages passed along the edges in each iteration can be expressed by a system of coupled recursive functions. The procedure of using this system of coupled recursive functions to track the evolution of the message distributions is termed density evolution (DE). DE provides an analytical means of tracking the performance of the iterative decoder in each iteration, which allows one to design the code structure to improve the decoding performance. For example, the performance of an irregular LDPC code can be improved via the design of optimal or near-optimal degree distributions, and this design can be carried out using DE. Moreover, one is interested in the error probability of the messages passed along the edges and its evolution as a function of the iteration number. Based on that, the decoding threshold, defined as the lowest SNR ensuring that the decoder error probability asymptotically converges to zero for an infinite number of iterations, can be predicted through DE. In the following, we adopt the binary erasure channel (BEC) to illustrate the derivation of DE. Consider the edge-perspective degree distributions $\lambda(x)$ and $\rho(x)$ of the variable nodes and check nodes of an LDPC code, as given in Eq. (2.0). Let $\epsilon$ be the probability that a transmitted bit is erased. From the definition of the SPA, the initial VN-to-CN message is equal to the received message, which is an erasure with probability $\epsilon$. The CN-to-VN message emitted by a CN of degree $j$ is an erasure if any one of the incoming messages is an erasure. Denote by $x_{\ell-1}$ the probability that an incoming message is an erasure in the $(\ell-1)$-th iteration. The probability that the outgoing message is an erasure is then equal to $1 - (1 - x_{\ell-1})^{j-1}$, where all incoming messages are assumed independent.
As an edge is connected to a CN of degree $j$ with probability $\rho_j$, the erasure probability of a CN-to-VN message in the $\ell$-th iteration is equal to
$$y_\ell = 1 - \rho(1 - x_{\ell-1}).$$
Consider an edge connected to a VN of degree $i$. The VN-to-CN message along this edge in the $\ell$-th iteration is an erasure if the received value of the associated VN is an erasure and all the other incoming messages are erasures, which happens with probability $\epsilon\, y_\ell^{i-1}$. By averaging this probability over the edge degree distribution $\lambda(x)$, we obtain
$$x_\ell = \epsilon\, \lambda\big(1 - \rho(1 - x_{\ell-1})\big).$$
Eq. (2.0) is a recursive function of the erasure probability $x_\ell$. Its evolution characterizes the decoding performance of the LDPC code in each iteration.
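The DE recursion above can be run numerically to estimate the BEC decoding threshold (the largest erasure probability for which the recursion is driven to zero). The following Python sketch is illustrative only; `de_threshold`, the iteration cap, and the tolerances are our own choices, not the thesis's:

```python
def de_threshold(lam, rho, tol=1e-4, iters=2000):
    """Binary-search the BEC erasure threshold of an LDPC ensemble:
    the largest eps for which x_l = eps * lambda(1 - rho(1 - x)) -> 0.
    lam, rho: edge-perspective coefficients, where coeff[d] multiplies
    x**(d-1), matching lambda(x) and rho(x) in the text."""
    def poly(coeffs, x):
        return sum(c * x ** (d - 1) for d, c in coeffs.items())

    def converges(eps):
        x = eps  # initial VN-to-CN erasure probability
        for _ in range(iters):
            x = eps * poly(lam, 1.0 - poly(rho, 1.0 - x))
        return x < 1e-7

    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if converges(mid):
            lo = mid
        else:
            hi = mid
    return lo

# (3,6)-regular ensemble: lambda(x) = x^2, rho(x) = x^5
eps_star = de_threshold({3: 1.0}, {6: 1.0})
```

For the (3,6)-regular ensemble the returned value should lie near the well-known BEC threshold of about 0.429.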
As an alternative to DE, the extrinsic-information-transfer (EXIT) chart technique provides a graphical tool for estimating the convergence behaviour of an iterative decoder. The basic idea behind the EXIT chart is that the VN processors and CN processors work cooperatively and iteratively to make each bit decision in the iterative decoder, with the metric of interest improving with each half-iteration. For both the VN processors and the CN processors, the relation between the input metric and the output metric can be described by a transfer curve, and the output metric of one processor is the input metric of its companion processor. Therefore, both transfer curves can be plotted on the same axes, but with the abscissa and ordinate reversed for one processor, which generates the EXIT chart. Furthermore, the EXIT chart provides a graphical method to predict the decoding threshold of an ensemble of codes characterized by given VN and CN degree distributions. In particular, the decoding threshold is obtained as the SNR at which the VN processor transfer curve just touches the CN processor transfer curve.
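On the BEC the transfer curves of a $(d_v, d_c)$-regular ensemble have the closed forms $I_{E,V}(I_A) = 1 - \epsilon (1 - I_A)^{d_v - 1}$ and $I_{E,C}(I_A) = I_A^{d_c - 1}$, so the "open tunnel" condition can be checked numerically. A hedged Python sketch (hypothetical function name, arbitrary sampling density):

```python
def exit_tunnel_open(eps, dv, dc, samples=1000):
    """Check whether the EXIT tunnel is open for a (dv, dc)-regular
    ensemble on a BEC(eps): the VN transfer curve must stay strictly
    above the inverse of the CN transfer curve at every sampled point."""
    for k in range(1, samples):
        i_a = k / samples
        i_e_vn = 1.0 - eps * (1.0 - i_a) ** (dv - 1)  # VN transfer curve
        i_e_cn_inv = i_a ** (1.0 / (dc - 1))          # inverse CN curve
        if i_e_vn <= i_e_cn_inv:
            return False  # curves touch or cross: decoder gets stuck
    return True

# (3,6)-regular ensemble: open below the ~0.429 threshold, closed above
open_04 = exit_tunnel_open(0.40, 3, 6)
open_05 = exit_tunnel_open(0.50, 3, 6)
```

Sweeping `eps` until the tunnel just closes reproduces, graphically, the same threshold that the DE recursion predicts.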
2.3 Physical-layer Network Coding
Physical-layer network coding (PNC) was first proposed for the two-way relay channel in 2006, exploiting the network coding operation [129, 130, 131, 132] on superimposed electromagnetic (EM) waves to embrace interference. It has been shown to boost the throughput and significantly improve the reliability of multi-way relay channels [133, 134, 135, 136, 137, 138]. The basic idea of PNC can be summarized using the two-way relay channel for simplicity, where users A and B want to exchange packets via a relay node, as shown in Fig. 2.4.
Each round of packet exchange consists of two equal-duration time slots, called the uplink phase and the downlink phase, respectively. In the uplink phase, the two users simultaneously transmit their own signals, denoted by $x_A$ and $x_B$, to the relay. In the downlink phase, the relay broadcasts a signal $x_R$, which can be represented as a function of the two signals, i.e.,
$$x_R = f(x_A, x_B).$$
Upon receiving the signal $x_R$, each user extracts the other user's information by exploiting its own signal, which completes the round of packet exchange. Obviously, the function $f(\cdot)$, called the PNC mapping function, plays a pivotal role in the system performance. Since the mapping function maps a superimposed EM signal to a desired network-coded signal, it should be designed according to the employed modulation constellation. The original work on PNC suggests using an XOR function of the two users' packets [68, 69] for the two-way relay channel with binary phase-shift keying (BPSK) and quadrature phase-shift keying (QPSK) modulations, which is a well-known function for PNC operations. In fact, $f(\cdot)$ can be a linear function [139, 140] or a non-linear function [133, 141]. For the linear mapping function, $x_R$ is a linear combination of the two users' packets over a finite field, which offers low computational complexity and scalability compared to the non-linear mapping function. A linear PNC scheme has been proposed for real Rayleigh fading two-way relay channels with pulse amplitude modulation (PAM). Furthermore, linear PNC has been extended to complex Rayleigh fading two-way relay channels, where a design criterion for linear PNC, namely minimum set-distance maximization, is proposed to achieve the optimal error performance at high SNRs.
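As a small illustration of the XOR mapping for BPSK on the two-way relay channel, consider the idealized assumption of unit channel gains and no noise (a sketch, not the thesis's scheme; names are hypothetical). The superimposed signal $x_A + x_B$ lies in $\{-2, 0, +2\}$, and only the XOR of the bits, not the individual bits, is recoverable from it:

```python
def bpsk(bit):
    """BPSK mapping: 0 -> +1, 1 -> -1."""
    return 1.0 - 2.0 * bit

def pnc_xor_demap(y, threshold=1.0):
    """Relay's PNC demapping for BPSK with unit channel gains:
    |y| near 0 means the two bits differ (XOR = 1), |y| near 2
    means they agree (XOR = 0)."""
    return 1 if abs(y) < threshold else 0

# Noiseless check over all bit pairs: demap(x_A + x_B) == b_A XOR b_B
results = []
for b_a in (0, 1):
    for b_b in (0, 1):
        y = bpsk(b_a) + bpsk(b_b)
        results.append(pnc_xor_demap(y) == (b_a ^ b_b))
```

Each user can then XOR the broadcast network-coded bit with its own bit to recover the other user's bit, which is exactly the packet-exchange step described above.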
In addition, the PNC technique can be applied to multiple access networks [71, 72, 143]. A cross-layer scheme has been proposed to improve the throughput of wireless networks in [71, 72]. In particular, PNC decoding and MUD are jointly used to obtain multi-reception results at the physical layer, and these results are then exploited at the MAC layer to recover the users' packets.
2.4 Introduction to Wireless Communication
This section provides a brief overview of wireless communications. The presentation is not intended to be exhaustive and does not provide new results, but it is intended to provide the necessary background to understand Chapters 3-5.
2.4.1 Channel Models
A wireless channel is one of the essential elements of a wireless transmission system. By sufficiently understanding the wireless channel, we can mathematically model its physical properties to facilitate the design of communication systems.
Wireless channels operate through electromagnetic radiation from the transmitter to the receiver. The transmitted signal propagates through a physical medium which contains obstacles and reflecting surfaces. This causes multiple reflected copies of the same source signal to arrive at the receiver at different times. In order to model these effects of the physical medium, channel models are introduced. The effect of multiple wavefronts is represented as multiple paths of a channel. The fluctuation in the envelope of a transmitted radio signal is represented as channel fading. The process of estimating information about the channel is referred to as channel estimation, which is essential for recovering the transmitted signal at the receiver.
In principle, given the transmitted signal, one could solve the electromagnetic field equations to find the electromagnetic field impinging on the receiver antenna. However, this task is non-trivial in practice, since it requires accurate knowledge of the physical properties of the obstructions. Instead, a mathematical model of the physical channel is used, which is simpler and more tractable. In the following, two mathematical channel models are presented, which are widely used in communication system design and in this thesis.
As mentioned above, the change in the strength of transmitted signals through the channel is represented by channel fading. In particular, channel fading can be divided into two types, i.e., large-scale fading and small-scale fading. Large-scale fading mainly refers to the path loss, which is a function of distance and of shadowing by large objects, e.g., buildings and hills. In this case, the signal strength varies over distances of the order of cell sizes. Small-scale fading is caused by the constructive and destructive interference of the multiple signal paths between the transmitter and the receiver. It occurs at spatial scales of the order of the carrier wavelength.
The Rayleigh fading model is the simplest model for wireless channels. It is based on the assumption that there are a large number of statistically independent reflected and scattered paths with random amplitudes in the delay window corresponding to a single tap of the tapped delay line model. Each tap gain is then the sum of many independent random variables. According to the Central Limit Theorem, the net effect can be modeled as a zero-mean complex Gaussian random variable, given by
$$h_\ell \sim \mathcal{CN}(0, \sigma_\ell^2),$$
where $\sigma_\ell^2$ is the variance of tap $\ell$, $\ell = 1, \ldots, L$, and $L$ is the number of taps. The magnitude $|h_\ell|$ of the $\ell$-th tap is a Rayleigh random variable with density
$$f_{|h_\ell|}(x) = \frac{2x}{\sigma_\ell^2} \exp\left( -\frac{x^2}{\sigma_\ell^2} \right), \quad x \ge 0,$$
and the squared magnitude $|h_\ell|^2$ is exponentially distributed with density
$$f_{|h_\ell|^2}(x) = \frac{1}{\sigma_\ell^2} \exp\left( -\frac{x}{\sigma_\ell^2} \right), \quad x \ge 0.$$
The Rayleigh fading model is quite reasonable for scattering mechanisms where there are many small reflectors and no line-of-sight path. The other widely used fading model is the Rician fading model, in which a line-of-sight path is dominant.
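The tap model above can be sanity-checked by simulation. The following Python sketch (illustrative only; the helper name and sample size are our own choices) draws complex Gaussian tap gains and verifies that the average power matches the tap variance, consistent with $|h_\ell|^2$ being exponential with mean $\sigma_\ell^2$:

```python
import math
import random

def rayleigh_tap(sigma2, rng):
    """One tap gain h ~ CN(0, sigma2): independent real and imaginary
    Gaussian parts, each with variance sigma2 / 2."""
    std = math.sqrt(sigma2 / 2.0)
    return complex(rng.gauss(0.0, std), rng.gauss(0.0, std))

rng = random.Random(0)       # fixed seed for reproducibility
sigma2 = 2.0                 # tap variance
samples = [rayleigh_tap(sigma2, rng) for _ in range(50000)]
mean_power = sum(abs(h) ** 2 for h in samples) / len(samples)
# E[|h|^2] = sigma2, so mean_power should be close to 2.0
```

The empirical magnitude histogram of `samples` would likewise follow the Rayleigh density given above.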
Although the transmitted and received signals are continuous-valued for most channels, many crucial processes of practical communication systems, e.g., coding/decoding and modulation/demodulation, are inherently based on discrete signals. Therefore, for performance or channel capacity analysis, we usually model channels with discrete inputs and outputs, called discrete memoryless channels (DMCs). One important DMC is the BEC, which is shown in Fig. 2.5.
It can be seen from the figure that the BEC has a binary input and a ternary output. The input symbols cannot be flipped, but can be erased with probability $\epsilon$. The BEC is widely used in information theory, since it is one of the simplest channels to analyze. In addition, the packet erasure channel (PEC) has been proposed as a generalization of the BEC. For the PEC, a transmitted packet is either received or lost. It is noteworthy that an erasure on the PEC can be seen as the result of deep fading on a practical fading channel. Thus, the PEC can be viewed as a simplified version of the fading channel, and it is commonly used for system design.
2.4.2 Multiple Access Techniques
Multiple Access System:
The idea of using a communication channel to enable several transmitters to send information simultaneously dates back to Thomas A. Edison's 1873 invention of the diplex. In this revolutionary system, two telegraphic messages are simultaneously transmitted in the same direction through the same wire, which is the embryonic form of multiple access systems.
Nowadays, multiple access systems have been intensively developed and are widely used in many areas, e.g., multiple cellular users transmitting to a base station, and local area networks. A common feature of these communication systems is that multiple transmitters simultaneously send signals to a common receiver, and the transmitted signals are superimposed at the receiver, as depicted in Fig. 2.6.
Multiple access communication lies at the heart of wireless communication systems. The first-generation (1G) to fourth-generation (4G) cellular networks have adopted radically different multiple access schemes, including time division multiple access (TDMA), frequency division multiple access (FDMA), code division multiple access (CDMA), and orthogonal frequency division multiple access (OFDMA). In addition, space division multiple access (SDMA) [147, 54] is also used in practical systems. The common gist of these schemes is to allocate orthogonal resources to different transmitters for sending their packets; therefore, all these schemes belong to the class of orthogonal multiple access (OMA). For example, in FDMA, which was widely used in 1G mobile wireless communication networks, the whole bandwidth is divided into several non-overlapping frequency subchannels and each transmitter employs one subchannel to send its voice signal. In other words, each orthogonal spectral resource is allocated to only one user. As a result, the signals transmitted by different users can be easily separated and recovered. While the data detection process is simple for OMA schemes, the spectrum occupation is inefficient and cannot satisfy the requirements of high throughput, high traffic load, and low latency in current communication systems. This has triggered the proposal of non-orthogonal multiple access (NOMA) schemes [148, 149, 150, 151, 152, 153, 154, 155]. By allowing multiple users to share the same resource block, NOMA schemes can increase the spectral efficiency and the user fairness. In particular, NOMA schemes include power-domain NOMA and code-domain NOMA. Power-domain NOMA [148, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167] allocates different power levels to different users and exploits successive interference cancellation (SIC) to retrieve their packets.
For the code-domain NOMA [168, 169], data streams are mapped to multi-dimensional sparse codewords, where each codeword represents a spread transmission layer. As a result, more users can share the same time-frequency resource block and the system efficiency is improved.
Random Multiple Access:
Among multiple access techniques, random multiple access is one approach to dynamic channel sharing for dealing with bursty traffic. In a random multiple access system, when a user has a packet to transmit, it randomly occupies a resource block for its transmission. This implies that the random multiple access system has a flexible and dynamic resource occupation. However, there always exists an inevitable probability that two users occupy the same resource block and their packets collide with each other, referred to as the collision probability. In this case, the collided packets cannot be reliably decoded by the receiver, and the users need to retransmit their packets after some delay. In order to reduce the collision probability of retransmissions, the collided transmitters usually wait a random period of time before retransmitting. The algorithm used by the transmitter to determine the retransmission delay plays an essential role in random access schemes.
The first random multiple access system is the ALOHA system, which was proposed for wireless connections between the computer resources on different islands of the state of Hawaii. In ALOHA, when a user is ready to transmit its packet, it simply transmits the packet and shares the channel with other users in an uncoordinated way, as shown in Fig. 2.7.
If only one user transmits, the user's packet can be successfully received over a noiseless channel and the user receives an acknowledgment via the feedback channel. Otherwise, a packet collision occurs and the user receives no acknowledgment. In Fig. 2.7, the collided parts of the packets are highlighted by shaded areas. The starting times of the packets can be modeled as a Poisson point process with intensity $\lambda$ packets/second. If each packet lasts $T$ seconds, the normalized channel traffic is given by
$$G = \lambda T.$$
The normalized throughput is equal to
$$S = G e^{-2G}.$$
When the traffic equals $G = 1/2$, the maximum value of this normalized throughput is obtained as $S_{\max} = 1/(2e) \approx 0.184$.
Based on the ALOHA scheme, slotted ALOHA (SA) was subsequently proposed. By defining a set of contiguous equal-duration time slots, the users align the start of their packet transmissions with the start of a time slot and transmit packets within the time slot, as depicted in Fig. 2.8.
The use of synchronous transmission reduces the number of collisions and improves the throughput compared to the ALOHA scheme. In particular, the maximum achievable throughput is doubled, i.e., $S_{\max} = 1/e \approx 0.368$. Note that this improvement is based on the assumption that the length of a transmitted packet equals the duration of a time slot.
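The two throughput formulas can be compared directly. A small Python sketch (illustrative; the function names are ours):

```python
import math

def aloha_throughput(G):
    """Pure ALOHA: S = G * exp(-2G). The vulnerable period spans two
    packet durations, hence the factor of 2 in the exponent."""
    return G * math.exp(-2.0 * G)

def slotted_aloha_throughput(G):
    """Slotted ALOHA: S = G * exp(-G). Collisions can only occur with
    packets starting in the same slot."""
    return G * math.exp(-G)

# Maxima: pure ALOHA peaks at G = 1/2, slotted ALOHA at G = 1
s_aloha = aloha_throughput(0.5)       # 1/(2e), about 0.184
s_sa = slotted_aloha_throughput(1.0)  # 1/e,    about 0.368
```

Evaluating the curves at their respective peaks confirms that slot synchronization exactly doubles the maximum throughput.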
The carrier-sense multiple access (CSMA) scheme was proposed as another important random access scheme and has been widely used in many practical wireless networks [170, 171], such as WiFi. In CSMA, users sense the channel to see whether it is occupied before transmitting, in order to avoid as many collisions as possible. If no occupation is sensed, the user transmits; otherwise, it retries after a while. The sensing can be done by measuring the received power and comparing it to a predetermined threshold, which increases the hardware and software complexity. Furthermore, the CSMA with collision avoidance (CSMA-CA) scheme has been proposed to further reduce the collision probability by a judicious use of back-off. The intuition behind this is that the more collisions occur, the more congested the network is; thus, retransmissions need to be less frequent (more spaced apart in time). It is noteworthy that, compared to ALOHA-based schemes, the CSMA scheme may suffer from a large sensing overhead. In this thesis, we mainly focus on ALOHA-based random access to develop our proposed random access schemes in Chapter 4.
In random access systems, particularly the conventional ALOHA and SA schemes, collided packets are directly discarded and the corresponding users retransmit their packets after a while. This mechanism dramatically degrades the system performance and increases the delay. Instead of directly discarding collided packets, exploiting efficient data detection techniques to recover them is therefore essential to improve the system performance and decrease the delay. In the following, we overview two classical data detection methods.
In general wireless networks, data detection can be divided into single-user detection and MUD. In single-user detection, each signal is detected separately, treating the other collided signals as interference. While single-user detection has low complexity, its performance degrades, particularly when the number of collided signals is large or the power difference between collided signals is significant. MUD was then proposed, in which the collided signals are detected simultaneously.
We consider a multiple access system, where $K$ single-antenna users simultaneously transmit data to a common single-antenna receiver and each user's data is one symbol from a finite alphabet set $\mathcal{X}$. The received signal is the sum of the received signals of all users plus noise, given by
$$y = \mathbf{h}^{T} \mathbf{x} + n,$$
where $\mathbf{h} = [h_1, \ldots, h_K]^T$ collects the channels from the $K$ users to the receiver, $\mathbf{x} = [x_1, \ldots, x_K]^T$ collects the transmitted data of the $K$ users, and $n$ is AWGN. Based on the model in Eq. (2.0), the optimal MUD detector, i.e., the maximum likelihood (ML) detector, is given by
$$\hat{\mathbf{x}}_{\mathrm{ML}} = \arg\max_{\mathbf{x} \in \mathcal{X}^K} p(y \mid \mathbf{x}, \mathbf{h}).$$
It can be seen from Eq. (2.0) that, given the channel $\mathbf{h}$, the ML detector chooses as the detected signal the candidate $\mathbf{x}$ that maximizes the likelihood of the received signal. When the prior distribution of the transmitted signal, i.e., $p(\mathbf{x})$, is exploited, the MAP detector [174, 175] is obtained, given by
$$\hat{\mathbf{x}}_{\mathrm{MAP}} = \arg\max_{\mathbf{x} \in \mathcal{X}^K} p(\mathbf{x} \mid y, \mathbf{h}) = \arg\max_{\mathbf{x} \in \mathcal{X}^K} p(y \mid \mathbf{x}, \mathbf{h})\, p(\mathbf{x}).$$
We ignore the marginal likelihood $p(y)$, since it does not depend on $\mathbf{x}$ and has no effect on the optimization. Comparing Eq. (2.0) with (2.0), it can be observed that MAP detection is equivalent to ML detection when the prior distribution $p(\mathbf{x})$ is uniform.
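Since the alphabet is finite, the ML detector can be implemented by brute-force enumeration over all $|\mathcal{X}|^K$ candidate vectors, which makes its exponential complexity explicit. A minimal Python sketch for real-valued BPSK signals (illustrative assumptions: known real channels, a noiseless example; names are hypothetical):

```python
import itertools

def ml_detect(y, h, alphabet):
    """Brute-force ML detection: over all candidate symbol vectors x,
    pick the one minimizing |y - h^T x|^2, which is equivalent to
    maximizing the Gaussian likelihood p(y | x, h)."""
    best, best_metric = None, float("inf")
    for x in itertools.product(alphabet, repeat=len(h)):
        s = sum(hk * xk for hk, xk in zip(h, x))
        metric = abs(y - s) ** 2
        if metric < best_metric:
            best, best_metric = x, metric
    return best

# Two BPSK users with channels h = [1.0, 0.5], transmitted x = (+1, -1):
y = 1.0 * 1 + 0.5 * (-1)   # noiseless received signal, 0.5
x_hat = ml_detect(y, [1.0, 0.5], (-1, 1))
```

For $K$ users the search visits $2^K$ candidates, which motivates the suboptimal approaches discussed next.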
Both the ML detector and the MAP detector jointly detect all collided signals at once. Although such joint detection methods provide excellent performance, they suffer from high computational complexity. Therefore, many suboptimal approaches have been proposed, and SIC is a popular one among them. Instead of jointly detecting all collided signals, the SIC algorithm detects them iteratively. In particular, when a collided signal is successfully detected, it is removed from the superimposed signal, and the next signal is detected from the remaining superimposed signal. Since the previously detected signals have been removed, the subsequent signals can be detected with less interference and a higher success probability. Moreover, in order to further improve the performance of the SIC algorithm, an optimal detection ordering has been proposed, that is, stronger signals are detected earlier.
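The SIC procedure with the stronger-first ordering can be sketched as follows (illustrative Python, assuming real BPSK symbols and known channels; the function name is hypothetical, not the thesis's algorithm):

```python
def sic_detect(y, h, alphabet=(-1, 1)):
    """Successive interference cancellation for BPSK: detect the user
    with the strongest channel first, subtract its reconstructed
    signal, then detect the next user from the residual."""
    order = sorted(range(len(h)), key=lambda k: abs(h[k]), reverse=True)
    residual = y
    x_hat = [0] * len(h)
    for k in order:
        # Nearest-symbol decision, treating remaining users as noise
        x_hat[k] = min(alphabet, key=lambda s: abs(residual - h[k] * s))
        residual -= h[k] * x_hat[k]   # cancel the detected signal
    return x_hat

# Two users, h = [2.0, 0.7], transmitted x = [+1, -1], noiseless:
y = 2.0 * 1 + 0.7 * (-1)   # 1.3
detected = sic_detect(y, [2.0, 0.7])
```

The per-user complexity is linear in the alphabet size, in contrast to the exponential joint search, at the price of possible error propagation when an early decision is wrong.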
2.5 Chapter Summary
In this chapter, we have provided the background knowledge on CS techniques, modern coding techniques, and the PNC technique, which are essential to understand the works given in Chapters 3-5, respectively. Besides, we have presented a brief overview of fundamental concepts of wireless communications, which include the widely used channel models, multiple access systems, random multiple access systems, and the data detection techniques.
Chapter 3 Joint User Activity Identification and Channel Estimation in Random Access Systems for mMTC
In this work, we focus on the user activity identification and channel estimation in random access systems for mMTC. In particular, we first propose a transmission control scheme to enhance the system sparsity and to achieve an improved performance for the user activity identification and channel estimation, especially when a small pilot length and a high reliability on the user identification are required. By employing a step function for the proposed transmission control scheme, we then design an MMSE denoiser and modify the AMP algorithm to jointly identify the active users and estimate their channels. Additionally, we derive the false alarm probability and the missed detection probability to characterize the user identification performance for the proposed scheme. We also analyze the packet delay and the network throughput. Based on the analysis, we optimize the transmission control threshold of the step function to maximize the network throughput. The main contributions of this work are summarized below:
1. We propose a simple transmission control scheme at the transmitter to improve the performance of joint user identification and channel estimation (JUICE). In the scheme, each user decides to transmit a packet or postpone its transmission based on a transmission control function of its instantaneous local CSI, when there is a transmission demand. We design the function, so that the users with better channel gains have a higher probability to transmit their packets, and vice versa. This effectively postpones the transmissions of users with small channel gains and therefore enhances the sparsity of user activity.
2. We modify an AMP algorithm to jointly identify the users’ activity and estimate their channels at the receiver, for the system with a step transmission control function. The channel distribution experienced by the receiver is first derived based on the adopted step transmission control function. Then, we design an MMSE denoiser to modify the AMP algorithm at the receiver.
3. We derive the false alarm and missed detection probabilities of user identification for the proposed scheme by using the state evolution. Based on the user identification performance, we obtain closed-form expressions for the average packet delay and the network throughput. Moreover, we optimize the transmission control function to maximize the network throughput.
4. We verify that our analytical results match well with simulation results. We demonstrate that, compared to the conventional scheme without transmission control, the proposed scheme can significantly improve the system performance for mMTC in terms of the missed detection probability and the normalized mean squared error (NMSE) of channel estimation. In addition, we show that the average packet delay is reduced and the network throughput is enhanced by the proposed scheme.
3.2 System Model
Consider $N$ potential users who may transmit packets to a receiver through a common channel. Both the receiver and the users are equipped with a single antenna. (Note that this is the first work to propose a transmission control scheme for joint user identification and channel estimation with compressed sensing. Therefore, we consider the simple case where the base station has a single antenna, to facilitate the presentation and highlight the insights of the proposed scheme for practical implementations. The case where the base station has multiple antennas [52, 53] will be considered in future work.) We assume that in a time slot each user $n$, $n = 1, \ldots, N$, has a transmission demand with probability $\epsilon_n$ in an i.i.d. manner. When user $n$ has a transmission demand, it transmits its packet with an average transmission probability $p_n$, which is determined by our proposed transmission control scheme. In this work, we define an active user as a user who has a transmission demand and transmits its packet to the receiver. Let $a_n \in \{0, 1\}$ indicate the activity of user $n$. In particular, if $a_n = 1$, user $n$ is active; otherwise, user $n$ is inactive. As a result, we have $\Pr(a_n = 1) = \epsilon_n p_n \triangleq \theta_n$, where $\theta_n$ is called the active probability of user $n$. The set of active users is given by
$$\mathcal{A} = \{ n : a_n = 1 \},$$
and the number of active users is $K = |\mathcal{A}|$.
If user $n$ is active, it transmits a packet, which includes an $L$-length pilot sequence $\mathbf{s}_n$ and the information data, to the receiver within a time slot. Otherwise, it keeps silent. Then, in a specific time slot, the received signal during the pilot transmission is written as
$$\mathbf{y} = \sum_{n=1}^{N} a_n h_n \mathbf{s}_n + \mathbf{w},$$
where $h_n$ denotes the channel fading coefficient from user $n$ to the receiver in the time slot (we assume that the channel fading coefficient remains constant during a time slot in this work), and $x_n = a_n h_n$ captures the joint effect of the user activity and the channel fading of user $n$. Vector