Dealing with Limited Backhaul Capacity in Millimeter Wave Systems: A Deep Reinforcement Learning Approach

Mingjie Feng and Shiwen Mao. M. Feng and S. Mao are with the Department of Electrical and Computer Engineering, Auburn University, Auburn, AL 36849-5201 USA. Email: mzf0022@auburn.edu, smao@ieee.org.
Abstract

Millimeter wave (mmWave) communication is one of the key technologies of fifth generation (5G) wireless systems for achieving the expected 1000x data rate. With the large bandwidth at mmWave band, the link capacity between users and base stations (BS) can be much higher than that of sub-6 GHz wireless systems. Meanwhile, due to the high cost of infrastructure upgrades, it would be difficult for operators to drastically enhance the capacity of the backhaul links between mmWave BSs and the core network. As a result, the data rate provided by the backhaul may not be sufficient to support all mmWave links, and the backhaul connection becomes a new bottleneck that limits the system performance. Moreover, as mmWave channels are subject to random blockage, the data rates of mmWave users vary significantly over time. With limited backhaul capacity and highly dynamic user data rates, how to allocate backhaul resource to each user remains a challenge for mmWave systems. In this article, we present a deep reinforcement learning (DRL) approach to address this challenge. By learning the blockage pattern, the system dynamics can be captured and predicted, resulting in efficient utilization of the backhaul resource. We begin with a discussion of DRL and its applications in wireless systems. We then investigate the problem of backhaul resource allocation and present the DRL-based solution. Finally, we discuss open problems for future research and conclude this article.

I Introduction

With the explosion of smart devices and data-intensive wireless applications, the demand for high data rate services has drastically increased in recent years. To meet such demand, the fifth generation (5G) cellular network is under intensive research in both industry and academia. According to a recent report, 5G networks are expected to support massive connections with a minimum data rate of 100 Mbps and peak data rates higher than 10 Gbps [1]. To achieve this goal, several technologies are considered as candidates for 5G systems, including millimeter-wave (mmWave) communication, massive MIMO, and small cells. By operating at mmWave band with large bandwidth, an mmWave system can significantly enhance the data rate performance to the multi-Gbps level.

As the data rates of the links between an mmWave base station (BS) and its users are greatly enhanced, the capacity of the backhaul link between the BS and the core network becomes relatively limited, posing a new challenge to mmWave cellular networks. Compared to a long term evolution (LTE) system with typical cell throughput less than 150 Mbps [2], the cell throughput of an mmWave system can be greater than 1.5 Gbps [3], which is comparable to the data rate of a current backhaul link. As a result, the backhaul links in mmWave cellular networks are expected to achieve much higher data rates than those in current cellular networks. In current LTE networks, a backhaul link is configured to support the peak cell throughput; however, this may not be feasible in mmWave networks. Due to cost concerns, it is highly unlikely that operators will upgrade the existing infrastructure to drastically enhance the capacity of wired backhauls. In the case of wireless backhaul, e.g., mmWave-based wireless backhaul or free space optical, although the cost can be reduced, the challenge brought by limited backhaul capacity remains: on the one hand, the capacity of a wireless backhaul link is shared by multiple BS-user links; on the other hand, the backhaul links are likely to experience higher propagation loss than the BS-user links.

The tension caused by limited backhaul capacity may be aggravated in the future, as the data rates of mmWave links are expected to keep increasing. For example, high resolution virtual reality (VR) requires a data rate on the order of 1 Gbps and a latency of 1 ms. Based on a prediction in [1], 5G mmWave networks will need to support a 50 Gbps data rate by 2024. In addition, due to the expected dense deployment of mmWave BSs [4], a large number of backhaul connections, which can be wired or wireless, would coexist. As a result, the achievable data rate of each backhaul link would be limited, due to resource sharing, mutual interference, potential congestion, or increased overhead [5]. Therefore, unlike traditional cellular networks (from 1G to 4G), in which the wireless transmission between BS and user is the bottleneck, the backhaul becomes a potential bottleneck in mmWave systems. Although field tests have demonstrated the potential of mmWave cellular systems, such as in [3], these tests are not based on actual cellular networks; thus, the impact of limited backhaul capacity has not been tested and verified, and requires further investigation. The challenge of a possible bottleneck at the backhaul has been observed in the context of ultra-dense small cell deployment [4], in which the large number of small cells puts pressure on the backhaul links. In contrast to the case of network densification, the bottleneck in an mmWave system is caused by the significantly increased data rate of mmWave transmissions.

On the other hand, due to the short wavelength of mmWave communication, the transmissions between the BS and users are subject to random blockage; as a result, the data rate of each user is highly dynamic. In contrast, the data rate of a backhaul link is much more stable, since it is implemented by a wired connection or a line of sight (LOS) wireless connection. Therefore, the BS-user link is characterized by high data rate and unstable connection, while the backhaul link is characterized by relatively limited data rate and stable connection, as shown in Fig. 1. To balance this mismatch and enhance the system performance, efficient backhaul resource allocation to each user is necessary. For example, when a user switches from LOS transmission to non-line-of-sight (NLOS) transmission or outage, less resource should be allocated to this user. However, such adaptive control cannot be implemented by traditional resource allocation schemes due to the varying system dynamics. To perform efficient scheduling, a BS needs to predict possible blockages and estimate the data rate of each user based on the current channel state information (CSI). Then, it makes a decision on the backhaul resource allocation and sends a request to the core network. This way, the backhaul scheduling can be performed in a timely manner that captures the blockage pattern.

Fig. 1: System model of an mmWave system with limited backhaul capacity.

Deep reinforcement learning (DRL) is a new paradigm for intelligent decision-making [6], which can be implemented with frameworks such as TensorFlow and Keras. Combining reinforcement learning with deep neural networks, a DRL agent interacts with the environment and learns the pattern of a Markov decision process (MDP) through training experience. Specifically, a DRL agent employs a deep neural network to approximate the Q-values, which are defined as the discounted cumulative rewards that can be obtained by taking different actions under certain system states; the agent then makes optimal decisions based on the estimated Q-values. Compared to other machine learning approaches, DRL is model-free and does not require data samples from an external supervisor. Due to these benefits, the application of DRL in wireless networks has drawn growing attention recently. In this article, we apply DRL to deal with the challenge of limited backhaul capacity in mmWave networks. By learning the blockage pattern based on the CSI of mmWave users, a BS decides the resource allocation of the backhaul link with the objective of maximizing the sum utility of all users.
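As a concrete (and deliberately tiny) illustration of this Q-value approximation, the following Keras sketch maps a state vector to one Q-value per action. The layer widths and the state/action dimensions are assumptions made for illustration only, not the architecture used later in this article.

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_STATE_FEATURES = 8   # assumed size of the state vector (not from the article)
NUM_ACTIONS = 16         # assumed number of discrete actions

# A small Q-network: input is the system state, output is one Q-value per action.
q_network = tf.keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(NUM_STATE_FEATURES,)),
    layers.Dense(64, activation="relu"),   # hidden layers approximate Q(s, .)
    layers.Dense(NUM_ACTIONS)              # linear output: one Q-value per action
])
q_network.compile(optimizer="adam", loss="mse")
```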

In the remainder of this article, we first introduce the background of DRL and review its recent applications in wireless systems. Then, we present a DRL based approach for backhaul resource allocation. Finally, we discuss open research problems and conclude this article.

II Deep Reinforcement Learning for Wireless Systems

II-A Preliminaries of Deep Reinforcement Learning

A reinforcement learning (RL) agent aims to learn from the environment and take actions that maximize the long-term cumulative reward. The environment is modeled as an MDP with state space $\mathcal{S}$, and the RL agent can take actions from an action space $\mathcal{A}$. The agent interacts with the environment by taking actions, observing the reward and the system state transition, and updating its knowledge of the environment. The objective of an RL algorithm is to find the optimal policy, which determines the strategy of taking actions under each system state. A policy is specified by $\pi: \mathcal{S} \rightarrow \mathcal{A}$. In general, a policy is stochastic to enable exploration over different actions. To find the optimal policy, the key component is to determine the value of each state-action pair, also known as the Q-function, which is defined by

$Q^{\pi}(s,a) = r(s,a) + \gamma \sum_{s' \in \mathcal{S}} P_{ss'}(a) V^{\pi}(s'), \qquad (1)$

where $r(s,a)$ is the instant reward obtained by taking action $a$ under state $s$; $P_{ss'}(a)$ is the transition probability from state $s$ to state $s'$ under action $a$; $\gamma \in [0,1)$ is the discount factor used to balance long-term and short-term rewards; and $R_t = \sum_{k=0}^{\infty} \gamma^{k} r_{t+k}$ is the cumulative reward from time $t$. In (1), $V^{\pi}(s')$ is the state-value function, which indicates the expected reward if the system is in state $s'$ and follows policy $\pi$, given by $V^{\pi}(s') = \mathbb{E}\left[ R_t \mid s_t = s', \pi \right]$. With the Q-functions, an MDP is solved when the optimal policy is found, i.e., $\pi^{*} = \arg\max_{\pi} Q^{\pi}(s,a)$. A common RL technique for solving an MDP is Q-learning, which uses an empirical iterative approach to update the values of the Q-functions (Q-values). In particular, the agent interacts with the environment by taking actions and obtaining rewards, and then updates the Q-values by

$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_t + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right], \qquad (2)$

where $\alpha \in (0,1]$ is the learning rate.
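As an illustration of update (2), here is a minimal tabular Q-learning sketch in Python; the table sizes, learning rate, and discount factor are assumed values.

```python
import numpy as np

NUM_STATES, NUM_ACTIONS = 10, 4       # assumed problem size
Q = np.zeros((NUM_STATES, NUM_ACTIONS))
alpha, gamma = 0.1, 0.9               # learning rate and discount factor

def q_update(s, a, r, s_next):
    """One Q-learning step: move Q(s, a) toward the bootstrapped target."""
    target = r + gamma * np.max(Q[s_next])   # r_t + gamma * max_a' Q(s_{t+1}, a')
    Q[s, a] += alpha * (target - Q[s, a])    # temporal-difference correction
```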

RL has been applied to decision-making problems in mmWave networks, such as in [7]. However, in large-scale systems with large numbers of states and actions, the traditional Q-learning approach becomes infeasible, since a table is required to store all the Q-values. In addition, traditional Q-learning needs to visit and evaluate every state-action pair, resulting in huge complexity and slow convergence. An effective approach to deal with this challenge is to use a neural network (NN) to approximate the Q-values, given by $Q(s, a; \theta)$, where $\theta$ denotes the weights of the NN. By training the NN with sampled data, the NN can map the inputs of state-action pairs to their corresponding Q-values. However, a direct application of an NN in Q-learning may be unstable or even diverge, due to the correlations between training samples and the correlations between Q-values and target values [6].

To reduce such correlations, a DRL approach was proposed in [6], in which a deep neural network (DNN) is used to approximate the Q-values, yielding a deep Q-network (DQN). In the DRL approach presented in [6], the agent first explores the environment by randomly taking actions and stores the experience, $e_t = (s_t, a_t, r_t, s_{t+1})$, in a replay memory. Then, a mechanism called experience replay is used, where data are randomly sampled in minibatches from the replay memory to break the correlations in a sequence of observations. With the sampled experiences and a separate target network, the weights of the DQN are updated by minimizing the loss function given by

$L_i(\theta_i) = \mathbb{E}\left[ \left( r + \gamma \max_{a'} Q(s', a'; \theta_i^{-}) - Q(s, a; \theta_i) \right)^2 \right], \qquad (3)$

where $\theta_i$ and $\theta_i^{-}$ are the weights of the DQN and the target network at iteration $i$, respectively. The loss function (3) is the mean squared error between the DQN output and the target value, which can be minimized through stochastic gradient descent. To reduce the correlation between the DQN and the target network, the target network is updated less frequently. After training the DQN, the agent takes actions based on the estimated Q-values. The general framework of the DRL approach in [6] is shown in Fig. 2.
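A minimal sketch of one such training iteration is given below, reusing the hypothetical q_network from the earlier Keras sketch; the replay buffer contents, batch size, and discount factor are illustrative assumptions.

```python
import random
import numpy as np
import tensorflow as tf

replay_memory = []            # filled during exploration with (s, a, r, s_next)
BATCH_SIZE, GAMMA = 32, 0.9   # assumed minibatch size and discount factor

# The target network is a periodically synchronized copy of the DQN.
target_network = tf.keras.models.clone_model(q_network)
target_network.set_weights(q_network.get_weights())

def train_step():
    batch = random.sample(replay_memory, BATCH_SIZE)   # break sample correlations
    s, a, r, s_next = map(np.array, zip(*batch))
    # Bootstrapped targets come from the less frequently updated target network.
    y = r + GAMMA * target_network.predict(s_next, verbose=0).max(axis=1)
    q = q_network.predict(s, verbose=0)
    q[np.arange(BATCH_SIZE), a] = y                    # update only chosen actions
    q_network.fit(s, q, epochs=1, verbose=0)           # SGD step on MSE loss (3)
```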

Fig. 2: Framework of the DRL approach in [6].

II-B Applications in Wireless Networks

In the design of wireless networks, a major challenge is to solve the formulated combinatorial problems. While exhaustive search is infeasible due to its prohibitive complexity, existing solutions typically rely on network information exchange, which yields a tradeoff between overhead and performance. DRL approaches, in contrast, optimize the network through a trial-and-error process that does not require explicit or instantaneous network information. In particular, a DRL algorithm is model-free and does not require explicit knowledge of the inter-dependent patterns of different nodes. In addition, with extensive offline training, a DRL agent is able to predict the system dynamics, which enables timely scheduling. Thus, compared to traditional approaches, DRL-based schemes have the potential to achieve better performance with reduced online overhead.

Ref. | Application | State | Action | Reward | Learning Objective
[8] | Cache-based interference alignment | Channel power gain | User selection for interference alignment | Network throughput | Channel dynamics & cache availability
[9] | Multi-channel access | Channel state: good/bad | Channel selection of each user | Number of successful transmissions | Channel availability
[10] | Resource management in LTE-Unlicensed | Current channel usage pattern | Channel access probability | Total throughput on selected channels | Channel access patterns of other users
[11] | Handover control in ultra-dense network | Signal qualities from different BSs | BS selection | Weighted sum of data rate & handover energy | Prediction of channel qualities from different BSs
[12] | Traffic allocation in multi-hop network | Throughput & delay of each session | Traffic split ratio | Total utility (weighted sum of throughput & delay) | Traffic pattern learned from experience
[13] | Multi-channel random access | Channel access of other users | Channel access strategy | Number of successful transmissions | Probabilities of successful transmission over multiple channels
TABLE I: Applications of DRL in Different Wireless Networks

Due to these promising prospects, DRL algorithms have recently been applied in several wireless networks to perform intelligent decision making [8, 9, 10, 11, 12, 13]. In [8], DRL is used to estimate the availability of the cache and select a proper set of users for interference alignment. In [9, 13, 10], the problem of multi-channel access is considered, in which each user observes the channel dynamics from history, estimates the possible actions of other users, and then determines its channel access strategy. In [11], DRL is used to predict the QoS that can be obtained after handover to another BS, resulting in an efficient handover process. In [12], continuous actions and states are considered, so that DQN-based DRL cannot be applied; instead, the deep deterministic policy gradient (DDPG) method, which is based on the actor-critic framework, is employed to address the continuous-space control problem. The general idea is to parameterize the policy and derive the optimal parameter values through policy gradient. In [10, 11, 13], the problems are formulated as multi-agent control with interactions among agents; as a result, experience replay for a single agent cannot be applied in such scenarios. To take the inter-agent impact into account, the long short-term memory (LSTM) approach is used to generate target values. The key aspects of the system models in these works are summarized in Table I.

III DRL Based Backhaul Resource Allocation

III-A System Model

We consider an mmWave BS serving $N$ user equipments (UEs) indexed by $i \in \{1, 2, \ldots, N\}$. Each UE has three link states, LOS, NLOS, and outage, denoted by three binary 0-1 variables, $x_i^{\mathrm{L}}(t)$, $x_i^{\mathrm{N}}(t)$, and $x_i^{\mathrm{O}}(t)$. Specifically, $x_i^{\mathrm{L}}(t) = 1$, $x_i^{\mathrm{N}}(t) = 1$, and $x_i^{\mathrm{O}}(t) = 1$ indicate that user $i$ is under the LOS, NLOS, and outage state at time $t$, respectively. The link state of each user follows a Markov process with steady-state probabilities given in [3]. We assume that the BS can estimate the values of $x_i^{\mathrm{L}}(t)$, $x_i^{\mathrm{N}}(t)$, and $x_i^{\mathrm{O}}(t)$ from the statistics of user signals. The BS can also measure the achievable data rate of the mmWave link for user $i$, denoted by $r_i(t)$, via uplink signals.
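To make the link-state model concrete, the following sketch simulates the per-user Markov chain; the transition matrix below is a hypothetical example, not the measured statistics of [3].

```python
import numpy as np

STATES = ["LOS", "NLOS", "OUT"]
# Rows: current state; columns: next state (assumed transition probabilities).
P = np.array([[0.90, 0.08, 0.02],    # from LOS
              [0.10, 0.85, 0.05],    # from NLOS
              [0.05, 0.15, 0.80]])   # from outage

def next_link_state(current, rng=np.random.default_rng()):
    """Sample the next link state of a user given its current state."""
    return rng.choice(STATES, p=P[STATES.index(current)])
```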

We assume the backhaul resource is divided into $M$ orthogonal blocks, where each block can be a period of time or a range of wavelength. Let $C_B$ denote the capacity of each block; the total backhaul capacity is then $C = M C_B$. Let $m_i$ be the number of blocks allocated to user $i$; the backhaul capacity allocated to user $i$ is $m_i C_B$. Then, the actual data rate of user $i$ is $\tilde{r}_i(t) = \min\{r_i(t), m_i C_B\}$.
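As a quick check of these definitions, the sketch below computes the delivered rate of each user under a given allocation; the numbers are illustrative.

```python
def actual_rates(r, m, C_B):
    """r: achievable mmWave rates; m: blocks per user; C_B: capacity per block."""
    return [min(r_i, m_i * C_B) for r_i, m_i in zip(r, m)]

# Example: blocks of 0.5 Gbps; each user gets min(r_i, m_i * C_B).
print(actual_rates(r=[1.2, 0.3, 2.0], m=[3, 1, 4], C_B=0.5))  # [1.2, 0.3, 2.0]
```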

III-B DRL Framework

The proposed DRL-based approach employs a DQN to find the resource allocation strategy under different system states. The key component of the system state is the achievable data rate of each UE, $r_i(t)$. We also include the link state of each UE, i.e., $x_i^{\mathrm{L}}(t)$, $x_i^{\mathrm{N}}(t)$, and $x_i^{\mathrm{O}}(t)$, as part of the system state, since it affects the future data rates. The system state is used as the input of the DQN. The action taken by the agent specifies the backhaul capacity allocation, i.e., the number of blocks $m_i$ allocated to each user $i$. The action space consists of all feasible resource allocations, i.e., all combinations of nonnegative integers that satisfy $\sum_{i=1}^{N} m_i \leq M$, and we index the actions accordingly. To achieve good system performance as well as guarantee fairness among users, we define the utility of each user as a concave function of its data rate. The system reward is then set as the sum of the utilities of all users. The architecture of the DQN is shown in Fig. 3. The input layer includes the link state and achievable data rate information of all UEs; the output layer presents the approximated Q-values; and there are several hidden layers between the input and output layers. To match the capacity of a backhaul resource block, we define $k_i(t) = \lceil r_i(t)/C_B \rceil$ and use it at the input layer of the DQN; $k_i(t)$ indicates the number of resource blocks needed to satisfy the data rate requirement of UE $i$.
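The following sketch makes the action space and reward concrete; the $\log(1+x)$ utility is one concave choice, assumed here to keep the utility finite at zero rate. Full enumeration is only feasible for small $N$ and $M$, which foreshadows the performance-complexity tradeoff discussed in Section IV.

```python
import itertools, math

def action_space(N, M):
    """All nonnegative integer allocations (m_1, ..., m_N) with sum <= M."""
    return [m for m in itertools.product(range(M + 1), repeat=N)
            if sum(m) <= M]

def reward(r, m, C_B):
    """Sum of concave utilities of the delivered rates min(r_i, m_i * C_B)."""
    return sum(math.log(1.0 + min(r_i, m_i * C_B)) for r_i, m_i in zip(r, m))
```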

The training procedure of the DQN is the same as the one in [6], which uses experience replay to reduce the correlation between training samples, as shown in Fig. 2. With the DQN, the agent at the BS first observes the current system state, i.e., the link state variables and achievable data rates $r_i(t)$ of all users. Then, it obtains the Q-values of taking different actions, i.e., selecting different resource allocation strategies. With the Q-values, the agent takes an action according to the $\epsilon$-greedy approach, which selects the action with the maximum Q-value with probability $1-\epsilon$ and randomly selects an action with probability $\epsilon$.
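A minimal sketch of the $\epsilon$-greedy rule over the DQN's Q-value outputs:

```python
import numpy as np

def epsilon_greedy(q_values, epsilon=0.1, rng=np.random.default_rng()):
    """Exploit the max-Q action with prob. 1 - epsilon; explore otherwise."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))   # random exploration
    return int(np.argmax(q_values))               # greedy exploitation
```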

Fig. 3: Architecture of the DQN for backhaul resource allocation.

III-C Illustrative Example

We evaluate the performance of the DRL-based approach with simulations. We consider an mmWave cell with a coverage radius of 100 m, in which users are randomly distributed. Let $d$ be the distance between a user and the BS; the probabilities of a user being in different link states are functions of $d$. The outage, LOS, and NLOS probabilities are $p_{\mathrm{out}}(d) = \max\left(0, 1 - e^{-a_{\mathrm{out}} d + b_{\mathrm{out}}}\right)$, $p_{\mathrm{LOS}}(d) = \left(1 - p_{\mathrm{out}}(d)\right) e^{-a_{\mathrm{LOS}} d}$, and $p_{\mathrm{NLOS}}(d) = 1 - p_{\mathrm{out}}(d) - p_{\mathrm{LOS}}(d)$, respectively [3], which are the steady-state probabilities of the Markov link-state process. We employ the channel model of the 73 GHz band in [3], in which the NLOS links experience higher path loss than the LOS links. The system bandwidth is 1 GHz, and the transmission powers of the BS and UEs are 30 dBm and 20 dBm, respectively. The backhaul capacity is 10 Gbps, and the backhaul resource is divided into 20 resource blocks. There are two hidden layers in the DQN, and we use ReLU as the activation function. We consider two DRL-based schemes, namely DRL-1 and DRL-2, whose rewards are given by the sum of logarithmic utilities, $\sum_i \log \tilde{r}_i$, and the sum of linear utilities, $\sum_i \tilde{r}_i$, respectively. With the logarithmic utility function, the DRL-1 scheme achieves proportional fairness; compared to DRL-1, DRL-2 favors efficiency at the cost of fairness. Two benchmark schemes are considered for comparison: a myopic scheme and an equal allocation scheme. In the myopic scheme, the backhaul resource allocation is based on the current data rates of the mmWave links, without considering future changes of link states.
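For concreteness, the sketch below evaluates the distance-dependent link-state probabilities following the model of [3]; the parameter values are the 73 GHz fit reported in [3] ($1/a_{\mathrm{out}} = 30$ m, $b_{\mathrm{out}} = 5.2$, $1/a_{\mathrm{LOS}} = 67.1$ m) and should be treated as illustrative here.

```python
import math

def link_state_probs(d, a_out=1/30.0, b_out=5.2, a_los=1/67.1):
    """Return (p_outage, p_LOS, p_NLOS) for a BS-UE distance d in meters."""
    p_out = max(0.0, 1.0 - math.exp(-a_out * d + b_out))
    p_los = (1.0 - p_out) * math.exp(-a_los * d)
    return p_out, p_los, 1.0 - p_out - p_los

print(link_state_probs(100.0))   # with this fit, outage onsets beyond ~156 m
```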

Fig. 4 shows the sum rate performance under different numbers of users. As the number of users increases, the sum rates of all schemes grow at reduced rates, showing that the system performance is limited by the backhaul capacity. The proposed DRL-based schemes outperform the other schemes, and the performance gap is enlarged as the number of users increases. This is because the BS is able to predict the variation of link states and allocate the resource based on long-term considerations; the backhaul resource can thus be efficiently utilized, and this advantage becomes significant when the number of users is large. Compared to the DRL-1 scheme, DRL-2 achieves a higher data rate, since its utility and reward functions are set to prioritize efficiency over fairness.

Fig. 4: Sum rate performance of different schemes versus the number of users.

The performance under different values of the blockage coefficient $\beta$ is shown in Fig. 5. The blockage coefficient, defined in [3], indicates the likelihood that a user experiences blockage. Given the same BS-UE distance, a scenario with larger $\beta$ has a higher blockage probability than a scenario with smaller $\beta$. From Fig. 5, we can see that when $\beta$ is small, the performance of the myopic scheme is close to that of the proposed DRL-based schemes, since the fraction of users under blockage is small and the data rates of the mmWave links are relatively stable. However, as $\beta$ increases, the performance gap between the proposed schemes and the myopic scheme widens, showing that DRL-based scheduling is effective in capturing the system dynamics and making intelligent decisions from the perspective of long-term benefit.

Fig. 5: Performance of different schemes versus the blockage coefficient $\beta$.

IV Open Problems and Future Research

IV-A Joint Optimization of Backhaul and mmWave Links

The DRL-based backhaul resource allocation presented in Section III assumes the achievable rate of each user is given. To mitigate the pressure caused by limited backhaul capacity, the design of the BS-user links can also be considered. Resource allocation in LTE systems with limited backhaul capacity has been studied in [14]. In mmWave systems, the data rate of each mmWave link can be adjusted through precoding design. Considering the channel characteristics of different users, a joint design of backhaul resource allocation and precoding can better balance the tension between the limited backhaul capacity and the increasing mmWave data rate demand.

IV-B Dynamic Backhaul Capacity

In our model, we assume a fixed backhaul capacity, which corresponds to the case of a wired backhaul or an LOS mmWave backhaul with a highly stable data rate. However, in a practical system with wireless backhaul, the backhaul data rate would vary over time. Thus, the agent needs to learn these dynamics as well, and a more sophisticated design is required based on the proposed framework.

IV-C Multi-Cell Scenario

IV-C1 Capacity Allocation Among Different Backhauls

The design in Section III is based on a single-cell scenario. From a multi-cell perspective, the capacity allocated to each backhaul link can be optimized to further enhance the system performance. For example, an mmWave BS with heavy traffic and a high aggregate data rate requirement can be assigned more capacity from the core network. However, load balancing and capacity allocation require coordination among BSs, and an efficient design is needed; how to address the resulting scalability issue is another challenge. Capacity allocation among different backhaul links for load balancing has been investigated in other wireless networks, such as heterogeneous cloud radio access networks [15]. Due to the dynamic nature of mmWave communications, the varying capacity requirement of each backhaul link needs to be learned to enable effective scheduling.

IV-C2 Adaptive User Association

To mitigate the pressure of limited backhaul capacity, an effective approach is load balancing. For a BS with a large deficit in backhaul capacity, some of the users served by the BS can be handed over to neighboring BSs to reduce the traffic demand on this BS. Thus, traffic-aware user association is another design factor that can be considered for better system performance.

IV-D Heterogeneous Network

In a heterogeneous network, the traffic of small cells is transmitted to a macrocell via backhaul connections and then forwarded to the core network via the backhaul of the macrocell. The backhaul resource allocation then becomes a two-tier problem, which requires a more complicated design. In addition, similar to the multi-cell case, the capacity allocation among small cell backhaul links and adaptive user association are important design issues that should be jointly considered with backhaul resource allocation.

IV-E Caching-Assisted System

BS caching, e.g., femtocaching, was recently proposed as an effective approach to enhance the data rates of users. By downloading popular contents in advance and storing them at local BSs, the files requested by users can be transmitted directly from the local BS. While the primary goal of caching is to increase the capacity of BS-user links and reduce delay, it is also a good solution to the limited backhaul capacity challenge. When the traffic load of an mmWave BS is low, it can request popular files from the core network; when the traffic load increases, the cached files at the BS can be used to satisfy the demands of some users. As a result, the backhaul capacity is mainly used to satisfy the instantaneous demands of users, thus mitigating the traffic burden on the backhaul. Under the caching architecture, the key design issue is the selection of popular contents. With limited storage, it is necessary to learn the patterns of user preferences and blockage. For example, when a user is under frequent blockage, caching the content requested by this user would lead to under-utilization; however, if that content is also frequently requested by other users, the utilization of the cache would be improved. Thus, the agent needs to learn multiple patterns to derive an efficient caching strategy.

IV-F Performance-Complexity Tradeoff

In the system model of Section III, we assume the backhaul resource is divided into $M$ blocks. To improve resource utilization and enhance the system performance, a larger value of $M$ is desirable. However, a larger $M$ increases the dimensions of both the action and state spaces. Thus, an adaptive selection of $M$ that achieves a good tradeoff between complexity and performance is another design issue.
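To see how quickly the action space grows with $M$, note that the number of feasible allocations $(m_1, \ldots, m_N)$ with $\sum_i m_i \leq M$ is $\binom{M+N}{N}$ by a stars-and-bars argument; a short sketch:

```python
from math import comb

def action_space_size(N, M):
    """Count nonnegative integer N-tuples with sum at most M."""
    return comb(M + N, N)   # stars-and-bars count of feasible allocations

for M in (10, 20, 40):
    print(M, action_space_size(N=5, M=M))   # 3003, 53130, 1221759
```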

V Conclusion

In this article, we address the challenge of limited backhaul capacity in mmWave networks with a DRL-based approach. We first overview the background of DRL and its applications in wireless networks. We then present a DRL-based approach for efficient backhaul resource allocation and demonstrate its effectiveness through an illustrative example. Finally, we discuss open problems for future research.

Acknowledgment

This work was supported in part by the NSF under Grant CNS-1702957 and by the Wireless Engineering Research and Education Center at Auburn University.

References

  • [1] A. Ghosh, “5G mmWave revolution and new radio,” [online] Available: https://5g.ieee.org/images/files/pdf/5GmmWave_Webinar_IEEE_Nokia_09_20_2017_final.pdf.
  • [2] P. Croy, “LTE backhaul requirements: a reality check,” White Paper, [online] Available: www.portals.aviatnetworks.com/exLink.asp?9826636OQ63H29I38061128.
  • [3] M.R. Akdeniz, Y. Liu, M.K. Samimi, S. Sun, S. Rangan, T.S. Rappaport, and E. Erkip, “Millimeter wave channel modeling and cellular capacity evaluation,” IEEE J. Sel. Areas Commun., vol. 32, no. 6, pp. 1164–1179, June 2014.
  • [4] X. Ge, S. Tu, G. Mao, C.-X. Wang, and T. Han, “5G ultra-dense cellular networks,” IEEE Wireless Commun. Mag., vol. 23, no. 1, pp. 72–79, Feb. 2016.
  • [5] M. Feng, S. Mao, and T. Jiang, “Joint frame design, resource allocation and user association for massive MIMO heterogeneous networks with wireless backhaul,” IEEE Trans. Wireless Commun., vol. 17, no. 3, pp. 1937–1950, Mar. 2018.
  • [6] V. Mnih et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, pp. 529–533, Feb. 2015.
  • [7] M. Mezzavilla, S. Goyal, S. Panwar, S. Rangan, and M. Zorzi, “An MDP model for optimal handover decisions in mmWave cellular networks,” in Proc. IEEE EuCNC’16, Athens, Greece, June 2016, pp. 100–105.
  • [8] Y. He, Z. Zhang, F.R. Yu, N. Zhao, H. Yin, V.C.M. Leung, and Y. Zhang, “Deep-reinforcement-learning-based optimization for cache-enabled opportunistic interference alignment wireless networks,” IEEE Trans. Veh. Technol., vol. 66, no. 11, pp. 10433–10445, Nov. 2017.
  • [9] S. Wang, H. Liu, P.H. Gomes, and B. Krishnamachari, “Deep reinforcement learning for dynamic multichannel access in wireless networks,” IEEE Trans. Cognitive Commun. and Netw., vol. 4, no. 2, pp. 257–265, June 2018.
  • [10] U. Challita, L. Dong, and W. Saad, “Proactive resource management for LTE in unlicensed spectrum: A deep learning perspective,” IEEE Trans. Wireless Commun., vol. 17, no. 7, pp. 4674–4689, July 2018.
  • [11] Z. Wang, L. Li, Y. Xu, H. Tian, and S. Cui, “Handover control in wireless systems via asynchronous multi-user deep reinforcement learning,” IEEE Internet of Things J., DOI: 10.1109/JIOT.2018.2848295.
  • [12] Z. Xu et al., “Experience-driven networking: A deep reinforcement learning based approach,” in Proc. IEEE INFOCOM’18, Honolulu, HI, Apr. 2018.
  • [13] O. Naparstek and K. Cohen, “Deep multi-user reinforcement learning for dynamic spectrum access in multichannel wireless networks,” in Proc. IEEE GLOBECOM’17, Singapore, Dec. 2017.
  • [14] D.W.K. Ng, E.S. Lo, and R. Schober, “Energy-efficient resource allocation in multi-cell OFDMA systems with limited backhaul capacity,” IEEE Trans. Wireless Commun., vol. 11, no. 10, pp. 3618–3631, Oct. 2012.
  • [15] C. Ran, S. Wang, and C. Wang, “Balancing backhaul load in heterogeneous cloud radio access networks,” IEEE Wireless Commun. Mag., vol. 22, no. 3, pp. 42–48, June 2015.

Mingjie Feng [S’15] received his Ph.D. degree in Electrical and Computer Engineering from Auburn University, Auburn, AL, USA, in 2018. He received his Bachelor’s and Master’s degrees from School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, China, in 2010 and 2013, respectively. He is currently a postdoctoral research associate in the Department of Electrical and Computer Engineering at the University of Arizona. In 2013, he was a visiting student in the Department of Computer Science and Engineering, Hong Kong University of Science and Technology. His research interests include mmWave communication, massive MIMO, cognitive radio networks, heterogeneous networks, and full-duplex communication. He is a recipient of a Woltosz Fellowship at Auburn University.

Shiwen Mao [S’99-M’04-SM’09-F’19] received his Ph.D. in ECE from Polytechnic University, Brooklyn, NY in 2004. He is the Samuel Ginn Distinguished Professor and Director of the Wireless Engineering Research and Education Center at Auburn University, Auburn, AL. His research interests include wireless networks and multimedia communications. He is a Distinguished Speaker of the IEEE Vehicular Technology Society. He received the 2017 IEEE ComSoc ITC Outstanding Service Award, the 2015 IEEE ComSoc TC-CSR Distinguished Service Award, the 2013 IEEE ComSoc MMTC Outstanding Leadership Award, and the NSF CAREER Award in 2010. He is a co-recipient of the IEEE ComSoc MMTC Best Conference Paper Award in 2018, the Best Demo Award from IEEE SECON 2017, Best Paper Awards from IEEE GLOBECOM 2016 & 2015, IEEE WCNC 2015, and IEEE ICC 2013, and the 2004 IEEE Communications Society Leonard G. Abraham Prize in the Field of Communications Systems. He is an IEEE Fellow.
