Learning-Based Resource Allocation Scheme for TDD-Based CRAN System
Explosive growth in the use of smart wireless devices has necessitated higher data rates and always-on connectivity, which are the main motivators for designing fifth generation (5G) systems. To achieve higher system efficiency, massive antenna deployment with tight coordination is one potential strategy for designing 5G systems, but it carries two types of system overhead. First is the synchronization overhead, which can be reduced by adopting a cloud radio access network (CRAN)-based architecture that separates the baseband processing from the radio access functionality to achieve better system synchronization. Second is the overhead for acquiring channel state information (CSI) of the users in the system, which increases tremendously when instantaneous CSI is used to serve high-mobility users. To serve a large number of users, a CRAN system with a dense deployment of remote radio heads (RRHs) is considered, such that each user has a line-of-sight (LOS) link with the corresponding RRH. Since the trajectory of high-mobility users is predictable, fairly accurate position estimates can be obtained for them and used for resource allocation. The resource allocation depends on various correlated system parameters, and these correlations can be learned using well-known machine learning algorithms. This paper proposes a novel learning-based resource allocation scheme for time division duplex (TDD) based 5G CRAN systems with dense RRH deployment, which uses only the users’ position estimates for resource allocation, thus avoiding the need for CSI acquisition. Also, an overhead model based on the proposed frame structure for 5G TDD is presented, covering the acquisition of both a user’s position and its CSI.
The proposed scheme achieves about 86% of the optimal system performance with an overhead of 2.4%, compared to the traditional CSI-based resource allocation scheme, which has an overhead of about 19%. The proposed scheme is also fairly robust to changes in the propagation environment, with a maximum performance loss of 5% when either the scatterers’ density or the shadowing effect varies. Avoiding CSI acquisition reduces the overall system overhead significantly while still achieving near-optimal system performance; thus, better system efficiency is achieved at reduced cost.
M. Mahboob Ur Rahman
School of Electrical Engineering, KTH Royal Institute of Technology
Radio Network Technology Research
5G, CRAN, TDD, resource allocation, machine learning
Increased usage of smart electronic devices, such as hand-held mobile sets, tablets and laptops, has in recent years resulted in increased demand for higher data rates. Furthermore, the users of such devices demand full-time access to data packet connections, irrespective of their location and surrounding environment. Therefore, future communication systems are expected to have greater system efficiency and better provision of data services to users compared to existing fourth generation (4G) technology [?]. In the last few years, extensive research has been ongoing for the development and standardization of fifth generation (5G) systems, which are expected to fulfil all the aforementioned requirements. Specifically, 5G systems will be able to provide a 1000-fold increase in system capacity [?], as well as almost a 10-fold decrease in latency [?], compared to Long Term Evolution-Advanced (LTE-A) systems. Moreover, they will be able to provide high system efficiency and always-on connectivity, especially to high-mobility users, in Ultra-Dense Network (UDN) deployments [?].
To achieve these goals for 5G, one possible approach is to massively increase the number of antennas (deployed either centrally or de-centrally) [?]. Research from the last few years indicates that significant performance gains can be obtained from massive antenna deployment if transmission from such antennas is tightly coordinated [?], [?]. This tight coordination includes the phase-level synchronization needed for joint transmission, as well as the synchronization needed for coordinated pre-coding. Tight synchronization among such a large number of antennas leads to a coordination overhead, as discussed in [?]. To overcome this problem, the cloud radio access network (CRAN) architecture has been introduced, which is a centralized, cloud-computing based network architecture [?]. In CRAN, the baseband units (the main signal processing units of the network) are connected to the cloud to form a pool of centralized processors, which is then connected to the set of distributed antennas (the radio access units) in the system. Thus, separating the baseband units from the radio access units helps in achieving synchronized coordination between large sets of antennas at a relatively reduced cost. However, besides the synchronization overhead, the overhead for acquiring channel state information (CSI) of the mobile users still exists, and it increases with the number of antennas, the granularity of the CSI to be acquired, and the mobility of the terminal users. To achieve the aforementioned system requirements of 5G, the cost of acquiring CSI has to be minimized, which is the main topic addressed in this paper.
The main purpose of CSI acquisition is to allocate resources such that all users can be served well. The resources include time and/or frequency resources, coding rates, modulation schemes, transmit beamforming, and many more. Much work has been done in the past few years on designing efficient resource allocation schemes specific to certain 5G system characteristics. A non-orthogonal resource allocation scheme, called non-orthogonal multiple access (NOMA) [?], has been investigated in [?] for increasing system throughput and accommodating the maximum number of users by sharing time and frequency resources. The technique of dynamic time-domain duplexing for centralized and decentralized resource allocation in 5G has been studied in [?]. In [?], a radio resource allocation scheme for multi-beam operating systems has been proposed, where the radio resources are allocated to a user based on its channel state and the resources within the beam serving that user. The authors in [?] propose a resource block (RB) allocation algorithm, which exploits the combination of multi-user diversity and users’ CSI for the allocation of RBs, with carrier aggregation and modulation and coding scheme (MCS) selection, for throughput maximization in a 5G LTE-A network.
Some of these resource allocation schemes exploit the users’ CSI, which incurs a significant system overhead for high-mobility users, but this overhead is not considered in those studies. On the other hand, the system’s performance suffers if outdated CSI is used for resource allocation for high-mobility users [?]. One of the network deployment architectures suited for achieving the expected targets of a 5G system is the ultra-dense small cell deployment, in which the users are expected to be in line-of-sight (LOS) with the serving base stations at almost all times. In this case, the users’ position information can be used instead of their CSI [?]. Essentially, the optimal allocation of resources depends on the system parameters (including users’ positions, users’ velocities, the propagation environment, interference in the system, and so on), which are inherently correlated. One way of exploiting these hidden correlations among system parameters for efficient allocation of resources is through machine learning, which is the method proposed in this paper. Previously, various machine learning algorithms have been used for resource allocation in different domains of wireless communication systems; some examples include using support vector machines (SVMs) for power control in CDMA systems [?], prediction of the next cell of a mobile user using supervised learning techniques and CSI [?], and rate adaptation using random forests (a form of supervised machine learning) in vehicular networks [?], among others. The use of machine learning has also been investigated for orthogonal frequency division multiplexing-multiple-input, multiple-output (OFDM-MIMO) based 5G systems [?], [?]. However, for time division duplex (TDD) MIMO systems, resource allocation is done based on instantaneous CSI availability (without using learning, or considering the CSI acquisition overhead), where resource allocation refers to RB assignment [?], rate allocation [?], and beamforming for joint transmission-based coordinated multipoint (CoMP) [?].
This paper discusses the use of machine learning for designing a novel learning-based resource allocation scheme for TDD multi-user MIMO (MU-MIMO) CRAN systems, based on the position estimates of high-mobility users and without using instantaneous CSI. For this purpose, the ‘random forest’ algorithm is used, and resources including the transmit beam, receive filter and packet size are assigned to the intended users based on their position estimates (which can be accurate or inaccurate). The robustness of the proposed resource allocation scheme is tested by using different values in the training and test datasets for the random forest, for example, training the random forest with accurate position estimates and testing its performance on data having inaccurate position estimates of the users. Afterwards, the system goodput is computed for the proposed resource allocation scheme and is compared to the system goodput when the instantaneous CSI of users (with its system overhead) is used for resource allocation. The results show that the proposed scheme achieves about 86% of the system performance obtained by the traditional CSI-based resource allocation scheme. Furthermore, a maximum performance loss of 5% is observed when either the scatterers’ density or the shadowing effect varies, showing the robustness of the proposed scheme to changes in the propagation environment.
To highlight the effectiveness of the proposed scheme, an overhead model based on the frame structure for 5G TDD proposed in [?] is also presented, for both the user’s position and CSI acquisition, and its effect on the system throughput is evaluated. The results show that the proposed scheme, which is based on acquiring the user’s position, incurs a system overhead of only 2.4%, compared to the traditional CSI acquisition-based resource allocation, which has an overhead of 19%. The rest of the paper is structured as follows: Section [?] presents the system model, as well as the details of the overhead model for 5G TDD. Details of the proposed learning-based resource allocation scheme are presented in Section [?], along with a brief background on the ‘random forest’ machine learning algorithm. Simulation results and relevant discussions are presented in Section [?], and Section [?] concludes the paper.
Consider a scenario (Figure [?]), based on the CRAN architecture, where users are served by remote radio heads (RRHs), and all RRHs are connected to an aggregation node (AN). The AN is the computational hub where all baseband processing takes place, whereas the RRHs mainly serve as radio frequency (RF) front ends. Further details of the CRAN-based system model can be found in [?]. This work focuses on the downlink communication of the aforementioned 5G CRAN system model. A TDD-based frame structure is considered for downlink communication, and the operational frequency of the CRAN system is . The users are assumed to be moving at high speeds ( km/h). The RRHs are densely deployed (UDN deployment), such that the users are expected to be in LOS with the serving RRHs. Also, each user is equipped with antennas, each at a height of from the ground, and is served by an RRH having antennas, each at a height of from the ground.
The channel between the RRH and user is characterized by spatial system parameters [such as the angle of arrival (AoA) and the angle of departure (AoD)], frequency-based system parameters (such as the operational frequency of the system and the Doppler shift), as well as time-dependent system parameters (such as changes in the user’s position, changes in the scatterers’ density, the propagation environment, etc.). All RRHs are expected to serve at least one user in the same time-slot, implying that all users will experience interference from other users served by the same RRH, as well as cross-channel interference from the neighboring RRHs. Each RRH is connected to the AN, which acts as the resource allocation unit and holds a set of resources, including transmit beams, receive filters, and packet sizes, to serve a given user. A full-buffer condition is assumed, which means that at each time, at least one user needs to be allocated resources by the AN for being served by the associated RRH . A fixed set of transmit beams is available to serve the users, based on the geometry of the propagation scenario, and it is also used to design a set of receive filters , which are used by the terminal users for reduced-interference reception. The position coordinates of the user are available at the RRH, with some inaccuracy, and are the primary parameters used for the allocation of resources by the AN connected to the RRH.
To simplify the analysis, we assume that each RRH serves only one user in a given time-slot, so that only cross-channel interference exists in the system. Based on all these parameters, the signal-to-interference-and-noise ratio (SINR) for a user , at time , is calculated as follows:
where is the received signal power for user , at time , and is given by:
Here, is the allocated transmit power, denotes the pathloss, is the azimuth AoA of user , and is its azimuth AoD. is the receive filter with the main beam focused in the direction closest to , and is the transmit beamformer with the main beam located in the direction closest to (details regarding beamforming can be found in [?]). is the channel matrix for an instance of time for a given and , and is the noise power. denotes the Hermitian transpose of a vector or matrix.
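For concreteness, the SINR and received-power expressions described above can be written in a standard form consistent with these definitions; all symbols in the following sketch are assumed notation, not taken from the paper:

```latex
% Assumed notation (not fixed by the text): P_k(t) received power,
% I_{j,k}(t) cross-channel interference from RRH j at user k,
% \sigma^2 noise power, P_{tx} transmit power, \beta_k pathloss,
% \mathbf{w}_k receive filter, \mathbf{v}_k transmit beamformer,
% \mathbf{H}_k(t) channel matrix.
\mathrm{SINR}_k(t) = \frac{P_k(t)}{\sum_{j \neq k} I_{j,k}(t) + \sigma^2},
\qquad
P_k(t) = P_{tx}\,\beta_k \bigl|\mathbf{w}_k^{H}\,\mathbf{H}_k(t)\,\mathbf{v}_k\bigr|^2 .
```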
The SINR computed for a given combination of and , with the corresponding channel matrix , is used to compute the transport capacity for user by the following formula:
Here, is the symbol length, which is the product of the transmission time interval (TTI) and the bandwidth of the system. For determining transmission success or failure, the error model based on Shannon’s capacity (Eq. [?]) is used; if the user’s packet size , then the packet is successfully transmitted, otherwise the packet transmission for user fails.
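Under the same assumed notation, with a symbol length and a per-user packet size (symbols assumed for illustration), the transport capacity and the Shannon-capacity-based error model read:

```latex
% S: symbol length (TTI x bandwidth); b_k: packet size for user k (assumed symbols).
C_k(t) = S \log_2\!\bigl(1 + \mathrm{SINR}_k(t)\bigr),
\qquad
\text{packet success} \iff b_k \le C_k(t).
```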
The frame structure proposed in [?] for 5G TDD-based systems is considered for formulating an overhead model. The total frame length is 0.2 ms, and the frame consists of 42 OFDM symbols ( = 42) and about 833 sub-carriers ( = 833). The position information of the users present in the system can be acquired using narrow-band pilots (also called beacons), typically spanning the first symbol of the frame. The CSI for the users can be obtained using 4 full-band pilots, placed at the beginning of a frame just after the positioning beacons. The adjacent CSI-sensing pilots are scheduled based on the cyclic-prefix compensation distance, as explained in [?], to avoid inter-carrier interference. Based on these parameters, the overhead for position acquisition per user can be calculated as:
Here, is the number of OFDM symbols used for position estimation of user , and denotes the number of sub-carriers used in the positioning beacon. Similarly, for CSI acquisition per user, the overhead can be computed as:
where and denote the number of OFDM symbols and the number of sub-carriers, used for CSI acquisition of user , respectively. The system overhead for position, or CSI, acquisition related resource allocation scheme can be computed by multiplying the corresponding overhead with the number of users for which the position information, or CSI, is acquired.
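As a numerical sketch of the overhead model: the per-user overhead is the fraction of the frame’s time-frequency grid occupied by that user’s pilots. The frame dimensions below come from the text (42 OFDM symbols, 833 sub-carriers); the positioning-beacon width of 20 sub-carriers is an illustrative assumption, not a value from the paper.

```python
# Per-user pilot overhead for the 5G TDD frame described in the text.
FRAME_SYMBOLS = 42       # OFDM symbols per frame (from the text)
FRAME_SUBCARRIERS = 833  # sub-carriers per frame (from the text)

def pilot_overhead(n_symbols: int, n_subcarriers: int) -> float:
    """Fraction of the frame's resource grid used by one user's pilots."""
    return (n_symbols * n_subcarriers) / (FRAME_SYMBOLS * FRAME_SUBCARRIERS)

# Positioning beacon: 1 OFDM symbol, narrow-band (20 sub-carriers assumed).
pos_overhead = pilot_overhead(1, 20)

# CSI pilots: 4 OFDM symbols spanning the full band.
csi_overhead = pilot_overhead(4, FRAME_SUBCARRIERS)

print(f"position: {pos_overhead:.2%}, CSI: {csi_overhead:.2%}")
```

The system-level overhead then follows by multiplying the per-user value by the number of users whose position (or CSI) is acquired, which is how the 2.4% and 19% figures quoted earlier arise for a given user population.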
In the considered CRAN system, the task of the AN is to allocate the resources efficiently for each RRH-user link, per TTI, such that the system’s sum-throughput is maximized. For this purpose, it needs to acquire the CSI of all users in the system on a per-TTI basis, which incurs a large system overhead. The task of efficient resource allocation becomes particularly challenging for high-mobility users, for whom CSI acquisition is crucial to achieving the maximum sum-throughput of the system.
One way of avoiding the CSI acquisition overhead is to use the position information of the high-speed users; since LOS exists, the resource allocation for users can be done based on their position information rather than their instantaneous CSI. However, this position information cannot be used directly for efficient resource allocation; rather, the hidden correlations among the position estimates and the other system parameters have to be exploited together for this purpose. We propose to use machine learning to accomplish this task. Specifically, we use machine learning to design a resource allocation scheme for the aforementioned system, based purely on the users’ position information, such that CSI acquisition is not needed at all. We investigate the performance of this learning-based resource allocation scheme in comparison to the conventional resource allocation technique, where CSI acquisition is needed, also taking into account the system overhead. Furthermore, we test the robustness of the learning-based resource allocation scheme when the position information for the users in the system is inaccurate. In the next section, we discuss the details of the design of the learning-based resource allocation scheme, along with some background on machine learning.
Learning the different correlated parameters is accomplished using machine learning, which is defined as “the capability of a computer program or a machine to develop new knowledge or skill from the existing and non-existing data samples to optimize some performance criteria” [?]. The ‘random forest’ algorithm [?] is the learning technique used in this work for learning the system parameters and predicting the probability of successful or failed transmission of data packets from a given RRH to the respective user(s). We first provide some background on the random forest algorithm, followed by details of how this algorithm can be used for designing the learning-based resource allocation scheme.
The random forest algorithm is a supervised learning technique, which consists of a number of random decision trees (hence the term ‘forest’) that are built, using the statistical information of the supplied dataset, to develop a hypothesis for predicting the outcome of a future instance [?], [?]. Each instance of the dataset consists of two parts: a set of data characteristics , called features, and the relevant output variable , which together form the input feature vector . To learn the information in the data features (the ‘training’ process), the random forest algorithm constructs binary random decision trees, each having a maximum depth . Each tree has one root node and several interior and leaf nodes. Figure [?] shows an example of a binary random decision tree, having some interior and leaf nodes. The classification features of the decision variable are taken from the input feature vector, and are represented by the root and interior nodes of the random decision tree. Each root and interior node is constructed by a decision threshold based on a (randomly selected) feature subset from the set of given input features. Thus, each tree considers a different subset of features for the decision threshold at each of its nodes. The output variable is represented by the leaf nodes of a decision tree. The instance on which the prediction has to be made is traversed through all decision trees in the forest, to obtain output variables, called votes. The output variable is predicted by aggregating all the votes and selecting the majority class (category or value of the output variable) from among those votes.
To build each tree in the training phase, a training dataset , having the same size as the training data , is constructed by using training samples which are randomly chosen, with replacement, from . This random selection with replacement causes some instances of the training data to be used repeatedly, while others are not used at all. The latter instances are collectively known as out-of-the-bag (OOB) examples and represent roughly one-third of the total training data [?]. A random subset of input variables is used for every node of a decision tree built from the training examples. A decision threshold is determined for the selected input variable, based on which the left or right traversal path in the subsequent levels of the random decision tree is chosen. It is critical to select the input variable at a node, as well as the decision threshold, such that the purity of the subsequent levels of the random decision tree is maximized. Purity measures the extent to which the resulting child node is made up of cases having the same output variable [?]. Thus, an ideal threshold at any node would divide the data in such a way that the resulting child nodes would give distinct values of the output variable.
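The bootstrap step described above can be sketched as follows; in expectation the OOB fraction tends to (1 − 1/n)^n → 1/e ≈ 37% for large training sets:

```python
import random

def bootstrap_indices(n: int, rng: random.Random):
    """Draw n sample indices with replacement from range(n);
    return (chosen indices, out-of-bag index set)."""
    chosen = [rng.randrange(n) for _ in range(n)]
    oob = set(range(n)) - set(chosen)
    return chosen, oob

rng = random.Random(0)
n = 10_000
_, oob = bootstrap_indices(n, rng)
print(f"OOB fraction: {len(oob) / n:.3f}")  # close to 1/e ≈ 0.368
```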
The generated random forest has two types of qualitative measures. First is the prediction accuracy, which measures how accurately the random forest predicts the output variable for a given dataset. If the prediction accuracy is evaluated on the training data, it is called the training accuracy, while the same measure evaluated on a newly collected dataset is called the test accuracy. The second qualitative measure is the importance of an input variable, which indicates how important a particular input variable is in determining the desired output variable. In general, the random forest algorithm can cater for missing input data variables, is robust to noisy data, and is computationally efficient [?]. Also, it does not suffer from the problem of over-fitting, since it uses only a subset of the training data for building each of the random decision trees that make up the forest. Due to all these properties, the random forest algorithm has previously been used in designing different techniques for optimal system performance. Some examples include using the random forest algorithm for intrusion detection on mobile devices [?], and designing a rate adaptation scheme in vehicular networks using the random forest [?].
The main aim of the learning-based resource allocation scheme is to use only the position estimates of the users and learn their relationship with the different system parameters and resources, such that the system resources are efficiently utilized without incurring excessive overhead. We first explain the structure of the learning-based resource allocation scheme, and then present its working details.
The structure of the learning-based resource allocation scheme can be divided into three parts: the pre-processing unit, the machine learning unit, and the scheduler.
The Pre-Processing Unit The pre-processing unit plays an important role in the training of the machine learning unit by helping to design the training dataset. The training dataset is constructed off-line, and hence the CSI as well as the position information of the users are available at the AN, along with the information about the resources to be allocated. In the (off-line phase of the) pre-processing unit, the optimal transport capacity for each user is computed using its CSI (considering all the other users in the system), based on the maximization of the system’s sum transport capacity. Then, the optimal transmit beam and receive filter combinations for a given user position are identified, for which the optimal transport capacity is obtained (i.e., by exhaustive search), and are used as input features for the training dataset of the machine learning unit. Based on the values of the optimal transport capacity for the overall system, a set of packet sizes is designed, consisting of 5 discrete values, and the optimal transport capacity for each user is checked against those packet sizes (according to the Shannon-capacity-based error model) to generate the output variables, 0 or 1, for the training dataset. Thus, the user’s ID , its position information , the optimal transmit beam , the optimal receive filter , the packet size , and the output variable (0 or 1) form the input feature vector, and a set of those input feature vectors makes up the training dataset to be used by the machine learning unit.
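A minimal sketch of this off-line labeling step, with hypothetical capacity values and packet sizes (none of these numbers come from the paper):

```python
def best_beam_filter(capacity):
    """Exhaustive search: return the (transmit beam, receive filter) pair
    with the maximum transport capacity. `capacity` maps
    (beam, filter) -> capacity, assumed precomputed off-line from CSI."""
    return max(capacity, key=capacity.get)

def label_packet_sizes(c_opt, packet_sizes):
    """Shannon-capacity error model: a packet of size b gets label 1
    (success) iff b <= the optimal transport capacity c_opt."""
    return {b: int(b <= c_opt) for b in packet_sizes}

# Hypothetical example: 2 beams x 2 filters, 5 discrete packet sizes.
cap = {(0, 0): 3.1, (0, 1): 4.8, (1, 0): 2.2, (1, 1): 1.7}
beam, filt = best_beam_filter(cap)            # -> (0, 1)
labels = label_packet_sizes(cap[(beam, filt)], [1, 2, 4, 8, 16])
print(beam, filt, labels)
```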
The Machine Learning Unit The training of the machine learning unit is done off-line, where the training dataset is used to learn the input features, i.e., the user’s ID , its position information , the optimal transmit beam , the optimal receive filter , and the packet size . The learning essentially constructs the random forest, with parameters such as the number of decision trees , the tree depth and the number of random features considered for the split at each tree node chosen so as to optimize the training accuracy of the random forest. Here, it should be noted that the performance of the random forest is affected by ‘bias’ in the output variable distribution of the overall training dataset; i.e., the training accuracy suffers if, for example, a much larger number () of input feature vectors have class ‘0’ as the output variable than class ‘1’, or vice versa. This bias in the class distribution is taken care of by the pre-processing unit, such that the training dataset has a balanced number of input feature vectors for both classes ‘0’ and ‘1’ as the output variable. Once an optimal training accuracy is achieved, the machine learning unit is ready to be used for testing new dataset(s) generated at run-time in a realistic system.
The Scheduler In a realistic system, the scheduler is the main component responsible for forwarding the information about the allocated resources for all users in the system. The proposed scheme includes a scheduler as the last unit, whose main task is to forward the information about the allocated resources (obtained from the machine learning unit) for each user in the system to the corresponding RRH. This scheduler is, however, sensitive to the occurrence of false positives in the output of the machine learning unit. Technically, a false positive occurs when an input feature vector realistically has ‘0’ as its output variable, but the learning algorithm wrongly predicts the output variable to be ‘1’ for that input feature vector. In the proposed scheme, a false positive makes the algorithm more error-prone by suggesting a packet size larger than the highest packet size that can realistically serve the user. In this case, the scheduler backs off the packet size for transmission, and transmits a packet size chosen randomly from the set of packet sizes smaller than , i.e., the packet size for which the false-positive detection occurred. We call this a ‘random back-off scheduler’, which operates in combination with the output predicted by the random forest, and thus completes the design of the learning-based resource allocation scheme. A false-positive occurrence is identified from the output variables available in the training dataset, and based on this, the scheduler operates more conservatively for those input feature vectors. In this way, the resource allocation scheme ensures that erroneous operation of the machine learning unit does not significantly impact the system’s performance.
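The random back-off rule can be sketched as follows; the discrete packet-size values are illustrative assumptions, not values from the paper:

```python
import random

def backoff_packet_size(predicted, known_false_positive, packet_sizes, rng=random):
    """If the ML unit's suggested packet size is a known false positive,
    back off to a randomly chosen smaller size; otherwise keep the prediction.
    `packet_sizes` is the set of 5 discrete sizes from the pre-processing unit."""
    if not known_false_positive:
        return predicted
    smaller = [b for b in packet_sizes if b < predicted]
    return rng.choice(smaller) if smaller else min(packet_sizes)

sizes = [1, 2, 4, 8, 16]  # assumed discrete packet sizes
assert backoff_packet_size(8, False, sizes) == 8
assert backoff_packet_size(8, True, sizes, random.Random(1)) in {1, 2, 4}
```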
In a realistic system, the position estimate of the user is acquired by the corresponding RRH and reported back to the AN. This position estimate is used by the pre-processing unit, where it is compared against the user position information available in the training dataset, and the position information in the training data that gives the minimum value of is chosen to construct the input feature vector for the test dataset. Once the closest position estimate is obtained, it is combined with the corresponding optimal transmit beam , receive filter , and the 5 discrete packet sizes to form a set of input feature vectors for the different packet sizes corresponding to the position estimate .
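The nearest-position lookup amounts to a minimum-distance search (Euclidean metric assumed; the coordinates are hypothetical):

```python
import math

def closest_training_position(estimate, training_positions):
    """Return the training-set position nearest (Euclidean distance) to the
    reported position estimate; its beam/filter entries are then used to
    build the test feature vectors."""
    return min(training_positions, key=lambda p: math.dist(p, estimate))

train = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0)]
print(closest_training_position((9.0, 1.2), train))  # -> (10.0, 0.0)
```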
This set of input feature vectors is then passed to the machine learning unit, where each input feature vector is parsed through the random forest to obtain the votes for the output variable predicted by each decision tree in the forest. In essence, the votes are obtained for successful transmission (i.e., for ) of a packet size and denote the packet success rate (PSR) for . This PSR also indicates the tendency of the machine learning unit’s predicted output variable; if the PSR , then the predicted output variable is ‘1’, otherwise it is ‘0’. This predicted output variable is tested for false-positive detection by the scheduler, by comparing it to the output variable for the corresponding input feature vector in the training dataset; the scheduler then either retains the packet size , if the prediction is correct, or chooses a random smaller packet size in case of a false positive, to give the optimal packet size predicted for transmission by the learning-based resource allocation scheme. The PSR corresponding to is used to compute the system goodput predicted by the learning-based resource allocation scheme, as follows:
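The vote aggregation and packet-size selection can be sketched as follows; since the goodput expression itself is not reproduced here, the criterion PSR × packet size used below is an assumed form, and the vote counts are hypothetical:

```python
def packet_success_rate(votes):
    """Fraction of trees voting '1' (successful transmission)."""
    return sum(votes) / len(votes)

def predict_and_goodput(votes_per_size):
    """For each candidate packet size, treat PSR >= 0.5 as a predicted
    success, then pick the size maximizing expected goodput PSR * size
    (assumed goodput form)."""
    best_size, best_goodput = None, 0.0
    for size, votes in votes_per_size.items():
        psr = packet_success_rate(votes)
        if psr >= 0.5:
            goodput = psr * size
            if goodput > best_goodput:
                best_size, best_goodput = size, goodput
    return best_size, best_goodput

# Hypothetical votes from a 10-tree forest for three candidate packet sizes.
votes = {2: [1] * 10, 4: [1] * 8 + [0] * 2, 8: [1] * 3 + [0] * 7}
print(predict_and_goodput(votes))  # size 8 is rejected (PSR 0.3 < 0.5)
```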
The optimal transmit beam , receive filter and packet size predicted by the learning-based resource allocation scheme are taken from the set of instances, over all users, that achieves the maximum sum-goodput. Figure [?] shows the different steps of the proposed learning-based resource allocation scheme. Overall, the random forest algorithm is expected to learn the assignment of the optimal packet size, transmit beam and receive filter for each user, in order to maximize the system goodput, using only the users’ position information and without knowing their CSI. In reality, the position estimates of high-mobility users can be acquired with a certain precision using an extended Kalman filter (EKF), along with the direction of arrival (DoA) and time of arrival (ToA) estimates of those users [?]. This means that it is not always possible to have accurate position information for the users in the system. Since the random forest algorithm is robust to noisy data, the learning-based resource allocation scheme is expected to perform well when noisy position estimates of the users are available for either the training or test datasets (or both). Once the proposed scheme suggests the resources , , and for serving a given user , this information is passed on to the corresponding RRH , which further sends a pilot signal to the user to inform it about the receive filter , suggested by the proposed scheme, for reduced-interference reception. The performance of the proposed resource allocation scheme is tested through system-level simulations, the details of which are given in the next section, along with the results and related discussions.
In this section, we first compare the performance of the proposed scheme to that of the traditional resource allocation scheme based on the users' CSI. We also investigate the robustness of the proposed scheme when only inaccurate position estimates are available in the test dataset, or when the propagation environment parameters differ between the training and test datasets (specifically, a change in the scatterers' density or in the shadowing characteristics). Afterwards, we present the effect of overhead, for both the proposed and the traditional schemes, on the theoretical system throughput of a 5G CRAN system.
The performance of the proposed scheme described in section [?] is evaluated through realistic simulations using a discrete event simulator (DES) called Horizon [?]. The propagation scenario, shown in figure [?], is implemented in Horizon to simulate a CRAN-based multi-user, multi-RRH communication system, as presented in section [?]. Based on the propagation scenario, a fixed set of transmit beams is designed as follows: the transmit beams are formed using geometric beamforming, where consecutive beams are separated by a 3° angular resolution. The receive filters are, essentially, geometric beams formed using the multiple antennas at the user end, and are designed in the same way as the transmit beams but with an angular resolution of 12°. The parametrization for the system simulations is given in table [?]. The channel coefficients for downlink communication are extracted from the simulator for each TTI, i.e., every 1 ms. The ray-tracing-based METIS channel model [?] for the Madrid grid is implemented in Horizon to generate the channel coefficients; details about the ray-tracer-based channel model can be found in [?].
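The grid-of-beams construction described above can be sketched as follows; the ±60° sector, the half-wavelength antenna spacing of a uniform linear array, and the antenna counts (8 at the RRH, 2 at the user) are assumptions made for this illustration, not values taken from the paper.

```python
import numpy as np

def steering_vector(n_ant, angle_deg, spacing=0.5):
    """Unit-norm ULA steering vector for a given beam angle
    (antenna spacing in wavelengths; half-wavelength assumed)."""
    k = np.arange(n_ant)
    phase = 2j * np.pi * spacing * k * np.sin(np.deg2rad(angle_deg))
    return np.exp(phase) / np.sqrt(n_ant)

# Fixed geometric codebooks: 3-degree resolution at the RRH,
# 12-degree resolution at the user, over an assumed +/-60 degree sector.
tx_beams = [steering_vector(8, a) for a in range(-60, 61, 3)]
rx_filters = [steering_vector(2, a) for a in range(-60, 61, 12)]
```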
Parameter            Value
Carrier frequency    3.5 GHz
Bandwidth            5 MHz
[?]                  8
[?]                  2
[?]                  10 m
[?]                  1.5 m
Transmit power       1 mW
TTI duration         1 ms
User speed           30 m/s
Table: Parameter Settings for the Simulator
After computing the channel matrices, the training dataset is generated using the procedure explained in section [?]. As mentioned earlier, the training data is used to build random forests for various parameter settings, from which a suitable random forest is chosen for further processing. The random forest is constructed using the random forest implementation in the WEKA software [?]. Table [?] shows the training accuracy obtained for different parameter settings of the random forest algorithm. Based on these results, one configuration was chosen for further processing, with the number of random features used for the split at each node of a decision tree set as in [?]. Here, it should be noted that selecting the random forest with the highest training accuracy is not always the best choice; having a larger number of trees for a small set of input features increases the correlation among the trees (thus reducing the robustness of the random forest to noisy data), and also increases the computation time for constructing the random forest.
Number of Trees   Tree Depth   Training Accuracy (%)
5                 3            83.3
10                3            86.0
10                4            86.9
20                3            86.65
20                4            87.2
Table: Training Accuracy of Random Forest for Different Parameter Settings
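The parameter sweep described above can be sketched as follows; this uses scikit-learn rather than the WEKA implementation from the paper, with a toy dataset standing in for the real one, so the accuracy values it produces are not the ones reported in the table.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] - X[:, 2] > 0).astype(int)       # toy stand-in for the real dataset

results = {}
for n_trees in (5, 10, 20):
    for depth in (3, 4):
        rf = RandomForestClassifier(n_estimators=n_trees, max_depth=depth,
                                    max_features="sqrt", random_state=0)
        rf.fit(X, y)
        results[(n_trees, depth)] = rf.score(X, y)   # training accuracy

# The highest-accuracy setting is not automatically chosen; a smaller forest
# with comparable accuracy reduces tree correlation and construction time.
best = max(results, key=results.get)
```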
A total of 100 user positions (for each user) are chosen randomly from the available set of 1000 positions (per user) in the overall simulation scenario to generate the training and test datasets, each containing 0.25 million samples. The output from the random forest is combined with the scheduler, as explained in section [?], and the system goodput (in bits/TTI) is computed. The first simulation is performed with a scattering objects' density of 0.01/m², i.e., 1 scatterer per 10 m × 10 m area. The performance of the learning-based resource allocation scheme is compared against the following schemes:
Random packet scheduler: Schedules a randomly selected packet size for each user using the optimal selection strategy for transmit beam and receive filter.
Random packet scheduler for geometric beam and filter assignment: Schedules a randomly selected packet size for each user using the location-based assignment of transmit beam and receive filter.
Optimal packet scheduler (the Genie): Schedules the optimal packet size for each user based on the optimal transport capacity for each user, obtained through the instantaneous CSI of the users.
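The gap between the Genie and a random packet scheduler can be illustrated with a toy Monte Carlo sketch; the packet sizes and the per-TTI transport capacities below are hypothetical, and success is modeled simply as the packet fitting within the capacity.

```python
import numpy as np

rng = np.random.default_rng(2)
packet_sizes = np.array([128, 256, 512, 1024])   # hypothetical sizes (bits)
capacity = rng.choice(packet_sizes, size=10000)  # per-TTI transport capacity

# Genie: knows the capacity and always transmits the largest supported size.
genie_goodput = capacity.mean()

# Random packet scheduler: a packet succeeds only if it fits the capacity.
picks = rng.choice(packet_sizes, size=capacity.size)
random_goodput = np.where(picks <= capacity, picks, 0).mean()

relative = random_goodput / genie_goodput        # fraction of optimal goodput
```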
Figure [?] shows the system goodput obtained by each of the resource allocation schemes when perfect information about each user's position is available. The results are shown as the system goodput relative to the one obtained by the Genie. It can be seen that the learning-based resource allocation scheme performs close to the Genie, achieving about 86% of the optimal system performance (i.e., the Genie without system overhead). This matches the training accuracy of the random forest (86%), where the residual performance loss is due to the inequitable distribution of output variables in the training dataset. The random packet scheduler performs considerably worse, which highlights the importance of learning the system parameters for optimal resource allocation. The geometric assignment-based random scheduler also shows poor performance (only 6% of the optimal goodput), because geometric allocation of the transmit beam and receive filter is not the optimal strategy for serving a user in an interference-limited system.
In reality, perfect position information is not always available; rather, there is some inaccuracy in the reported coordinates of a user's position. Figure [?] shows the relative system goodput for all resource allocation schemes when the user position has an inaccuracy variance of 0.4, 0.6, or 1.0 m. It can be seen that the position inaccuracy degrades the system performance of all sub-optimal resource allocation schemes, because the optimal transmit beam and receive filter combination is no longer valid for the inaccurate position information. Despite this, the learning-based allocation scheme achieves more than 72% of the optimal system performance (for the highest inaccuracy variance), which is 4 times better than any of the other comparison schemes. For a fair performance comparison, we also trained random forests on each of the inaccurate-position datasets and tested them against the corresponding inaccurate-position test data. The results show that no significant improvement is obtained when the learning itself is performed on inaccurate position information; the random forest trained on accurate user positions also operates effectively for any case of inaccurate user position information.
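The robustness test described above can be sketched by perturbing the reported coordinates with zero-mean Gaussian noise at each considered variance; the user grid below is hypothetical and only the noise-injection step mirrors the paper's evaluation.

```python
import numpy as np

rng = np.random.default_rng(3)
true_positions = rng.uniform(0, 100, size=(100, 2))   # hypothetical user grid (m)

def noisy_positions(pos, variance):
    """Perturb reported coordinates with zero-mean Gaussian inaccuracy."""
    return pos + rng.normal(scale=np.sqrt(variance), size=pos.shape)

# One noisy test dataset per considered inaccuracy variance.
test_sets = {v: noisy_positions(true_positions, v) for v in (0.4, 0.6, 1.0)}
```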
To observe the effect of randomness in the system parameters on the performance of the different resource allocation schemes, the scatterers' density is varied. Figure [?] shows the relative system goodput of the learning-based resource allocation scheme for different values of the scattering objects' density when perfect user position information is available. The random forest in the machine learning unit is trained for a scatterers' density of 0.01/m² (the same as used for the previous simulations), and is tested on datasets generated using different values of the scatterers' density. The results show that the relative system goodput is not affected severely when the learning-based resource allocation scheme is used under a changing scatterers' density in the propagation environment. The lowest relative goodput with respect to the Genie is 83% (for 10 scatterers per 100 m² area), when datasets generated for different densities of scattering objects are tested against the random forest built for a fixed scatterers' density. Realistically, the system goodput is not expected to be affected severely by a change in the scatterers' density, since a LOS link exists at all times between the users and their corresponding RRHs. With this in mind, the learning-based resource allocation scheme is seen to be robust to a changing scatterers' density, with the maximum performance loss compared to the Genie varying by less than 5% as the number of scatterers per 100 m² of area is increased.
Another system parameter that can vary randomly in a realistic propagation scenario is the shadowing effect. The robustness of the proposed learning-based resource allocation scheme is checked by varying the height of the shadowing object when perfect user position information is available, using the same evaluation methodology as for the varying scatterers' density. Figure [?] shows the performance of the proposed scheme, relative to the optimal system performance, when the height of the shadowing object is increased from 1.5 m to 5.0 m. Here, again, the performance loss does not vary significantly; a maximum loss of about 5% is observed when the shadowing effect is strengthened by increasing the height of the shadowing object. Since a LOS link exists at all times between the users and their corresponding RRHs, the channel coefficients do not vary significantly with the shadowing effect, which in turn leaves the transport capacity per user, and hence the overall sum-goodput of the system, largely unaffected.
After comparing the performance of the proposed learning-based scheme with the traditional CSI-based scheme for resource allocation, we now consider the effect of overhead on the overall system performance for 5G CRAN. Figure [?] shows the theoretical system throughput under the parameter settings for a TDD-based 5G system. It can be seen that the learning-based resource allocation scheme, for the simulation scenario of figure [?] in which 4 RRHs serve 1 user each after acquiring their position information, does not suffer from the inclusion of the system overhead. In contrast, the theoretical system throughput for the same scenario using the traditional CSI-based resource allocation scheme is reduced by almost 19% once the CSI acquisition overhead is taken into account. In a realistic scenario, more users lie close to the user served by an RRH, so the RRH has to acquire the CSI of all those users in order to serve the intended user optimally; in this case, the CSI acquisition overhead increases further, to about 25%. The overhead in each case is computed based on the assignment of CSI acquisition pilots according to the cyclic-prefix compensation distance, as described in section [?]. Overall, the overhead for CSI acquisition increases with the number of users, thus decreasing the effective system throughput, whereas the position acquisition overhead does not impact the effective system throughput, since narrow-band beacons are sufficient for obtaining the position information of the users to be served by a given RRH.
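The overhead comparison above reduces to scaling the raw throughput by the fraction of resources left after signalling; a minimal sketch, where the raw throughput value is an assumption and only the 2.4% and 19% overhead figures come from the text:

```python
# Effective throughput after subtracting the signalling overhead fraction.
def effective_throughput(raw_tput_mbps, overhead_fraction):
    return raw_tput_mbps * (1.0 - overhead_fraction)

raw = 100.0                                          # assumed raw throughput (Mbit/s)
learning_based = effective_throughput(raw, 0.024)    # 2.4% position overhead
csi_based = effective_throughput(raw, 0.19)          # 19% CSI acquisition overhead
```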
This paper proposed a novel learning-based resource allocation scheme for 5G CRAN systems, which allocates the system resources based only on the position information of the users present in the system. An overhead model was also presented, for both the position information and the CSI acquisition of the users, and its effect on the system performance was evaluated. Operating on positioning beacons alone, the proposed scheme avoids the CSI acquisition overhead while achieving close-to-optimal system performance. Overall, less than 15% loss in system goodput is observed when the proposed scheme is used for resource allocation, compared to the optimal CSI-based resource allocation scheme. At the same time, the proposed scheme has an overhead of only 2.4% for the presented simulation scenario, compared to an overhead of about 19% for the CSI-based scheme, and thus performs better in terms of effective system throughput. The proposed scheme is also robust to realistic system changes, with a maximum performance loss of about 30% observed when the reported user position information has the highest considered inaccuracy variance. The proposed resource allocation scheme is fairly robust to changes in the propagation environment as well; a maximum performance loss of 5% is observed when the system parameters affecting the scattering and shadowing phenomena differ between the training and test datasets used for the machine learning unit. The performance loss under inaccurate position information can be reduced by using restricted combinations of transmit beams and receive filters (for a given user position) while training the machine learning unit of the proposed scheme, which is part of the related future work.
Furthermore, the performance of the proposed scheme can be evaluated when inter-user interference is present in addition to the cross-channel interference, for different transmit power settings, or when a LOS link is not ensured at all times between the RRHs and the users in the 5G CRAN system.
-  E. Alpaydin. Introduction to Machine Learning. MIT press, 2014.
-  A. Argyriou, D. Kosmanos, and L. Tassiulas. Joint Time-Domain Resource Partitioning, Rate Allocation, and Video Quality Adaptation in Heterogeneous Cellular Networks. IEEE Transactions on Multimedia, 17(5):736–745, May 2015.
-  E. S. Bae, J. S. Kim, and M. Y. Chung. Radio Resource Management for Multi-Beam Operating Mobile Communications. In the 20th Asia-Pacific Conference on Communication (APCC2014), pages 184–188, October 2014.
-  F. Boccardi, R. W. Heath, A. Lozano, T. L. Marzetta, and P. Popovski. Five Disruptive Technology Directions for 5G. IEEE Communications Magazine, 52(2):74–80, February 2014.
-  L. Breiman. Random Forests. Machine Learning, 45(1):5–32, 2001.
-  X. Chen, F. Meriaux, and S. Valentin. Predicting a User’s Next Cell with Supervised Learning Based on Channel States. In Signal Processing Advances in Wireless Communications (SPAWC), 2013 IEEE 14th Workshop on, pages 36–40, June 2013.
-  Oracle Data Mining Concepts, 11g Release 1 (11.1). Oracle Corporation, 2007.
-  L. Dai, B. Wang, Y. Yuan, S. Han, C. l. I, and Z. Wang. Non-Orthogonal Multiple Access for 5G: Solutions, Challenges, Opportunities, and Future Research Trends. IEEE Communications Magazine, 53(9):74–81, September 2015.
-  D. Damopoulos, S. A. Menesidou, G. Kambourakis, M. Papadaki, N. Clarke, and S. Gritzalis. Evaluation of Anomaly-Based IDS for Mobile Devices Using Machine Learning Classifiers. Security and Communication Networks, 5(1):3–14, 2012.
-  D. Gesbert, S. Hanly, H. Huang, S. S. Shitz, O. Simeone, and W. Yu. Multi-Cell MIMO Cooperative Networks: A New Look at Interference. IEEE Journal on Selected Areas in Communications, 28(9):1380–1408, December 2010.
-  L. Goratti, S. Savazzi, A. Parichehreh, and U. Spagnolini. Distributed Load Balancing for Future 5G Systems On-Board High-Speed Trains. In 5G for Ubiquitous Connectivity (5GU), 2014 1st International Conference on, pages 140–145, November 2014.
-  M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, and I. H. Witten. The WEKA Data Mining Software: An Update. ACM SIGKDD explorations newsletter, 11(1):10–18, 2009.
-  S. Imtiaz, G. S. Dahman, F. Rusek, and F. Tufvesson. On the Directional Reciprocity of Uplink and Downlink Channels in Frequency Division Duplex Systems. In 2014 IEEE 25th Annual International Symposium on Personal, Indoor, and Mobile Radio Communication (PIMRC), pages 172–176, September 2014.
-  T. Jamsa et al. Deliverable D1.2 Initial Channel Models Based on Measurements. METIS project Deliverable, 2014.
-  G. N. Kamga, M. Xia, and S. Aïssa. Spectral-Efficiency Analysis of Massive MIMO Systems in Centralized and Distributed Schemes. IEEE Transactions on Communications, 64(5):1930–1941, May 2016.
-  M. K. Karakayali, G. J. Foschini, R. A. Valenzuela, and R. D. Yates. On the Maximum Common Rate Achievable in a Coordinated Network. In 2006 IEEE International Conference on Communications, volume 9, pages 4333–4338, June 2006.
-  P. Kela et al. Location Based Beamforming in 5G Ultra-Dense Networks. In Proc. Vehicular Technology Conference (VTC Fall), 2016 IEEE 84th, September 2016. accepted for publication.
-  P. Kela, X. Gelabert, J. Turkka, M. Costa, K. Heiska, K. Leppänen, and C. Qvarfordt. Supporting Mobility in 5G: A Comparison Between Massive MIMO and Continuous Ultra Dense Networks. In 2016 IEEE International Conference on Communications (ICC), pages 1–6, May 2016.
-  P. Kela, J. Turkka, and M. Costa. Borderless Mobility in 5G Outdoor Ultra-Dense Networks. Access, IEEE, 3:1462–1476, 2015.
-  G. Kunz, O. Landsiedel, S. Götz, K. Wehrle, J. Gross, and F. Naghibi. Expanding the Event Horizon in Parallelized Network Simulations. In Modeling, Analysis & Simulation of Computer and Telecommunication Systems (MASCOTS), 2010 IEEE International Symposium on, pages 172–181. IEEE, 2010.
-  J. P. Leite, P. H. P. de Carvalho, and R. D. Vieira. A Flexible Framework Based on Reinforcement Learning for Adaptive Modulation and Coding in OFDM Wireless Systems. In 2012 IEEE Wireless Communications and Networking Conference (WCNC), pages 809–814, April 2012.
-  Q. Li, H. Niu, A. Papathanassiou, and G. Wu. 5G Network Capacity: Key Elements and Technologies. Vehicular Technology Magazine, IEEE, 9(1):71–78, March 2014.
-  A. F. Molisch. Wireless communications. John Wiley & Sons, 2007.
-  O. Punal, H. Zhang, and J. Gross. RFRA: Random Forests Rate Adaptation for Vehicular Networks. In Proc. of the 13th IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM 2013), June 2013.
-  Z. Omary and F. Mtenzi. Machine Learning Approach to Identifying the Dataset Threshold for the Performance Estimators in Supervised Learning. International Journal for Infonomics (IJI), 3:314–325, 2010.
-  M. M. U. Rahman, H. Ghauch, S. Imtiaz, and J. Gross. RRH Clustering and Transmit Precoding for Interference-Limited 5G CRAN Downlink. In 2015 IEEE Globecom Workshops (GC Wkshps), pages 1–7, December 2015.
-  A. Rico-Alvariño and R. W. Heath. Learning-Based Adaptive Transmission for Limited Feedback Multiuser MIMO-OFDM. IEEE Transactions on Wireless Communications, 13(7):3806–3820, July 2014.
-  J. Rohwer, C. Abdallah, and C. Christodoulou. Machine Learning Based CDMA Power Control. In Conference Record of the Thirty-Seventh Asilomar Conference on Signals, Systems and Computers, volume 1, pages 207–211, November 2003.
-  S. Rostami, K. Arshad, and P. Rapajic. A Joint Resource Allocation and Link Adaptation Algorithm with Carrier Aggregation for 5G LTE-Advanced Network. In Telecommunications (ICT), 2015 22nd International Conference on, pages 102–106, April 2015.
-  M. Sanjabi, M. Hong, M. Razaviyayn, and Z. Q. Luo. Joint Base Station Clustering and Beamformer Design for Partial Coordinated Transmission Using Statistical Channel State Information. In 2014 IEEE 15th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), pages 359–363, June 2014.
-  J.-C. Shen, J. Zhang, K.-C. Chen, and K. B. Letaief. High-Dimensional CSI Acquisition in Massive MIMO: Sparsity-Inspired Approaches. arXiv preprint arXiv:1505.00426, 2015.
-  V. Venkatasubramanian, M. Hesse, P. Marsch, and M. Maternia. On the Performance Gain of Flexible UL/DL TDD with Centralized and Decentralized Resource Allocation in Dense 5G Deployments. In 2014 IEEE 25th Annual International Symposium on Personal, Indoor, and Mobile Radio Communication (PIMRC), pages 1840–1845, September 2014.
-  G. Wang and M. Lei. Enabling Downlink Coordinated Multi-Point Transmission in TDD Heterogeneous Network. In Vehicular Technology Conference (VTC Spring), 2013 IEEE 77th, pages 1–5, June 2013.
-  R. Wang, H. Hu, and X. Yang. Potentials and Challenges of C-RAN Supporting Multi-RATs Toward 5G Mobile Networks. IEEE Access, 2:1187–1195, 2014.
-  J. Werner, M. Costa, A. Hakkarainen, K. Leppanen, and M. Valkama. Joint User Node Positioning and Clock Offset Estimation in 5G Ultra-Dense Networks. In 2015 IEEE Global Communications Conference (GLOBECOM), pages 1–7, December 2015.