From 4G to 5G: Self-organized Network Management meets Machine Learning


Jessica Moysen, Department of Signal Theory and Communications
Universitat Politècnica de Catalunya-UPC
Email: jessica.moysen@tsc.upc.edu
Lorenza Giupponi Communications Network Division
Centre Tecnològic de Telecomunicacions de Catalunya-CTTC
Email: lorenza.giupponi@cttc.es
Acknowledgment: The research leading to these results has received funding from the Spanish Ministry of Economy and Competitiveness under grant TEC2014-60491-R (Project 5GNORM). This work also was supported by the Spanish National Science Council and ERFD funds under Project TEC2014-60258-C2-2-R.
Abstract

In this paper, we provide an analysis of self-organized network management, with an end-to-end perspective of the network. Self-organization as applied to cellular networks is usually referred to as Self-organizing Networks (SON), and it is a key driver for improving Operations, Administration, and Maintenance (OAM) activities. SON aims at reducing the cost of installation and management of 4G and future 5G networks, by simplifying operational tasks through the capability to configure, optimize and heal itself. To satisfy 5G network management requirements, this autonomous management vision has to be extended to the end-to-end network. In the literature, and also in some products available on the market, Machine Learning (ML) has been identified as the key tool to implement autonomous adaptability and to take advantage of experience when making decisions. In this paper, we survey how network management can significantly benefit from ML solutions. We review and provide the basic concepts and taxonomy for SON, network management and ML. We analyse the state of the art available in the literature, in standardization, and on the market. We pay special attention to the 3rd Generation Partnership Project (3GPP) evolution in the area of network management and to the data that can be extracted from 3GPP networks, in order to gain knowledge and experience of how the network is working, and to improve network performance in a proactive way. Finally, we go through the main challenges associated with this line of research, both in 4G and in the 5G networks currently being designed, while identifying new directions for research.

Network Management, Machine Learning, Self-Organizing Networks, Mobile Networks, Big Data
AAS
Active Antenna Systems
AC
Actor Critic
AI
Artificial Intelligence
ABS
Almost Blank Subframes
AIP
Administrative Incentive Pricing
ANA
Autonomic Network Architecture
ANR
Automatic Neighbour Relation
ANN
Artificial Neural Network
AP
Access Point
API
Application Programming Interface
BeFemto
Broadband Evolved FEMTO Networks
BER
Bit Error Rate
BLER
Block Error Rate
BIONETS
Genetically inspired networks
2TBN
two-stage temporal Bayesian network
BS
Base Station
CESC
Cloud-Enabled Small Cell
COGEU
Cognitive radio systems for efficient sharing of TV white spaces in EUropean context
CAPEX
Capital Expenditure
CASCADAS
Component-ware for Autonomic, Situation-aware Communications, and Dynamically Adaptable Services
CATNETS
Evaluation of the Catallaxy Paradigm for Decentralized Operation of Dynamic Application Networks
CET
Changes Electrical Tilt
CG
Coordination Game
C-SON
Centralized SON
CIO
Cell Individual Offset
CCO
Coverage and Capacity Optimization
COR
Cell Outage Recovery
CDF
Cumulative Distribution Function
CDR
Charging Data Records
COC
Cell Outage Compensation
COD
Cell Outage Detection
COM
Cell Outage Management
CoMP
Coordinated Multi Points
COOPCOM
Comunicaciones Cooperativas y Oportunistas en Redes Inalámbricas
COGNET
Cognitive networks
CRF
Conditional Random Field
CRS
Cognitive Radio System
CQI
Channel Quality Indicator
CTTC
Centre Tecnològic de Telecomunicacions de Catalunya
DA
Discriminant Analysis
D-SON
Distributed SON
DCI
Downlink Control Information
DBM
Deep Boltzmann Machine
DBN
Deep Belief Network
DNN
Deep Neural Network
DL
Downlink
DP
Dynamic Programming
DSA
Dynamic Spectrum Access
DT
Decision Trees
E3
End-to-End Efficiency
eNB
Enhanced Node Base station
eNBs
Enhanced Node Base stations
EIRP
Equivalent Isotropically Radiated Power
EPC
Evolved Packet Core
ETRI
Electronics and Telecommunications Research Institute
ES
Energy Saving
E-UTRAN
Evolved Universal Terrestrial Radio Access Network
ETSI
European Telecommunications Standards Institute
FFR
Fractional Frequency Reuse
FL
Fuzzy Logic
FE
Feature Extraction
FS
Feature Selection
GT
Game Theory
Gandalf
Monitoring and Self-tuning of RRM parameters in a Multi-System Network
GLM
Generalized Linear Models
GPI
Generalized Policy Iteration
GPRS
General Packet Radio Service
GSM
Global System for Mobile Communications
3GPP
3rd Generation Partnership Project
5GPPP
5G Infrastructure Public Private Partnership
HAGGLE
An Innovative Paradigm for Autonomic Opportunistic Communication
HeNB
Home eNodeB
Het-Net
Heterogenous Network
HMM
Hidden Markov Model
HO
Handover
HII
High Interference Indicator
IRP
Integration Reference Point
IS
Information Service
IEEE
Institute of Electrical and Electronics Engineers
ICIC
Inter-Cell Interference Coordination
IMS
IP Multimedia Subsystem
IoT
Internet of Things
IRAT
Inter-Radio Access Technology
k-NN
k-Nearest Neighbors
KPI
Key Performance Indicator
LENA
LTE-EPC Network Simulator
LTE
Long Term Evolution
LTE-Advanced
Long Term Evolution Advanced
LB
Load Balancing
LTE-U
LTE-Unlicensed
LAA
Licensed Assisted Access
MAC
Media Access Control
MDT
Minimization of Drive Tests
M2M
Machine to Machine
MIMO
Multiple-input Multiple-output
MC
Monte Carlo
MCS
Modulation and Coding Scheme
MDP
Markov Decision Process
MONOTAS
Mobile Network Optimisation Through Advanced Simulation
MCC
Mobile Cloud Computing
MLB
Mobility Load Balancing
ML
Machine Learning
ML
Maximum Likelihood
MRO
Mobility Robustness/Handover Optimisation
MWC
Mobile World Congress
NB
Naive Bayes
NBs
Node Base stations
NE
Network Element
NET-REFOUND
Network research foundations
NGMN
Next Generation Mobile Networks
NMS
Network Management Systems
NM
Network Management
NRM
Network Resource Model
NFV
Network Functions Virtualisation
OAM
Operations, Administration, and Maintenance
OFDMA
Orthogonal Frequency Division Multiple Access
OI
Overload Indicator
OMC
Operation and Maintenance Center
OSS
Operation and Support System
OPEX
Operational Expenditure
OMC
Operation and Maintenance Center
PCI
Physical Cell ID
PC
Principal Component
Pc
Power control
PCA
Principal component analysis
PCI
Automated Configuration of Physical Cell Identity
PItoRC
Policy Iteration to Resource Conflicts
PDF
Probability Density Function
PDSCH
Physical Downlink Shared Channel
PDU
Protocol Data Unit
PUSCH
Physical Uplink Shared Channel
PDCCH
Physical Downlink Control Channel
PGW
PDN Gateway
SGW
Serving Gateway
PF
Proportional Fair Scheduler
PHR
Power Headroom Report
PM
Performance Management
PRB
Physical Resource Block
PS
Packet Switched
PSD
Power Spectral Density
PU
Primary User
QoS
Quality of Service
QoE
Quality of Experience
QAM
Quadrature Amplitude Modulation
RBF
Radial basis function
RBM
Restricted Boltzmann Machine
RACH
Random Access Channel
RAN
Radio Access Network
RAT
Radio Access Technologies
RET
Remote Electrical Tilt
RF
Random Forest
RB
Resource Block
RBG
Resource Block Group
REM
Radio Environment Map
RL
Reinforcement Learning
RLF
Radio Link Failure
RNTP
Relative Narrowband Transmit Power
RLC
Radio Link Control
RRC
Radio Resource Control
RRM
Radio Resource Management
RS
Reference Signal
RSRP
Reference Symbol Received Power
RSRQ
Reference Symbol Received Quality
SAC
Situated Autonomic Communications
SDSE
Strongly Dominant Strategy Equilibrium
SELFNET
Framework for Self-Organized Network Management in virtualized and Software Defined Networks
SEMAFOUR
Self-Management for Unified Heterogeneous Radio Access Networks
SESAME
Small cell coordination for multi-tenancy and edge services
SG
Stochastic Games
SGSN
Serving GPRS Support Node
SH
Self Healing
SINR
Signal to Interference and Noise Ratio
SLA
Service Level Agreement
SM
Saturation Mode
SML
Stochastic Maximum Likelihood
SPCA
Sparse Principal Component Analysis
SVMs
Support Vector Machines
SVR
Support Vector Regression
SDN
Software Defined Network
SL
Supervised Learning
SO
Self organization
SOCRATES
Self-Optimisation and self-ConfiguRATion in wirelEss networkS
SOFOCLES
Self-organized FemtOCeLls for broadband sErviceS
SOM
Self-organizing Map
SON
Self-organizing Network
SOS
Self organized System
SS
Solution Sets
SIMO
Single-input Multiple-output
subMDP
Markov decision sub-process
TA
Timing Advance
TCE
Trace Collection Entity
TCP
Transmission Control Protocol
TD
Temporal Difference
TTT
Time to Trigger
TTI
Transmission Time Interval
TBS
Transport Block Size
TXP
Transmission Power
UDN
Ultra-Dense Network
UE
User Equipment
UEs
User Equipments
UMTS
Universal Mobile Telecommunications System
UL
Uplink
UL
Unsupervised Learning
UTRAN
Universal Terrestrial Radio Access Network
WLAN
Wireless Local Area Network
WCQI
wideband CQI

I Introduction

Traditionally, and up to 4G, the evolution from one generation of mobile networks to another has been driven by hardware technology advancements. The revolution of 5G is different: novel advancements in software technology will be critical, especially in the way the network will be managed. With the advent of these software advancements, and with unprecedented levels of computational capacity, the vision of autonomous network management can be put into practice, taking advantage also of cross-disciplinary advances in the area of Machine Learning. This vision aligns with the concepts of self-awareness, self-configuration, self-optimization, and self-healing, which have already been defined in the area of network management. We give special emphasis to the access segment of the 4G cellular Long Term Evolution (LTE) network through the concept of Self-organizing Network (SON). SON is a common term used to refer to mobile network automation and to the minimization of human intervention in cellular/wireless network management. This concept was introduced by 3GPP in Release 8 and has been expanded across subsequent releases. 3GPP work is inspired by a set of requirements defined by the operators' alliance Next Generation Mobile Networks (NGMN). The main objectives of SON can be roughly grouped into three points: 1) to bring intelligence and autonomous adaptability into cellular networks, 2) to reduce capital and operational expenditures (CAPEX/OPEX), and 3) to enhance network performance in terms of network capacity, coverage, offered service/experience, etc. SON is considered today as a driving technology that aims at improving spectral efficiency, simplifying management, and reducing the operation costs of next generation Radio Access Networks. The overall complex SON problem has been decomposed into a set of useful use cases, which have been identified by 3GPP, NGMN, the 5G Infrastructure Public Private Partnership (5GPPP) and different EU projects [Gandalf, SOCRATES, SEMAFOUR, SESAME, SELFNET, COGNET]. The academic literature has dedicated significant effort to SON algorithms in the context of the above mentioned use cases, providing smart solutions to optimize network operator performance, expenses and users' experience. Many of these works have already been reviewed in [survey]. The market also offers complete sets of SON solutions (e.g. [sistelbanda, qualcom, huawei, airhop], among others), and many products have been advertised and presented at Mobile World Congress (MWC) 2016 and 2017 [cellwize, aviat]. For example, AirHop's eSON from AirHop Communications [airhop], a multi-vendor, multi-technology, real-time SON solution based on a scalable and virtualized software platform, has recently received the 2016 Small Cell Forum Heterogenous Network (Het-Net) management software and service award [smallcellForum].

However, to the best of our knowledge, the SON solutions available on the market suffer from several limitations: 1) they are mainly based on heuristics; 2) the automated information processing is usually limited to low complexity solutions, such as triggering; 3) many operations are still done manually (e.g. network faults are usually fixed directly by engineers); 4) SON solutions do not really capitalize on the huge amount of information that is available in mobile networks to build next generation network management solutions; and 5) several open challenges remain unsolved, such as the coordination of SON functions [3GPPwork, challengesSON], or the proper solution of the trade-off between centralized and distributed SON implementations [nsn]. In addition, this self-organized network management vision should be extended beyond the RAN segment and should include all the segments of the network, from the access to the core, while fulfilling the requirements of different kinds of vertical service instances. To achieve this vision, the networking world is exploring new directions. Network Functions Virtualisation (NFV) is expected to bring the economy of scale of the Information Technology industry to the Telecom industry. When combining NFV with Software Defined Network (SDN) principles, the benefits of programmability and flexibility are brought to the fore.

Another aspect that should be considered is that, as we observed in [bDemp], a huge amount of data is already generated in 4G networks during normal operations by control and management functions, and more is expected to come in 5G networks due to the densification process [densification], heterogeneity in layers and technologies, the additional control and management complexity in NFV and SDN architectures, the advent of Machine to Machine (M2M) and Internet of Things (IoT) paradigms, the increasing variety of applications and services, each with distinct traffic patterns and Quality of Service (QoS)/Quality of Experience (QoE) requirements, etc. 5G network management is expected to provide a whole new set of challenges due to: 1) the need to manage future network complexity, due to ultra dense deployments and heterogeneous nodes, networks, applications and RANs coexisting in the same setting; 2) the need to manage very dynamic networks, where operators may not have any control over the deployment of some nodes (e.g. femto-cells), energy saving policies are in place generating a fluctuating number of nodes, active antennas are a reality, etc.; 3) the need to support massively increased traffic volumes and numbers of users, and to improve energy efficiency; 4) the need to improve the experience of the users by enabling Gbps speeds and highly reduced latency; 5) the need to manage new virtualized architectures; and 6) the need to handle heterogeneous spectrum access privileges through the novel LTE-Unlicensed (LTE-U), Licensed Assisted Access (LAA) and MuLTEFire paradigms and the availability of both traditional sub-6 GHz bands and above-6 GHz mmWave bands. In this challenging context, we believe that the use of SON and of smart network management policies is crucial and inevitable for operators running multi-RAT, multi-vendor, multi-layer networks, where an overwhelming number of parameters need to be configured and optimized. The high level objective is to make the networks: 1) more self-aware, by exploiting the information already available in the network to gain experience in network management; and 2) more self-adaptive, by exploiting intelligent control decision procedures which allow the decision processes to be automated based on that experience.

We believe that Machine Learning (ML) can be effectively used to allow the network to learn from experience, while improving performance. In particular, big data analytics, through the analysis of data already generated by the network, can pursue self-awareness by driving network management from reactive to predictive. Big data analytics are currently receiving significant attention both in research and on the market, due to their capability of providing insightful information from the analysis of data already available to operators.

In this paper, we do not focus on these general uses of data analytics and ML, but only on their application to 4G/5G network management. Differently from other surveys on SON proposed before [survey, dressler] or from other surveys related to 5G network management [SDN-survey], we focus here on the study and analysis of the available literature on SON and network management considering ML as the tool to implement automation and self-organization, from a 5G perspective. We review and provide the basic concepts and taxonomy of traditional SON and 5G network management in Section II. We pay special attention to the evolution of 3GPP in the area, following its nomenclature, and referring to the specific use cases defined by the standard in these matters. We then provide, in Section III, guidelines to select the most appropriate ML algorithm and approach, based on the network management issue to address. In Section IV, we review the main sources of information relevant for knowledge-based network management, data that is actually already generated by the network, and we survey the literature on ML-based network management. We then highlight challenges for future work in Section V. Finally, Section VI concludes the survey.

II Self-Organizing Network (SON) and Network Management

SON is a key driver to maximize total performance in cellular networks. The main idea is to bring intelligence and autonomous adaptability into them by diminishing human involvement, while enhancing network performance in terms of network capacity, coverage and service quality. It aims at reducing the cost of installation and management by simplifying operational tasks through the capability to configure, optimize and heal itself.

The main motivation behind the increasing interest of operators, standardization bodies and projects in the introduction of SON is twofold. On the one hand, from the market perspective, the ever increasing demand for a diversity of offered services, and the need to reduce the time to market of innovative services, add to the pressure to remain competitive through cost reductions. On the other hand, from the technical perspective, the complexity and large scale of future radio access technologies impose significant operational challenges due to the multitude of tunable parameters and the intricate dependencies among them. In addition, the advent of heterogeneous networks is expected to tremendously increase the number of nodes in this new ecosystem, so classic manual and field trial design approaches are simply impractical.

Similarly, manual optimization processes or fault diagnosis and cure performed by experts are no longer efficient and need to be automated, as they lead to time-intensive experiments with limited operational scope, or to delayed, manual and poor handling of cell site failures. Key operational tasks, such as radio network planning and optimization, are largely separated nowadays, and this causes intrinsic shortcomings, such as the abstraction of access technologies for network planning purposes, or the consideration of performance indicators that are of limited relevance to the end user's service perception. These problems have been approached through SON by European projects such as SOCRATES [SOCRATES] and Gandalf [Gandalf]. FP7 and 5GPPP EU projects have also been dealing with SON, in particular FP7 SEMAFOUR [SEMAFOUR], which develops a unified self-management system to operate complex HetNets. Among 5GPPP projects, we highlight SESAME [SESAME], which proposes the Cloud-Enabled Small Cell (CESC) concept, i.e., a new multi-operator enabled small cell deploying Network Functions Virtualisation (NFV) and supporting powerful self-management inside the access network infrastructure. In terms of self-organized network management in SDN and NFV, the SELFNET project aims at enabling the use of ML to achieve real time autonomous 5G network management [SELFNET]. In particular, the project explores a smart integration of state-of-the-art technologies in SDN, NFV, SON, cloud computing and artificial intelligence. The COGNET project [COGNET] has similar objectives and aims at developing several operators' use cases by applying ML algorithms.

Fig. 1: Self-organizing networks

SON has been introduced by 3GPP as a key component of the LTE network starting from the first release of this technology, Release 8, and expanding into subsequent releases. In SOCRATES [SOCRATES] and in 3GPP [3GPPUSES], meaningful SON use cases have been defined, which can be classified according to the phases of the life cycle of a cellular system (planning, deployment, maintenance and optimization) into self-configuration, self-healing and self-optimization, as depicted in Figure 1. In this section, we first give an overview of the evolution of SON in 3GPP. We go through self-configuration, self-optimization and self-healing functionalities, introducing the use cases that have been defined for each of them. We discuss the self-coordination problem, i.e., how to handle the potential conflicts that may arise from the parallel execution of multiple SON functions. We present the Minimization of Drive Tests (MDT) functionality. Finally, we focus on an end-to-end vision by extending SON principles to the core, and we discuss the role of virtualized and software defined networks in the context of 5G Network Management. Notice that here we do not focus on the academic literature, as it has already been reviewed in other interesting works [survey]. We focus on the taxonomy defined by 3GPP and on the related roadmap, and we pay attention to market penetration.

Release | WI | Feature | TS or TR
Rel.8 | SA5: SON concepts and requirements | SON concepts and requirements | [TS32.500]
Rel.8 | SA5: Self establishment of eNBs | Self configuration | [TS32.501, TS32.502, TS32.503, TS32.531, TS32.532, TS32.533]
Rel.8 | SA5: SON Automatic Neighbour Relation (ANR) list management | ANR, PCI | [TS32.761, TS32.762, TS32.763, TS32.765]
Rel.9 | SA5: Study of SON related OAM interfaces for HeNBs | SON related OAM interfaces for HeNBs | [TS32.821]
Rel.9 | SA5: Study of self-healing of SON | Self-healing management | [TR32.823]
Rel.9 | SA5: SON OAM aspects: automatic radio network configuration data preparation | Automatic radio network configuration data preparation | [TS32.501, TS32.502, TS32.503]
Rel.9 | SA5: SON OAM aspects: self-organization management | Self-optimization (MRO, MLB, ICIC) | [TS32.425]
Rel.9 | RAN3: Self-organizing networks | CCO, MRO, MLB, RACH opt. | [TS25.413, TS36.300, TS36.413, TS36.423]
Rel.10 | SA5: SON self-optimization management continuation | Self-coordination, self-optimization (MRO, MLB, ICIC, RACH opt.) | [TS32.425, TS32.522, TS32.526, TS32.762, TS32.766]
Rel.10 | SA5: Self-healing management | CCO, COC | [TS32.541]
Rel.10 | SA5: OAM aspects of ES in radio networks | ES | [TR32.826, TR32.834, TS32.551, TS32.425, TS32.762, TS32.763, TS32.765]
Rel.10 | RAN2-3: LTE SON enhancements | CCO, ES, MLB, MRO enhancements | [TS36.300, TS36.331, TS36.413, TS36.423]
Rel.11 | SA5: UTRAN SON management | SON management | [TS32.405, TS32.500, TR32.511, TS32.521, TS32.522, TS32.526, TS32.642]
Rel.11 | SA5: LTE SON coordination management | SON coordination [S5-122330] | [TS32.425, TS32.500, TS32.521, TS32.522, TS32.526, TS32.762]
Rel.11 | SA5: Inter-RAT ES management | OAM aspects of ES management | [TS32.405, TS32.551, TS32.522, TS32.526, TS32.642, TS32.646]
Rel.11 | RAN3: Further SON enhancements | MRO, MDT enhancements | [TS25.331, TS25.401, TS25.410, TS25.413, TS25.423, TS36.300, TS36.331, TS36.413, TS36.423]
Rel.12 | SA5: Enhanced NM centralized CCO | Enhanced NM centralized CCO | [TS32.836, TS32.425, TS32.103, TS28.627, TS28.628, TS28.658, TS28.659]
Rel.12 | SA5: Multi-vendor plug and play eNB connection to the network | Multi-vendor plug and play eNB connection to the network | [TS32.501, TS32.508, TS32.509]
Rel.12 | SA5: Enhancements on OAM aspects of distributed MLB | OAM aspects of distributed MLB | [TR32.838]
Rel.12 | SA5: Energy efficiency related performance measurements | Energy efficiency related performance measurements | [TS32.425]
Rel.12 | SA5: Het-Nets management / OAM aspects of network sharing | Het-Nets / network sharing | [TR32.835, TR32.851]
Rel.12 | RAN2-3: Next generation SON for UTRAN/EUTRAN | SON per UE type, active antennas, small cells | [TR37.822]
Rel.12 | RAN2-3: ES enhancements for EUTRAN | ES | [TR36.887]
Rel.13 | RAN2-3: Enhanced Network Management centralized CCO | CCO | [TS28.627]
Rel.13 | SA5: Study on enhancements of OAM aspects of distributed MLB SON function | MLB | [TR32.860]
Rel.14 | RAN: OAM (SON for Active Antenna Systems (AAS)-based deployments) | Energy efficiency | [rel14, ran14]

TABLE I: Evolution of SON in 3GPP

II-A SON evolution in 3GPP

3GPP Release 8 started defining LTE and already set the basis for SON concepts and requirements, and for SON functionalities regarding self-configuration, initial equipment installation and integration. The ANR functionality was introduced here to reduce the manual work needed to configure the neighbour lists of newly deployed eNBs. Concepts of self-optimization were defined in the context of Release 9, including optimisation of coverage, capacity, handover and interference. The functions introduced there (and detailed in the following sections) are Mobility Load Balancing (MLB), Mobility Robustness/Handover Optimisation (MRO), Inter-Cell Interference Coordination (ICIC) and Random Access Channel (RACH) optimization. Release 10 focuses on enhancements to already defined SON functions, to enhance interoperability between small cells and macro-cells, and includes NGMN's recommendations, i.e., new functionalities such as Coverage and Capacity Optimization (CCO) and enhanced ICIC; it also defines all the concepts related to self-healing, namely the Cell Outage Detection (COD) and Cell Outage Compensation (COC) functions. Finally, concepts of MDT and Energy Saving (ES) are also introduced and then enhanced in Release 11. Release 11 SON functions are related to the automated management of heterogeneous networks, including mobility robustness optimization enhancements and inter-radio access technology Handover (HO) optimization. Release 12 introduces optimization and enhancements for small cells, including deployments in dense areas. In Release 13, novel concepts of unlicensed LTE have been introduced. Besides that, Release 13 studied the enhancements of OAM with respect to centralized and distributed architectures; in particular, it focuses on distributed MLB, as well as on enhanced NM centralized CCO. Finally, Release 14 focuses on meeting the 5G requirements in terms of latency reduction, use of unlicensed spectrum in a fair manner, support for carrier aggregation, energy efficiency at the OAM level, SON for active antennas, etc. Table I summarizes the evolution of SON in 3GPP. Other documents of interest also include the protocol-neutral SON policy Network Resource Model (NRM) Integration Reference Point (IRP), with the Information Service (IS) [TS32.522, TS32.762] and Solution Sets (SS) [TS32.526, TS32.766].

Fig. 2: SON implementations.

II-B Self-Configuration

Self-configuration is the process of bringing a new network element into service with minimal human operator intervention [TS32.501]. This covers the cellular system life cycle phases related to planning and deployment. Self-configuring algorithms take care of all configuration aspects of the Enhanced Node Base station (eNB). When the eNB is powered on, it detects the transport link and establishes a connection with the core network elements. After this, the eNB is ready to establish the OAM, S1 and X2 links, and finally sets itself in operational mode. After the eNB is configured, it performs a self-test and delivers a status report to the network management node. Since Release 8, the ANR and Automated Configuration of Physical Cell Identity (PCI) use cases have been considered [TR30.818, anrpci]. The ANR function resides in the eNB and manages the conceptual Neighbour Relation Table (NRT). Located within ANR, the Neighbour Detection Function finds new neighbours and adds them to the NRT. ANR also contains the Neighbour Removal Function, which removes outdated neighbour relations. The Neighbour Detection Function and the Neighbour Removal Function are implementation specific [parodi]. The PCI is a physical layer signature used to distinguish signals from different eNBs, and it is based on synchronization signals. The total number of PCIs in LTE is 504, so reuse is inevitable, especially in dense deployments. Automatic PCI assignment aims at an automatic, conflict- and confusion-free identification of cells [TR36.902]. Recommended practices for both use cases can be found in [ngmn14].
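To make the conflict- and confusion-free requirement more concrete, automatic PCI assignment can be viewed as a constrained assignment (graph colouring) problem over the neighbour relations: a cell should not reuse the PCI of a direct neighbour (conflict), nor should two neighbours of the same cell share a PCI (confusion). The following is a minimal, purely illustrative Python sketch of a greedy assignment under these two constraints; it is not a 3GPP-specified algorithm, and the topology and cell identifiers are invented.

```python
# Minimal illustrative sketch (not a 3GPP algorithm): greedy PCI assignment
# that avoids collision (same PCI as a direct neighbour) and confusion
# (two neighbours of the same cell sharing a PCI), using the 504 LTE PCIs.
NUM_PCIS = 504

def assign_pcis(neighbours):
    """neighbours: dict mapping cell id -> set of neighbouring cell ids."""
    pci = {}
    for cell in neighbours:
        forbidden = set()
        for n in neighbours[cell]:
            if n in pci:
                forbidden.add(pci[n])            # collision constraint
            for nn in neighbours.get(n, ()):     # neighbour's neighbours
                if nn != cell and nn in pci:
                    forbidden.add(pci[nn])       # confusion constraint
        pci[cell] = next(p for p in range(NUM_PCIS) if p not in forbidden)
    return pci

# Toy topology: three mutually neighbouring cells plus an isolated one.
print(assign_pcis({"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"}, "D": set()}))
```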

Fig. 3: High-level example of how the interactions of multiple SON functions may interfere.

II-C Self-Optimization

Self-optimization embraces the set of mechanisms which optimize the network parameters during operation, based on measurements received from the network. In the following we provide a brief overview of the main self-optimization functions that have been introduced across the recent releases [TR36.902]. From Release 9, we highlight work on:

  1. MLB. MLB is the SON function in charge of managing cell congestion through load transfer to other cells. The main objective is to improve the end-user experience and achieve higher system capacity by distributing user traffic across the system radio resources. The implementation of this function is generally distributed and supported by the load estimation and resource status exchange procedures. The messages containing useful information for this SON function (resource status request, response, failure and update) are transmitted over the X2 interface [TS36.423]. MLB can be implemented by tuning the Cell Individual Offset (CIO) parameter. The CIO contains the offsets of the serving and the neighbour cells that all UEs in the cell must apply in order to satisfy the A3 handover condition [TS36.213] (a sketch of this condition is given after this list).

  2. MRO. MRO is a SON function designed to guarantee proper mobility, i.e. proper handover in connected mode and proper cell re-selection in idle mode. Among the specific goals of this function are the minimization of call drops, the reduction of Radio Link Failures, the minimization of unnecessary handovers and ping-pongs due to poor handover parameter settings, and the minimization of idle mode problems. Its implementation is commonly distributed. The messages containing useful information are the S1AP handover request or X2AP handover request, the handover report, and the RLF indication/report. Release 11 focused on different improvements of handover optimization [4gngmn]. MRO operates over connected mode and idle mode parameters. In connected mode, it tunes meaningful handover trigger parameters, such as the event A3 offset (when referring to intra-RAT, intra-carrier handovers), the Time to Trigger (TTT), or the Layer 1 and Layer 3 filter coefficients. In idle mode, it tunes the offset values, such as the Qoffset for the intra-RAT, intra-carrier case.

  3. Inter-Cell Interference Coordination (ICIC). ICIC aims to minimize interference among cells using the same spectrum. It involves the coordination of physical resources between neighbouring cells to reduce the interference from one cell to another. ICIC can be applied in both uplink and downlink for the data channels, i.e. the Physical Downlink Shared Channel (PDSCH) and the Physical Uplink Shared Channel (PUSCH), or for the downlink control channel, the Physical Downlink Control Channel (PDCCH). ICIC can be static, semi-static or dynamic. Dynamic ICIC relies on frequent adjustments of parameters, supported by signalling among cells over the X2 interface. To support proactive coordination among cells, the High Interference Indicator (HII) and the Relative Narrowband Transmit Power (RNTP) indicators have been defined, while to support reactive coordination, the Overload Indicator (OI) has been introduced [TS36.423].

  4. RACH. RACH optimization aims at optimizing the random access channels in the cells based on UE feedback and on knowledge of the RACH configuration of neighbouring eNBs. RACH optimization can be done by adjusting the Power control (Pc) parameters or by changing the preamble format to reach the target access delay [36.300].
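As anticipated in the MLB item above, both MLB (through the cell individual offsets) and MRO (through the hysteresis, offset and time-to-trigger) act on the same measurement-report trigger. The sketch below shows the event A3 entering condition in the form given in TS 36.331; the function name and the default numerical values are illustrative only.

```python
# Minimal sketch (not 3GPP reference code): the event A3 entering condition
# from TS 36.331, which MLB manipulates via the cell individual offsets
# (Ocn/Ocp) and MRO via hysteresis, A3 offset and time-to-trigger.
def a3_entering_condition(mn_dbm, mp_dbm, ofn=0.0, ocn=0.0, ofp=0.0, ocp=0.0,
                          hys=2.0, off=3.0):
    """Return True if the neighbour cell measurement satisfies event A3.

    mn_dbm, mp_dbm : RSRP of neighbour and serving (primary) cell, in dBm
    ofn, ofp       : frequency-specific offsets
    ocn, ocp       : cell individual offsets (the knobs tuned by MLB)
    hys, off       : hysteresis and A3 offset (the knobs tuned by MRO)
    """
    return mn_dbm + ofn + ocn - hys > mp_dbm + ofp + ocp + off

# Example: a neighbour 6 dB stronger than the serving cell triggers event A3.
print(a3_entering_condition(mn_dbm=-90.0, mp_dbm=-96.0))
```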

In Release 10, 3GPP defined new use cases.

  1. Coverage and Capacity Optimization (CCO) is a SON function that aims to design self-optimizing algorithms that achieve optimal trade-offs between coverage and capacity. Different mechanisms can be considered to dynamically improve coverage and capacity, such as ICIC, scheduling, and the combination of such mechanisms. The targets that can be optimized may be vendor dependent and include coverage, cell throughput, cell edge throughput, or a weighted combination of the above.

  2. ES aims at providing the quality of experience to end users with minimal impact on the environment. The objective is to optimize the energy consumption, by designing Network Elements with lower power consumption and temporarily shutting down unused capacity or nodes when not needed [TR32.826]. In particular, many works in literature have been focusing on switching ON/OFF eNBs or small cells, in an efficient way, in order to guarantee a target level of Quality of service/experience, while minimizing the dissipated energy.

Release 11 provides enhancements to MLB optimization, HO optimization, CCO, and ES. Release 12 has focused on a study on enhancements of OAM aspects for distributed MLB [TR32.860].

II-D Self-Healing

Self-healing [TS32.541] focuses on the maintenance phase of a cellular network. Wireless cellular systems are prone to faults and failures, and the most critical domain for fault management is the RAN. Every eNB is responsible for serving an area, with little or no redundancy. If an NE is not able to fulfil its responsibilities, the result is a period of degraded performance, during which users do not receive a proper service. This results in severe revenue loss for the operator. Self-healing was initially studied in Release 9 [TR32.823], but it is in Release 10 that the main work has been carried out and that features for detection and adjustment of parameters have been specified [S5-460036]. These specifications have been further updated in Release 11 [TS32.541]. The main defined use cases are the following.

  1. Self-recovery of NE Software. If the NE software fails, e.g. due to loading an earlier software version and/or configuration, the most important thing is to ensure that the NE runs normally again by removing the faulty software and restoring the configuration.

  2. Self-healing of Board Faults. This use case aims to solve hardware failures in the NE [son].

  3. Cell Outage Management. This use case is split into two main functions: 1) Cell Outage Detection, whose main objective is to detect a cell outage through the monitoring of performance indicators, which are compared against thresholds and profiles, and 2) Cell Outage Compensation, which aims at alleviating the outage caused by the loss of a cell from service [TS32.541]. It refers to the automatic mitigation of the degradation effect of the outage by appropriately adjusting suitable radio parameters, such as the pilot power and the antenna parameters of the surrounding cells.

II-E Self-Coordination

SON functionalities are often designed as stand-alone functionalities, implemented by means of control loops. When they are executed concurrently in the same or in different network elements, the impact of their interactions is not easy to predict, and unwanted effects may even occur among instances of the same SON function when implemented in neighbouring cells. The risk of unacceptable oscillations of configuration parameters or of undesirable performance results increases with the number of SON functions.

3GPP has proposed different architectures for SON implementation, ranging from Centralized SON (C-SON) to Distributed SON (D-SON). The choice of the architecture has a strong impact on the efficiency of the self-coordination framework. If C-SON is used, SON functions are implemented in the Operation and Maintenance Center (OMC) or in the Network Management Systems (NMS), as part of the Operation and Support System (OSS). This implementation benefits from global information about metrics and Key Performance Indicators (KPIs), as well as from the computational capacity to run powerful optimization algorithms involving multiple variables or cells. However, it suffers from long time scales. In order to avoid oscillations of decision parameters, 3GPP requires [S5-122330] that each SON function asks for permission before changing any configuration parameter. This means that a request must be sent from the SON function to the SON coordinator and a response has to be returned. In C-SON, all these requests must pass through the Interface-N, which is not suitable for real-time communication, so there is no possibility to give priority to SON coordination messages over other OAM messages. If, in turn, distributed coordination is used, the interaction between the SON function and the local SON coordinator takes place over internal vendor-specific interfaces, with much lower latency. This makes the D-SON architecture much more flexible and adequate for small cell networks, which experience very transitory traffic loads, thus requiring high reactivity to propagation and traffic conditions.

An example of this can be observed in Figure 3, where we provide an analysis of how the interactions among several SON functions implemented in a centralized or distributed manner can generate conflicts in the network. In particular, this figure focuses on the SON output parameter conflict, i.e., when two or more SON functions aim at optimizing the same output parameter with different action requests, and where at least three possible conflicts can arise: 1) the resource conflict between MRO and MLB; 2) the one between CCO and ICIC; and/or 3) the one between COC and ICIC. We can identify output parameters which are affected by two opposite decisions of two different functions, each trying to achieve its own targets. As a result, defining and implementing a self-coordination framework is considered a necessity [schmelz, SOCRATES], [altmanCoord].

Market implementations of C-SON are offered by vendors like Celcite (acquired by AMDOCS), Ingenia Telecom and Intucell (acquired by Cisco), while D-SON solutions have traditionally been more challenging to implement and vendor specific, not allowing for easy interaction of products from different vendors, so that a supervisory layer is commonly still needed to coordinate the different instances of D-SON across a much broader scope and scale. Only recently have vendors like Qualcomm or Airhop started proposing D-SON as a SON mainstream, as small cells and Het-Nets require the millisecond response times of D-SON.

II-F Minimization of Drive Tests

MDT enables operators to collect User Equipment (UE) measurements together with location information, when available, with the purpose of optimizing network management while reducing operational efforts and maintenance costs. This feature has been studied by 3GPP since Release 9 [TR36.805]; among the targets are the standardization of solutions for coverage optimization, mobility, capacity optimization, parametrization of common channels, and QoS verification [son]. Since operators are also interested in estimating QoS performance, in Release 11 the MDT functionality has been enhanced with QoS measurements, collecting indications of throughput and connectivity issues to properly dimension and plan the network [MDT3GPP]. These MDT functions have been further elaborated in Release 11, while Release 12 has included specific enhancements in terms of correlation of information, which can be found in the study on enhanced network management centralized CCO. The improvements and extensions of SON introduced up to Release 13 can be found in [TS32.500].

II-G Core Networks

Core network operations can also be managed through self-organizing functionalities. The benefits in this case also come from reduced human intervention and reduced operational costs. Self-organization in the core network allows traffic loads to be self-adapted and bottlenecks to be prevented. In addition, self-organization enables the core network to handle signalling more efficiently. In this regard, Nokia [nokiacore] already automates core network operations based on SON technology. The objective is to automatically and rapidly allocate core network resources to meet unpredictable behaviours and broadband demands. Notice that SON use cases for core networks are not limited to LTE networks; many of them can also be applied to other kinds of networks, such as 2G/3G.

II-H Virtualized and Software-Defined Networks

The wireless industry is currently working towards being prepared for a 1000x data traffic growth. It is unlikely, though, that users will want to pay more for the service than they are paying today, which sets a serious challenge for both mobile operators and vendors, i.e. how to improve the infrastructure 1000 times without increasing CAPEX and OPEX. Besides SON, another trend in this direction, initiated by an ETSI industrial study group in 2012, is NFV, which allows the economies of scale of the IT industry to be exploited by moving traditional network functions away from specialized hardware to general purpose computation, storage and memory pools, distributed throughout the network and in data centers. NFV virtualizes the functional elements of the network, instantiating the corresponding functions as programs that run on commercial off-the-shelf, less expensive hardware. This concept, combined with an SDN architecture, is introduced to make mobile network deployments more cost-effective [SDN-survey], [survey-NFV].

The main idea behind these novel architectures is to provide a framework capable of assisting network operators in solving management problems, such as cyber attacks, network failures, and optimization to improve network performance and the QoE of the users, among others. In this context, SON can be useful to achieve real time autonomous network management. In this novel softwarized vision, we can benefit from all the opportunities offered by the centralized, distributed and local implementations proposed for SON at the RAN level, and extend this view beyond the radio access border by proposing a SON over NFV architecture, where SON functions, aimed at tackling the main radio access and backhauling challenges of extremely dense deployments, are virtualized and run over general purpose hardware. The NFV infrastructure is to be managed by an orchestrator entity, as proposed in the ETSI architecture. Out of all the NFV architecture entities, this is the brain with the broadest view of the vertical service characteristics and of the resource availability in the network. Therefore, it coordinates the allocation of functions across the different segments of the dense, heterogeneous network. At the methodological level, the orchestrator can take advantage of the huge amount of information travelling through the network, in terms of measurements, signalling information, QoS and QoE indicators, etc., by means of machine learning based approaches.

In the market there already exist start-ups which advertise the concept of C-SON in the cloud. SON over NFV eliminates software and hardware dependencies, besides system scaling limitations, and offers cost reductions through automatic processes. Cellwize [cellwize] is one of them, promising a technology deployed in the cloud, capable of working seamlessly across different vendors, spectrum bands and technologies. This research line is extremely novel and not much work can be found so far. However, we highlight the work that is under development in the context of the SELFNET and COGNET projects [SELFNET], [COGNET].

III How to Address SON and NM through ML

In this section we classify, at a high level, the different classes of network management problems that one may need to deal with when aiming at managing the network in a self-organized manner. For each class of problem, we identify the machine learning tools that can be used. The objective of ML is to improve performance on a particular set of tasks by creating a model that helps find patterns through learning algorithms. The ML taxonomy is traditionally organized into: 1. Supervised Learning (SL), 2. Unsupervised Learning (UL), 3. Reinforcement Learning (RL). Recently, new trends in the area of ML are gaining momentum, thanks to the progress of software engineering, computational capabilities and memory availability. Deep learning has proven feasible and extremely effective in different applications, such as language, video and speech recognition, and object and audio detection, among others. The most exemplary case is the win of AlphaGo, beating the world champion at the Chinese board game Go. The victory of AlphaGo was due to the implementation of a deep reinforcement learning algorithm capable of self-learning.

Keeping in mind the SON and NM functions introduced in the previous section, the classes of problems that need to be addressed when managing the network autonomously are:

  • Variable estimation or classification: The tasks belonging to this class of problems aim, e.g., at estimating the QoS or the QoE of the network, or at predicting performance or behaviours of the network, by learning from the analysis of data obtained from past behaviours of the network. NM and SON functions where these tasks are useful are QoS estimation and other MDT use cases, the prediction of behaviours to optimize network parameters, etc. Solutions to these problems can be translated into finding the relationship between one variable and some others, or identifying which class, out of a set of pre-defined classes, the data belongs to. Solutions are then to be found in the SL literature, with both regression and classification tasks.

  • Diagnosis of network faults or misbehaviours: The tasks belonging to this class of problems aim at detecting issues ongoing in the network, which may be associated with faults and anomalous settings of network parameters. This kind of problem relates to self-healing issues, and solutions can be found in the UL literature, in particular in anomaly detection solutions.

  • Dimensionality reduction: The network continuously generates a huge amount of data. For appropriate processing and to extract useful information, it is convenient to eliminate the noise present in the database by reducing the dimensionality of the data. Solutions to this problem are to be found in the UL literature, and specifically among the dimensionality reduction solutions.

  • Pattern identification and grouping: The tasks belonging to this class aim at identifying patterns, or groups of nodes with similar characteristics, according to some kind of criterion. An objective may be to apply similar optimization approaches to them. Self-configuration use cases are an intuitive application of these techniques. Solutions to these problems can be translated into learning the set of classes the data belongs to. The UL literature offers solutions in the area of clustering.

  • Sequential decision problems for online parameter adjustment: This class of problems is extremely common in the area of autonomous management, where we face control decision problems to adjust network parameters online, with the objective of meeting certain performance targets. This kind of decision problem, where we learn the most appropriate decision online, based on the reaction of the environment to the actions the network is taking, can be addressed through RL solutions. All self-optimization use cases can be addressed through these solutions, as well as COC problems.

In the rest of the section, we relate each class of NM problems to the ML literature that can solve it. The review of ML literature provided in the following is far from exhaustive. Many methods and techniques are not described, because the purpose here is to provide a useful taxonomy to address NM and SON problems and to analyze and understand the related literature using ML solutions. For a deeper understanding of ML solutions, the reader is referred to more specific literature.

III-A Supervised Learning (SL)

This ML technique can be extremely useful when the NM function to address requires estimation, prediction or classification of variables. SL is an ML technique which takes training data (organized into an input vector $x$ and a desired output value $y$) to develop a predictive model, by inferring a function $f$ that returns the predicted output $\hat{y} = f(x)$. For that, the construction of a dataset is needed. The dataset contains training samples (rows) and features (columns), and is usually divided into two sets: the training set, used to train the model, and the test set, used to verify that the predictions are correct. The goal of the training model is to minimize the error between the predictions and the actual values. Hence, by applying ML, we aim to estimate how well a learning algorithm generalizes beyond the samples in the training set. The input space is represented by a $d$-dimensional input vector $x = (x_1, \dots, x_d)$. Each dimension is an input variable. In addition, a training set involves $m$ training samples $\{(x^{(i)}, y^{(i)})\}_{i=1}^{m}$. Each sample consists of an input vector $x^{(i)}$ and a corresponding output $y^{(i)}$. Hence $x_j^{(i)}$ is the value of input variable $j$ in training sample $i$, and the error is usually computed via a loss such as $(\hat{y}^{(i)} - y^{(i)})^2$. The SL technique has two main applications: classification and regression. On the one hand, classification is applied when $y$, the output value we try to predict, is discrete, e.g., we want to predict whether a cancer is benign or malignant, based on a dataset constructed from medical records and collecting many features, e.g. tumour size, age, uniformity of cell size, uniformity of cell shape. On the other hand, a regression problem is applied when $y$ is a real number.
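As an illustration of the workflow just described, the following sketch applies an off-the-shelf supervised regressor to a synthetic dataset; the idea of predicting a throughput-like KPI from radio measurements such as RSRP, RSRQ and CQI is only an example, and the data, feature names and model choice are assumptions, not a method proposed in the surveyed works.

```python
# Illustrative sketch only (synthetic data, hypothetical features): a
# supervised regression workflow as described above, e.g. predicting a
# per-user throughput KPI from radio measurements (RSRP, RSRQ, CQI).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
m = 1000                                   # number of samples
X = np.column_stack([
    rng.uniform(-120, -70, m),             # RSRP [dBm]
    rng.uniform(-20, -3, m),               # RSRQ [dB]
    rng.integers(1, 16, m),                # CQI index
])
y = 0.5 * (X[:, 0] + 120) + 2.0 * X[:, 2] + rng.normal(0, 2, m)  # synthetic KPI

# Split into training and test sets, as described in the text.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

model = SVR(kernel="rbf", C=10.0).fit(X_train, y_train)
y_hat = model.predict(X_test)
print("test MSE:", mean_squared_error(y_test, y_hat))
```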

A huge amount of SL algorithms for classification can be found in the literature, and a study to evaluate the performance of some of them can be found in [caruana]. In the following we briefly introduce the most common algorithms.

  1. k-Nearest Neighbors (k-NN) can be used for classification and regression. k-NN is a non-linear method where the input consists of the k closest training samples in the input space. The predicted output is the average of the values of its k nearest neighbours. A commonly used distance metric for continuous variables is the Euclidean distance. The k-NN method has the advantage of being easy to interpret, fast in training, and the amount of parameter tuning is minimal. However, the accuracy of the prediction is generally limited.

  2. Generalized Linear Models (GLM). The linear model describes a linear relationship between the output and one or more input variables, where the approximation function maps from $x$ to $y$ as follows:

    $f(x) = \beta_0 + \beta_1 x_1 + \dots + \beta_d x_d$,     (1)

    where $\beta = (\beta_0, \beta_1, \dots, \beta_d)$ are the unknown parameters. The idea is to choose $\beta$ so that $f$ minimizes the loss function (a minimal numerical illustration of this fit is given after this list). Typically, we make the assumption that the samples in each dataset are independent from each other, and that the training set and the test set are identically distributed. Note that if the relation is not linear, the model should be generalized, in an attempt to capture this relationship [glm].

  3. Naive Bayes (NB). This method is used for classification and is based on Bayes' theorem, i.e., calculating probabilities based on the prior probability. The main task is to classify new data points as they arrive. An NB classifier assumes that all attributes are conditionally independent, and it is recommended when the dimensionality of the input is high [nb]. Since NB assumes independent variables, it only requires a small amount of training data to estimate the means and variances of the variables.

  4. Support Vector Machines (SVMs) can be used for classification and regression. SVMs are inspired by statistical learning theory, which is a powerful tool for estimating multidimensional functions [statistical, smola]. This method can be formulated as a mathematical optimization problem, which can be solved by known techniques. For this problem, given training samples $\{(x^{(i)}, y^{(i)})\}_{i=1}^{m}$, the goal is to learn the parameters of a function which best fits the data. The method searches over separating hyperplanes and retains the hyperplane that maximizes the minimum distance from the sample points. The sample points that define the margin are called support vectors and establish the final model. This method in general shows high accuracy in the prediction, and it can also behave very well with non-linear problems when appropriate kernel methods are used. When we cannot find a good linear separator, kernel techniques are used to project data points into a higher dimensional space where they can become linearly separable. Hence the correct choice of kernel parameters is crucial for obtaining good results. In practice, this means that an exhaustive search must be conducted on the parameter space, thus complicating the task [andreas].

  5. Artificial Neural Network (ANN) is a statistical learning model inspired by the structure of the human brain, where interconnected nodes represent the neurons producing appropriate responses. ANNs support both classification and regression algorithms. The basic idea is to efficiently train and validate a neural network; the trained network is then used to make predictions on the test set. In this method the weights are the parameters in charge of manipulating the data in the calculations. Here, the interconnection pattern between the different layers of neurons, the learning process for updating the weights of the interconnections, and the activation function that converts a neuron's weighted input into its output activation are the most important elements to be trained [bishop]. ANN methods require parameters or distribution models derived from the dataset, and in general they are also susceptible to over-fitting.

  6. Decision Trees (DTs) are flow-chart models in which each internal node represents a test on an attribute, each leaf node represents a response, and each branch represents the outcome of a test [idt]. DTs can be used for classification and regression, and they have nuisance parameters, such as the desired depth and the number of leaves in the tree [dt]. They do not require any prior knowledge of the data, are robust (i.e., they do not suffer the curse of dimensionality, as they focus on the salient attributes) and work well on noisy data. However, as with many classifiers, DTs are dependent on the coverage of the training data, and they are also susceptible to over-fitting.

  7. Hidden Markov Models (HMMs) can be used for classification, as well as for other purposes. They can be used as a Bayesian classification framework, with a probabilistic model describing the data.
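As referenced in the GLM item above, the following is a minimal numerical illustration of fitting the linear model of Eq. (1) by ordinary least squares; the data and parameter values are invented for the example.

```python
# Illustrative sketch: fitting the linear model of Eq. (1) by ordinary least
# squares on synthetic data, i.e. beta = argmin ||y - X beta||^2.
import numpy as np

rng = np.random.default_rng(1)
m, d = 200, 3
X = rng.normal(size=(m, d))
true_beta = np.array([2.0, -1.0, 0.5])
y = 1.5 + X @ true_beta + rng.normal(scale=0.1, size=m)

Xb = np.hstack([np.ones((m, 1)), X])        # prepend a column of ones for beta_0
beta_hat, *_ = np.linalg.lstsq(Xb, y, rcond=None)
print("estimated parameters:", beta_hat)    # approx [1.5, 2.0, -1.0, 0.5]
```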

Methodologies have also been proposed to take the best out of the available data and to boost the prediction performance. Some of these methodologies are classified among the so-called ensemble methods. Ensemble methods combine the predictions of multiple learning algorithms to produce a final prediction. This technique has been investigated in a huge variety of works [em, em2]. A general method is sub-sampling the training examples, where the most useful techniques are referred to as bagging and boosting [em3]. Bagging manipulates the training examples to generate multiple hypotheses: it runs the learning algorithm several times, each time with a different subset of the training samples. On the other hand, AdaBoost maintains a set of weights over the original training set, and adjusts these weights by increasing the weight of samples that are misclassified and decreasing the weight of samples that are correctly classified [mlresearch, freundSchapire].
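The sketch below is a compact illustration of the bagging and boosting ideas described above, comparing a single decision tree with bagged and boosted ensembles on a synthetic classification task; the dataset and hyper-parameters are arbitrary choices for the example.

```python
# Illustrative sketch: bagging vs. AdaBoost over decision trees on a synthetic
# classification task, mirroring the ensemble methods described above.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
base = DecisionTreeClassifier(max_depth=3, random_state=0)

for name, clf in [("single tree", base),
                  ("bagging", BaggingClassifier(base, n_estimators=50, random_state=0)),
                  ("AdaBoost", AdaBoostClassifier(n_estimators=50, random_state=0))]:
    # 5-fold cross-validated accuracy for each approach.
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```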

III-B Unsupervised Learning (UL)

This kind of learning can be extremely useful when the NM function requires identifying anomalous behaviours, recognizing patterns or reducing the dimensionality of the data. UL is an ML technique which receives unlabelled input patterns with the objective of finding structure in them. In this case, we let the computer learn by itself, without providing the correct answer to the problem we want to solve. The goal is to construct representations of the inputs that can be used for predicting future inputs without giving the algorithm the right answer, as we do instead in the case of supervised learning [ul]. The three most important families of algorithms are clustering, dimensionality reduction and anomaly detection techniques. There are many examples of UL applications in our daily life, e.g., news.google.com, understanding genomics, organizing computer clusters, social network analysis, astronomical data analysis, market segmentation, etc. In the context of SON, UL algorithms are applied mainly to self-optimization and self-healing use cases.

  1. Clustering. This technique aims at identifying groups of data to build a representation of the input. The most common approaches are non-overlapping, hierarchical and overlapping clustering methods. K-means [kmeans] and Self-Organizing Maps (SOM) [som] belong to the non-overlapping clustering techniques. When the clusters at one level are joined as clusters at the next level (cluster-tree), the method is referred to in the literature as hierarchical clustering [hc]. When an observation can belong to more than one cluster simultaneously, the approach is known as overlapping or fuzzy clustering; Fuzzy C-means and Gaussian mixture models belong to this kind of technique [kmeans, fuzzyCmeans]. HMMs can also be used for clustering. These algorithms have been proposed in a wide range of fields, such as robotics, wireless systems and routing algorithms for mobile ad-hoc networks, among others. (A combined code sketch of clustering, dimensionality reduction and anomaly detection is given after this list.)

  2. Dimensionality Reduction. High-dimensional datasets present many challenges. One of the problems is that, in many cases, not all the measured variables are necessary to understand the problem of interest. The state of the art offers a large number of algorithms to build models with good performance from high-dimensional data; nevertheless, for many problems it is of interest to reduce the dimension of the original data. For example, in [jmoysenCAMAD, jmoysenHindawi], the authors face the problem of the huge number of potential features the system has as input, and they show that the regression analysis performs better in a reduced space. In this context, the most common methods are Feature Extraction (FE) and Feature Selection (FS) [dr]. Both seek to reduce the number of features in the dataset. FE methods do so by creating new combinations of features (e.g., Principal Component Analysis (PCA)), which project the data onto a lower-dimensional subspace by identifying correlated features in the data distribution. They retain the principal components with the greatest variance and discard the others, so as to preserve maximum information with minimal redundancy [pca]. Correlation-based FS methods include and exclude features present in the data without changing them. For example, Sparse Principal Component Analysis (SPCA) extends classic PCA by adding a sparsity constraint on the input features.

  3. Anomaly Detection. Anomaly detection identifies events that do not correspond to an expected pattern. By modeling the most common behaviours, the machine selects the set of unusual events [anomalyDet]. Self-healing is one of the main functionalities in which this kind of technique is applied; some examples are [banderaLG, munozLG]. The two most common techniques are:

    • Rule based systems: they are very similar to DTs, but more flexible, since new rules may be added without creating conflicts with the existing ones [anomalyDet].

    • Pruning techniques: they aim at identifying outliers, i.e., observations with errors in some combination of variables.

  4. Latent Variable Models. These techniques learn a model in which some unobserved variables help simplify and describe the data. An example is Non-negative Matrix Factorization.
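The following sketch puts the three main UL families side by side on a synthetic matrix of per-cell KPI vectors; the number of cells, the KPIs and the injected anomalies are invented for the example. It uses k-means for clustering, PCA for dimensionality reduction, and a local-outlier-factor detector (a k-NN based technique of the kind that also appears in the literature reviewed in Section IV) for anomaly detection.

```python
# Minimal sketch: clustering, dimensionality reduction and anomaly detection
# on a synthetic matrix of per-cell KPI vectors (200 cells x 10 KPIs).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
kpis = rng.normal(size=(200, 10))
kpis[:5] += 6.0                                  # a few cells with clearly anomalous statistics

# Clustering: group cells with similar KPI profiles (e.g. to share configuration policies).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(kpis)

# Dimensionality reduction: keep the principal components that explain most of the variance.
reduced = PCA(n_components=2).fit_transform(kpis)

# Anomaly detection: flag cells whose KPI vectors deviate from those of their neighbours.
flags = LocalOutlierFactor(n_neighbors=20).fit_predict(kpis)     # -1 marks an outlier

print("cluster sizes:", np.bincount(labels))
print("reduced data shape:", reduced.shape)
print("cells flagged as anomalous:", int((flags == -1).sum()))
```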

III-C Reinforcement Learning (RL)

The ML approaches under this category can be used to address NM functions that require network parameter control. Differently from the case of SL, RL aims to learn from interaction how to achieve a certain goal. In many real applications, and in particular in sequential decision and control problems, it is not possible to provide explicit supervision to the training (i.e., the right answer to the problem). In these cases, we can only provide a reward/cost function, which indicates to the algorithm when it is doing well and when it is doing poorly. RL has already proven effective in many real-world applications, such as autonomous helicopter flight, network routing and legged robot locomotion [schaal, thrun, littman].

The learner or decision maker is called agent, and it interacts continuously with the so-called environment. The agent selects actions and the environment responds to those actions and evolves into new situations. In particular, the environment responds to the actions through rewards, i.e., numerical values that the agent tries to maximize over time.

The agent has to exploit what it already knows in order to obtain a positive reward, but it also has to explore in order to take better actions in the future. Learning can be centralized in a single agent or distributed across multiple agents. In single-agent systems, ML approaches are capable of finding optimal decision policies in dynamic scenarios with only one decision maker. In multi-agent systems, distributed decisions are made by multiple intelligent decision makers, and optimal solutions or equilibria are not always guaranteed [panait].

The problem is then defined by means of a Markov Decision Process (MDP), i.e., a tuple ⟨S, A, P, R, γ⟩, where S is the set of possible states of the environment, A is the set of possible actions that each decision maker may choose, P(s′ | s, a) is the transition function denoting the probability of reaching state s′ after taking action a in state s, R(s, a) is a reward function, which specifies the expected immediate return obtained by executing action a in state s, and γ is a discount factor, which gives more importance to immediate rewards compared to rewards obtained in the future [suton].

The MDP represents the theoretical basis for the RL framework [suton]. At each time step, the agent implements a mapping from states to probabilities of selecting each possible action. This mapping is the agent’s policy, denoted π, where π(s, a) is the probability of selecting action a when in state s.

The objective of each learning process is to find an optimal policy π* for each state s ∈ S, so as to maximize some cumulative measure of the reward received over time. Almost all RL algorithms are based on estimating a so-called value function, i.e., a function of the states that estimates how good it is for the agent to be in a given state. For MDPs, the state-value function, denoted V^π(s), is the expected return when starting in state s and following policy π thereafter. For more information the reader is referred to [suton].
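For completeness, with the notation introduced above, the state-value function satisfies the standard Bellman expectation equation [suton]:

V^{\pi}(s) = \sum_{a \in A} \pi(s,a) \Big[ R(s,a) + \gamma \sum_{s' \in S} P(s' \mid s, a)\, V^{\pi}(s') \Big],

which is the relation exploited, in different ways, by the model-based and model-free methods discussed next.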

RL literature offers two approaches to solve MDPs. These two approaches are: model-based and model-free.

  1. Model-based. Dynamic Programming (DP) and Monte Carlo (MC) methods fall into the category of model-based approach.

    1. DP is able to solve MDPs by relying on knowledge of the state transition probability between two states after executing a certain action. DP is an algorithmic paradigm that solves a given complex problem by breaking it into sub-problems and storing the results of the sub-problems, to avoid computing the same results again. DP algorithms are based on update rules derived from the Bellman equation. The first key component is the policy evaluation process, which computes how much reward a given policy is expected to collect in the MDP. Policy evaluation is then alternated with policy improvement in order to find the optimal policy, in what is known as the policy iteration process. Finally, value iteration progressively improves the value function by repeatedly applying the Bellman equation. DP is used to solve problems in scheduling, graph algorithms and bioinformatics, among others.

    2. MC methods only require experience, i.e., sample sequences of states, actions and rewards. The estimates are only updated after each episode concludes. Although their application to practical cases is limited, they provide the foundation for other RL methods.

  2. Model-free. Temporal Difference (TD) methods are model-free approaches to solving RL problems. TD learning is a combination of MC and DP ideas. It uses the current estimate of the value function instead of the exact V^π, as is done in DP. If the transition function P is known, we can solve the MDP through DP; otherwise we need to rely on TD methods.

    Some common examples of TD methods are: Q-learning, Sarsa and Actor Critic (AC) [suton]. TD methods can be found in each SON functionality.

    1. Q-learning and Sarsa are based on the estimation of the state-action value function, Q(s, a). Learning is performed by iteratively updating the Q-values, which represent the acquired knowledge of the agent and have to be stored in a representation mechanism. The most intuitive and common representation mechanism is the lookup table, i.e., these TD methods represent their Q-values in a Q-table whose dimension depends on the size of the state and action sets. The difference between them is that Q-learning is an off-policy learner: the agent updates its estimates using the best action in the next state, given its current experience, whereas Sarsa is an on-policy learner. On-policy learners evaluate the same policy π that they use to make decisions, i.e., the policy followed by the agent to select its behaviour in a given state is the same policy used to select the action with which it evaluates that behaviour. (A toy code sketch of the tabular Q-learning update is given after this list.)

    2. AC methods have a separate memory structure to represent the policy independently of the value function. The policy structure is known as the actor, since it is used to select the actions, while the estimated value function is known as the critic. The critic learns and critiques whatever policy is currently being followed by the actor, and its output takes the form of a TD error δ, which is used to determine whether the selected action was a good action or not. δ is a scalar signal, which is the output of the critic and drives the learning procedure. After each action selection, the critic evaluates the new state to determine whether things have gone better or worse than expected.
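To make the discussion above concrete, the following toy sketch implements the tabular Q-learning update with epsilon-greedy exploration. The environment, the state and action spaces and the reward are placeholders (they could, for instance, stand for discretized load levels and CIO steps in an MLB agent) and are not a model of any specific SON use case.

```python
# Toy tabular Q-learning with epsilon-greedy exploration.
# States, actions and the reward are placeholders, not a model of a real SON use case.
import numpy as np

n_states, n_actions = 5, 3
alpha, gamma, epsilon = 0.1, 0.9, 0.1
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(state, action):
    """Toy environment: a real SON agent would instead observe the next state and
    a reward derived from network KPIs after applying a configuration change."""
    next_state = (state + action) % n_states
    reward = 1.0 if next_state == 0 else 0.0
    return next_state, reward

state = 0
for _ in range(5000):
    # epsilon-greedy action selection: explore with probability epsilon, else exploit.
    action = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # Off-policy Q-learning update: bootstrap on the best action in the next state.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(np.round(Q, 2))
```

Replacing the max over the next state's Q-values with the Q-value of the action actually selected by the current policy would turn this off-policy update into the on-policy Sarsa update.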

IV Machine Learning enabled Network Management

As we have mentioned in the introduction of this work, mobile networks constitute a huge source of data which, analyzed with the proper tools, can primarily serve to make more informed decisions when it comes to efficiently managing the overall 4G or 5G network. In this context, ML represents a great opportunity, due to its capability of providing insightful information from the analysis of data already available to operators, which can be used to make improvements or changes.

In this section we focus on how ML can specifically be applied to SON and to novel network management concepts. First, we present the relevant sources of information that can be extracted from a mobile network. All these data are available to operators, but may be sensitive with respect to the users’ privacy; however, some interesting data can be derived from open databases or sniffed from unencrypted control channels such as the PDCCH, and we discuss these options as well. Then, we go through the main SON and network management functions again and classify the main inputs and outputs that would need to be available in the form of data when designing an appropriate ML algorithm for each specific use case, together with the KPIs that would need to be monitored. Finally, we provide an overview of related work on SON and network management in which ML techniques have been adopted, classifying the works as a function of the targeted use case, the specific high-level problem to solve and the ML technique that the authors have chosen to address the problem.

IV-A Data generated by mobile cellular networks

As we observed in [bDemp], a huge amount of data is already generated in mobile networks during normal operations by control and management functions. This kind of data can be exploited to find patterns and extract useful information. This allows more informed decisions to be taken to effectively manage network performance. Some examples of the different sources of information generated by mobile networks, together with the kind of usage currently made by operators and related references of interest, are detailed in Table II.

  1. Charging Data Records (CDR). They are defined in  [TS32.298] and provide a comprehensive set of statistics at the service, bearer and IP Multimedia System (IMS) levels. These records are typically stored for offline processing by the operator. The granularity of this information in the time domain is however quite coarse, as records are generated in correspondence with high-level service events (e.g., start of a call).

  2. Performance management functionality. This data source  [TS32.401] [TS32.425] provides data regarding the network performance and it covers, among others, aspects of the performance of the radio access network, such as, radio resource control and utilization, performance of the various bearers (both on the radio part and in the back-haul), idle and connected mode mobility.

  3. Minimization of Drive Tests (MDT). The data extracted from this source refers to the radio measurements of both idle and connected mode mobility, coverage items, such as, power measurements and radio link failure events, and can be associated with position information of the UE performing the measurement. More information on these data has already been provided also in section II-F.

  4. E-UTRA Control plane protocols and interfaces, such as Radio Resource Control (RRC), S1-AP, X2-AP protocols, are another huge source of information, especially concerning aspects, such as cell coverage, user connectivity, mobility in idle and connected mode, inter-cell interference, resource management, load balancing, among others.

  5. Data plane traffic flow statistics are also a huge source of information, which can be gathered at various points of the network, such as the eNB, the PDN Gateway (PGW) or the Serving Gateway (SGW). The Internet Protocol Flow Information Export (IPFIX) is an example of a standardized format to exchange this kind of statistics [IETFRFC7012].

Source: Charging Data Records (CDR)
  Data: Statistics at the service, bearer and IP Multimedia Subsystem (IMS) levels.
  Usage: These records are typically stored, but only used by customer service. The network operation departments typically do not leverage this information and do not have access to it, much as customer service does not leverage network management data.
  TS: TS 32.298 [TS32.298]

Source: Performance management (data on network performance)
  Data: Covers long-term network operation functionalities, such as Fault, Configuration, Accounting, Performance and Security management (FCAPS), as well as customer and terminal management. An example is the data defined for Operations, Administration, and Management (OAM), which consists of aggregated statistics on network performance, such as the number of active users, active bearers, successful/failed handover events, etc. per BS, as well as information gathered by means of active probing.
  Usage: Currently mostly used for fault identification, e.g., triggering alarms when some performance indicator passes a threshold, so that an engineer can investigate and fix the problem. Typically, the only automatic use of this information is threshold-based triggering, which can be done with very low computational complexity.
  TS: TS 32.401 [TS32.401], TS 32.425 [TS32.425]

Source: Minimization of Drive Tests (MDT)
  Data: Radio measurements for coverage, capacity and mobility optimization, and QoS optimization/verification.
  Usage: Used for identified use cases such as coverage, mobility and capacity optimization, and QoS verification.
  TS: TS 37.320 [TS37.320]

Source: E-UTRA Control plane protocols and interfaces
  Data: Control information related to regular short-term network operation, covering functionalities such as call/session set-up, release and maintenance, security, QoS, idle and connected mode mobility, and radio resource control.
  Usage: Normally discarded after network operation purposes have been fulfilled. Some data can be gathered via the tracing functionality or used by SON algorithms, which normally discard the information after usage.
  TS: TS 36.331 [TS36.331], TS 36.413 [TS36.413], TS 36.423 [TS36.423]

TABLE II: Information elements relevant for ML enabled SONs

All these data are available to the network operators, but in most cases they are not made available to the academic community due to privacy issues and network operators’ interests. There are some exceptions, like the Data for Development (D4D) initiative from the Orange group [D4D], which made anonymized data extracted from Senegal's network available to research laboratories. However, these data are in general aggregated and do not allow deep insight into the operator’s network.

This lack of data represents a great limitation for the advancement of ML based network management research. However, some network data can be derived in other ways. Several public databases are available that provide insight into mobile operators’ networks. Some examples are listed in the following, together with the information that can be extracted from them.

  • opencellid: It contains information on specific cells, such as: network type (GSM, UMTS, LTE), Mobile Country Code (MCC), Mobile Network Code (MNC), Location Area Code (LAC) for GSM and UMTS, Tracking Area Code (TAC) for LTE, Cell ID (CID) for GSM and LTE networks, Primary Scrambling Code (PSC) for UMTS networks, Physical Cell ID (PCI) for LTE networks, longitude and latitude in degrees, an estimate of the cell range in meters, the total number of measurements collected from the tower, a flag indicating whether the coordinates of the cell tower are exact or estimated, the dates when the cell tower was first added to the database and last updated, and the average signal strength over all measurements received from the cell, in dBm or as defined in [TS27.007]. This database also receives funding from important vendors such as Qualcomm [opencellID] and offers forms of free access to portions of the data for academic purposes.

  • opensignal: It offers information on achievable data rates, latency and availability per operator, but no information per cell tower [opensignal].

  • antenasgsm: It offers information on maps and positions of cells, with added information on the operator and the assigned bandwidth [antenasgsm].

  • Google geolocation API: It allows queries based on the cell ID to obtain cell-related information, such as latitude and longitude, and equivalent information for WiFi Access Points (AP) [googlegeo].

The information provided by these databases is valuable, but still does not give sufficient insight into the behaviour of the network, and mainly offers an overview of the coverage provided by the single operators. To get more information, we can directly access the unencrypted PDCCH and extract the information exchanged between the users and the associated eNB. In particular, it is possible to build a sniffer, like the one described in [BuiOWL], to collect the raw communication traces exchanged between the users and the associated eNodeB. This gives access not only to aggregate base station statistics, but also to more valuable information derived from the radio protocols, such as the resource block allocation and the link adaptation mechanism of the system. In particular, the OWL sniffer [BuiOWL] is an online decoder of the LTE control channel, which uses a Software Defined Radio (SDR) to send the raw LTE signal to a PC running the decoding software. This open-source software is capable of reliably logging the LTE Downlink Control Information (DCI) broadcast by base stations. In fact, LTE uses an unencrypted control channel to assign network resources to users for both downlink and uplink communications. Resources are assigned to devices through their Radio Network Temporary Identifiers (RNTIs) every millisecond, specifying the number of Resource Blocks (RBs) and the Modulation and Coding Scheme (MCS) to be used. There are works in the literature using this sniffer to collect and analyze traces from different European cities [Trinh17].

Finally, let us review the main SON use cases in Table III, by analyzing the main input information that their design would require, in terms of data, together with the main identified output actions and meaningful associated KPIs.

SON function: Mobility Load Balancing (MLB)
  Inputs: X2 resource status and load estimation information.
  Output actions: Tuning the CIO, i.e. offsets of serving and neighbour cells to satisfy handover conditions.
  KPIs: Improved QoS and capacity.

SON function: Mobility Robustness/Handover Optimisation (MRO)
  Inputs: S1AP and X2AP handover requests, handover reports, RLF reports and indications.
  Output actions: A3 offsets, TTT, L1 and L3 filter coefficients in connected mode, and Qoffset in idle mode.
  KPIs: Minimized call drops, RLFs and ping-pong effects.

SON function: Coverage and Capacity Optimization (CCO)
  Inputs: UE measurements.
  Output actions: Transmission power, pilot power, antenna parameters, coordinated Almost Blank Subframes (ABS).
  KPIs: Maximized coverage and cell and edge throughput.

SON function: Inter-Cell Interference Coordination (ICIC)
  Inputs: HII, RNTP, OI, UE measurements.
  Output actions: Transmission power, pilot power, antenna parameters, coordinated ABS.
  KPIs: Minimized inter-cell interference.

SON function: Cell Outage Compensation (COC)
  Inputs: UE measurements.
  Output actions: Transmission power, antenna parameters of neighbouring cells.
  KPIs: Minimized outage.

SON function: Energy Saving (ES)
  Inputs: Resource status, UE measurements.
  Output actions: Switch ON and OFF policies.
  KPIs: Minimized energy consumption.

TABLE III: SON inputs, outputs and KPIs

IV-B Overview of relevant literature on ML based network management

This section reviews recent SON and network management work in the area of ML. We go through each main function and use case and review the significant literature and the ML approaches that have been used to address each problem. Table IV summarizes the main works in this area and classifies them per 3GPP use case, ML technique and the specific algorithm adopted by the authors.

  1. Use case: Indicates the 3GPP targeted use case.

  2. Reference: Indicates the reference of the related work.

  3. Technique: Indicates the applied ML method (Supervised Learning, Unsupervised Learning, Reinforcement Learning).

  4. Problem: Indicates the general problem to solve.

  5. Algorithms: Indicates the specific ML algorithm applied to the data (see Table IV).

Reference | ML technique | Problem | Algorithm

Self-configuration, PCI:
  [Peng13] | UL | Planning | Clustering

Self-optimization, MLB:
  [stephenMLB] | RL | Control optimization | Q-learning
  [munoz2] | RL | Control optimization | Q-learning
  [mlb] | RL | Control optimization | Fuzzy Q-learning
  [junishi] | RL | Control optimization | Dynamic Programming
  [emil] | UL | Grouping | K-means clustering
  [Franco15] | SL | Prediction | Multivariate polynomial regression

Self-optimization, MRO:
  [qin] | RL | Control optimization | Q-learning
  [stephenMLB2] | RL | Control optimization | Q-learning
  [HOpablo] | RL | Control optimization | Fuzzy control
  [mro, mrosinclair] | UL | Pattern identification | SOM
  [Farooq17] | UL | Prediction | Semi-Markov model
  [Ali16, Ostlin04, Quintero04, Majumdar05] | SL | Prediction | ANN

Self-optimization, CCO:
  [RAZAVI] | RL | Control optimization | Fuzzy Q-learning
  [naseer] | RL | Control optimization | Fuzzy Q-learning
  [jingyu] | RL | Control optimization | Fuzzy Q-learning
  [ccoEURASIP] | UL, RL | Control optimization | Fuzzy ANN/Q-learning

Self-optimization, ICIC:
  [Galindo] | RL | Control optimization | Q-learning
  [dirani] | RL | Control optimization | Fuzzy Q-learning
  [blascoICIC, simsek] | RL | Control optimization | Q-learning

Self-optimization, ES:
  [miozzo] | RL | Control optimization | Q-learning
  [annaCAMAD] | UL | Decision making | Fuzzy logic
  [es1, es2] | UL | Grouping, pattern identification | Clustering

Self-healing, COC:
  [jmoysenSH] | RL | Control optimization | Actor Critic
  [onireti] | RL | Control optimization | Actor Critic
  [ARSALAN] | SL | Control optimization | Fuzzy logic

Self-healing, COD:
  [fedor] | UL | Anomaly detection | Diffusion Maps
  [khabib] | SL | Anomaly detection | Fuzzy logic
  [RANA] | SL/UL | Diagnosis | Naive Bayesian
  [emcoc] | SL | Anomaly detection | SVM, Ensemble methods
  [onireti] | SL/UL | Anomaly detection | k-NN, local outlier factor
  [Alias16] | UL | Grouping, pattern identification | Hidden Markov Model
  [Xue14, Zoha14, Chernov14] | SL | Fault detection | k-NN
  [Barco05, Khanafer08] | SL | Diagnosis | Naive Bayesian

Self-coordination:
  [hafiz] | SL | Classification | Decision Trees
  [Berna_coord] | RL | Control optimization | Actor Critic
  [lbHo] | RL | Control optimization | Q-learning
  [jmoysenEurasip] | RL | Control optimization | Actor Critic

Minimization of Drive Tests:
  [chernogorov12, chernogorov13] | SL | Verification/estimation | Linear correlation
  [jmoysenCAMAD] | SL | Prediction | Regression models
  [jmoysenISCC] | SL/UL | Prediction/curse of dimensionality | Regression models/Dimensionality reduction
  [jmoysenPIMRC, jmoysenHindawi] | SL | Prediction | Bagged-SVM/Dimensionality reduction

Core Networks:
  [balint] | SL | Prediction | Adaboost, SVM

TABLE IV: Related work

IV-B1 Mobility Load Balancing

The literature offers several examples of the application of ML techniques to the MLB use case. The majority of them fall in the area of RL, as the main problem to solve is a sequential decision problem about how to set configuration parameters that optimize network performance and user experience. An example of an RL application to the MLB use case can be found in [stephenMLB]. Here the authors present a distributed Q-learning approach that learns, for each load state, the best MLB action to take, while also minimizing the degradation in HO metrics. An option that also takes advantage of the fuzzy logic capability of dealing with heterogeneous sources of information is provided in [munoz2], where fuzzy logic is combined with Q-learning to target the load balancing problem. For similar reasons, fuzzy logic is also proposed in [mlb] to enhance network performance by tuning HO parameters at the adjacent cells. Approaches incorporating fuzzy logic with RL capabilities have the advantage of capturing the uncertainty existing in real-world complex scenarios, while schemes considering only learning approaches may be limited by the fixed variable definition. When combining fuzzy logic with RL, the subjectivity with which the fuzzy variables may be defined is also overcome by the adjusting capabilities of the learning. Alternatively, a centralized solution is proposed in [junishi], where a central server in the cellular network determines all HO margins among cells by means of a dynamic programming approach. Besides RL, clustering schemes have also been proposed in this area, to group cells with similar characteristics and provide similar configuration parameters for them [emil]. Considering clustering in large realistic scenarios is an added value to reduce computational complexity and take advantage of what is learnt in other regions of the network with similar environment characteristics.

IV-B2 Mobility Robustness Optimization

Also for the case of MRO, we find in the literature different solutions based on RL to solve a control decision problem. In [qin, stephenMLB2], the authors focus on the optimization of the users’ experience and of the HO performance. In [qin] the authors take advantage of the Q-learning approach to effectively reduce call drop rates, whereas in [stephenMLB2], unlike other solutions that assume generally constant mobility, the authors adjust the HO settings in response to mobility changes in the network by means of distributed cooperative Q-learning. Differently from [qin, stephenMLB2], in [HOpablo] the authors also take advantage of fuzzy logic capabilities. These solutions are based on control optimization of HO parameters through RL, so they are similar to those found in the MLB literature, and the same considerations apply about the advantage of fuzzy logic in gaining flexibility in the uncertain and complex real network context. Other approaches, in turn, address the problem by identifying successful HO events through solutions based on unsupervised learning. In particular, the works of [mro] and [mrosinclair] propose an approach to HO management based on UL and SOM analysis. The idea is to exploit the experience gained from the analysis of network data, based on the angle of arrival and the received signal strength of the user, to learn the specific locations where HOs have occurred and decide whether to allow or forbid certain handovers to enhance network performance. These solutions enable self-tuning of HO parameters to learn optimal parameter adaptation policies. Similarly, in [Farooq17] the authors exploit the huge amount of information generated in the network to predict the user traffic distribution; in particular, they take advantage of a semi-Markov model for spatiotemporal mobility prediction in cellular networks. Finally, the works in [Ali16, Ostlin04, Quintero04, Majumdar05] propose schemes to make predictions about UE mobility, which allows smart HO decisions to be anticipated.

IV-B3 Coverage and Capacity Optimization

In the case of CCO, different approaches in the literature focus on RL solutions based on continuous interaction with the environment, oriented to the online adjustment of antenna tilts and transmission power levels through TD learning. In [RAZAVI] and [naseer], a fuzzy Q-learning approach is proposed to optimize the network by learning the optimal antenna tilt control policy, and a similar approach is also followed in [jingyu] and [ccoEURASIP]. In addition, they propose to combine fuzzy logic with Q-learning in order to deal with continuous input and output variables. [jingyu] also proposes a central control mechanism, which is responsible for initiating and terminating the learning optimization process of every learning agent deployed in each eNB. Finally, [ccoEURASIP] innovates with respect to other approaches since, in order to adjust the antenna tilt and transmission power parameters, it considers the load distribution of the different cells involved in the optimization process, and introduces novel mechanisms to facilitate cooperative learning among the different SON entities.

IV-B4 Inter-cell Interference Coordination

Similarly to the CCO case, ML has been proposed in the ICIC literature as a valid solution, where RL is the principal tool used, with special emphasis on TD methods, in order to optimize control parameters. Several works target the problem of minimizing the interference among cells by using the most common TD learning method, Q-learning [Galindo, dirani, blascoICIC, simsek]. The work in [Galindo] addresses the control of inter-cell interference in a heterogeneous femto-macro network. The work combines information handled by the multi-user scheduler with decisions taken by a learning agent based on Q-learning, which tries to control the cross-tier interference per resource block. [dirani] proposes a distributed solution for ICIC in OFDMA networks based on a fuzzy Q-learning implementation. The proposed solution achieves a joint improvement for all users, i.e., the improvement of users with bad quality does not come at the expense of users with good quality. Moreover, a decentralised Q-learning framework for interference management in small cells is proposed in [blascoICIC]. The authors focus on a use case in which the small cell networks aim to mitigate the interference caused to the macro-cell network, while maximizing their own spectral efficiencies. Finally, in [simsek] a decentralized Q-learning approach for interference management is also presented. The goal is to improve the system performance of a macro-cellular network overlaid by femto-cells. In order to improve the convergence time, a mitigation approach is introduced, which provides significant throughput gains for both macro and femto users. Interesting trade-offs can be studied to compare centralized vs. distributed solutions. In the novel context of small cells, distributed solutions to interference management are to be preferred over more complex centralized solutions, but convergence and stability issues may affect the TD learning schemes, compromising system performance [Galindo].

IV-B5 Energy Savings

Energy saving schemes for wireless cellular systems have been proposed in the past, enabling cells to go into a sleep mode in which they consume a reduced amount of energy. In order to reduce the energy consumption of the eNBs, we can find several works relying on ML techniques. An example can be found in [miozzo], where the authors take advantage of RL to propose a decentralized Q-learning approach that enables energy savings by learning a policy through interactions with the environment, taking into account different aspects over time, such as the daily solar irradiation. Also, in [annaCAMAD], the authors switch off some underutilized cells during off-peak hours. The proposed approach optimizes the number of base stations in dense LTE pico cell deployments in order to maximize the energy saving. For this purpose, they use a combination of Fuzzy Logic, Grey Relational Analysis and Analytic Hierarchy Process tools to trigger the switch-off actions, and jointly consider multiple decision inputs for each cell. This last work uses smart decision theory approaches, which however are not able to take advantage of the previous decisions made in the same environment, as the work proposed in [miozzo] does thanks to its TD learning approach; this makes the solution in [miozzo] more solid, since past information is also considered in the decision. Also for HetNets we find several works, such as [es1, es2], where the authors take advantage of the KPIs available in the network to build different kinds of databases and analyse the potential gains that can be achieved in clustered small cell deployments.

IV-B6 Cell Outage Compensation

The literature already offers different works targeting the COC problem. For this use case, RL has been proven a valid solution, since it is a continuous decision making/control problem. In this context, a contribution in the area of self-healing has been presented in [jmoysenSH, onireti], where the authors present a complete solution for the automatic mitigation of the degradation effect of the outage by appropriately adjusting suitable radio parameters of the surrounding cells. The solution consists of optimizing the coverage and capacity of the identified outage zone by adjusting the antenna gain, through the electrical tilt, and the downlink transmission power of the surrounding eNBs. To implement this approach, the authors propose an RL scheme based on actor-critic theory, to take advantage of its capability of making online decisions at each eNB, and of providing decisions that adapt to the evolution of the scenario in terms of mobility of users, shadowing, etc., and to the decisions made by the surrounding nodes to solve the same problem. A COC contribution also based on ML is presented in [ARSALAN], where fuzzy logic is proposed as the driving technique to fill a coverage gap. The authors show performance gains by adjusting different parameters, such as the transmission power, the antenna tilt, and a combination of the two. These two works are compared in [jmoysenSH], and the approach in [jmoysenSH] proves superior thanks to the ability to learn from past experience introduced by the RL actor-critic approach.

IV-B7 Cell Outage Detection

As we already mentioned, COD aims to autonomously detect cells that are not operating properly due to possible failures. For this kind of problem, anomaly detection algorithms offer an interesting solution that allows outlier measurements to be identified, which may be highlighting a hidden problem in the network. Proposals addressing this problem can be found in [fedor] and [khabib]. In particular, [fedor] presents a solution based on diffusion maps and clustering schemes, capable of detecting anomalous behaviours generated by a sleeping cell. [khabib] presents a solution based on fuzzy logic for the automatic diagnosis of a troubleshooting system; in order to determine whether there is a failure, the authors propose a controller which receives as inputs a set of representative KPIs. A similar approach is presented in [RANA], where the authors present an automated diagnosis model for Universal Mobile Telecommunications System (UMTS) networks based on a Naive Bayesian classifier, and where the model uses both network simulator and real UMTS network measurements. In the context of this kind of classifier, the works in [Barco05, Khanafer08] also take advantage of Naive Bayes for automated diagnosis based on different network performance inputs. The work in [emcoc] addresses both the case of outage and the case where the cell can still provide a certain level of service, which however does not fulfil the expected UE requirements. The approach relies on ensemble methods trained on KPIs extracted by human operators to make informed decisions. In [turkka3], the authors consider large data sets to identify anomalously behaving base stations. They propose an algorithm consisting of preprocessing, detection and analysis phases. The results show that, by using dimensionality reduction and anomaly detection techniques, irregularly behaving base stations can be detected in a self-organized manner. In [onireti], data gathered through MDT reports is used for anomaly detection purposes. Furthermore, the works of [Xue14, Zoha14, Chernov14] take advantage of the k-NN algorithm to propose a self-healing solution, in particular to tackle the fault detection domain. Finally, in [Alias16] the authors consider a HetNet and take advantage of HMMs to automatically capture the dynamics of four different states and probabilistically estimate whether a failure exists.

IV-B8 SON Conflicts Coordination

As the deployment of stand-alone SON functions increases, the number of conflicts and dependencies between them also increases. Hence, an entity has been proposed for the coordination of this kind of conflict. In this context, the current literature includes several works based on ML. In [hafiz] the authors focus on the classification of potential SON conflicts and on discussing valid tools and procedures to implement a solid self-coordination framework. Q-learning, as an RL method, has been proposed in [Berna_coord] to take advantage of the experience gained in past decisions, in order to reduce the uncertainty associated with the impact of the SON coordinator decisions when picking one action over another to resolve conflicts. In [lbHo], the authors use Q-learning to deal with the conflict resolution between two SON instances. Decision trees have been proposed in [policy] to properly adjust Remote Electrical Tilt (RET) and transmission power. Additionally, in [jmoysenEurasip] the authors provide a functional architecture that can be used to deal with the conflicts generated by the concurrent execution of multiple SON functions. They show that the proposed approach is general enough to model all the SON functions and their derived conflicts. First, they introduce these SON functions in the context of the general SON architecture, together with high-level examples of how they may interfere. Second, they define the state and action spaces of the global MDP that models the self-optimization procedure of the overall RAN segment. Finally, they show that the global self-optimization problem can be decomposed into as many Markov decision sub-processes as SON functions.

IV-B9 Minimization of Drive Tests

The great majority of the literature using the MDT functionality takes advantage of supervised and unsupervised learning techniques to provide solutions for the different use cases. An example can be observed in [chernogorov12, chernogorov13], where the authors address QoS estimation by selecting different KPIs and correlating them with common node measurements, to establish whether a UE is satisfied with the received QoS. A similar objective is targeted in [jmoysenCAMAD]; however, differently from the previous works, here the authors focus on multi-layer heterogeneous networks, i.e., a more complex and realistic scenario than the traditional macrocell one. In particular, they present an approach based on regression models, which allows QoS to be predicted for UEs in heterogeneous networks, independently of the physical location of the UE. This work is extended in [jmoysenISCC] by taking into account the most promising regression models, but also analysing dimensionality reduction techniques. By performing PCA/SPCA on the input features, and promoting solutions in which only a small number of input features capture most of the variance, the number of random variables under consideration is reduced. Based on these results, in [jmoysenPIMRC, jmoysenHindawi] the same authors define a methodology to build a tool for smart and efficient network planning, based on QoS prediction derived from proper data analysis of UE measurements in the network.

Moreover, the work in [matiasCCO] presents a system based on a fuzzy logic controller to improve network performance by adjusting antenna tilt values in an LTE system. Differently from previous works, the authors consider the use of call traces to identify the level of coverage, overshooting and overlapping problems, which are the inputs to the algorithm. Also, in [matias], the same authors take advantage of connection traces (signal strength, traffic and resource utilization measurements) to improve the network infrastructure in terms of spectral efficiency. The proposed method is designed to be integrated into a commercial network planning tool. Finally, in [AnaMariaMDT] the authors take advantage of MDT measurements to build a Radio Environment Map (REM) by applying spatial interpolation techniques (Bayesian kriging). The REM is then used to detect coverage holes and predict the shape of those areas.

IV-B10 Core Networks

As we already mentioned in section II, the operational aspects of core network elements can be enhanced through, for example, the automatic configuration of the neighbour cell relations function. In this regard, the idea of applying ML to this function is not new. In [balint] the authors study the benefits of using ML for root-cause analysis of session drops, as well as for drop prediction for individual sessions. Using real LTE data, they present an offline AdaBoost and SVM method to create a predictor, which is in charge of eliminating/mitigating session drops.

IV-B11 Virtualized and Software Defined Networks

Also when we go beyond the RAN and focus on the network in general, ML concepts have already been proposed in different works to build cognition-based techniques to operate the network. An example of these proposals is well summarized by [Clark13]. In this work, a Knowledge Plane is advocated, which would bring many advantages to networks in terms of how they are operated, automated, optimized and troubleshot. Conceptually, this vision is aligned with several other proposals in other areas, such as black-box optimization [Rios2013], the autonomic self-x architectures [Derbel2009], or the work presented in [Zorzi16]. In this context, the work in [Mestres16] analyzes the reasons why the vision proposed in [Clark13] has still not been brought to reality, and finds the main reason in the challenges that appear when it comes to autonomously managing a network in a distributed fashion. In particular, the work argues that the emerging trend of centralization in control, brought by the novel SDN vision, will significantly reduce this complexity and favour the realization of the ML vision in the network. As a result, in [Mestres16] some initial experimental results based on the vision defined in [Clark13] are brought to reality in the context of an SDN based architecture. Further work in this area is carried out in the context of different European H2020 projects [COGNET]. The work in [Yahia17] presents a novel cognitive management architecture that manages multiple use cases, like the Service Level Agreement (SLA) and the Mobility Quality Predictor; both use cases are tackled using machine learning approaches, namely Long Short-Term Memory networks and a per-user bandwidth predictor. The work in [Bendriss17] implements SLA enforcement through ML approaches: it uses an ANN for the evaluation of cognitive SLA enforcement of networking services involving Virtualized Network Functions and SDN controllers.

V Challenges for future work

In this section, we focus on some open challenges that still need to be addressed when it comes to making ML based network management a reality.

V-A Real data

It is possible to find databases related to signal and coverage data [opensignal, D4D], built by using/designing applications that collect information such as Reference Symbol Received Power (RSRP) and Reference Symbol Received Quality (RSRQ). However, it is not easy to find contributions analysing real network management data. Some work can be found in the context of 3G networks but, currently, in the context of 4G networks it is very hard to find works considering real data [european, laner]. These works, moreover, do not analyse the data through ML techniques to extract experience from them. We consider it extremely important for this research line, in order to get to the next level, to gain access to operators' network data. An alternative to real data could be to sniff data from unencrypted LTE control channels, as we have shown in [Trinh17], or to use a high-fidelity network simulator, such as the ns-3 LTE/EPC Network Simulator (LENA) module, to generate realistic data [lena]. This simulator has been built around the industrial Application Programming Interfaces (API) defined by the Small Cell Forum and offers high-fidelity models from the Media Access Control (MAC) to the application layer. It has also been designed with the requirement to simulate tens of eNBs and hundreds of UEs, and to specifically test Radio Resource Management (RRM) and SON algorithms. Consequently, it could be a very useful tool to build realistic scenarios based on information available in public databases, generate data to analyse, build algorithms based on this analysis, and close the loop on the simulator to test the designed algorithms. In this context, it is also hard to find contributions where ML approaches are used not only in network simulators, but in real networking products. In general, it seems vendors are reluctant to test algorithms whose behaviour is not predictable. An important research line is then how to find or generate meaningful network data, and how to find patterns in them to understand which aspects of the network should be optimized.

This research line additionally faces important privacy and confidentiality issues. It is important to ensure that the data used are properly anonymized. As mentioned in section IV, data come from different sources of the network, but can also be offered by third parties, e.g., data generated by the user, open data, sensor data, among others. Therefore, coming up with a unified privacy policy is extremely challenging at the security and privacy levels, due to the variety and the granularity of the data. If we add to this the speed at which data are created and need to be analysed, the security challenges are huge. In this context, big data is changing security analytics, and robust and scalable privacy-preserving mining algorithms are critical to ensure that the most sensitive private data are secure. As a result, privacy-preserving data mining is a challenging research line that has to be investigated. In particular, in order to guarantee privacy protection, it is important to define the privacy requirements taking into account the lifecycle of the analytics. For example, in the data collection phase, it is important to identify the personal data needed for processing; the idea is to extract only the data needed for the specific purpose. Aggregated information can also be used instead of personal data. In this context, one of the most relevant techniques is anonymization, which is the process of modifying personal data in such a way that no identification is possible. Regarding the data analysis phase, different privacy models are available in the context of big data analytics, where two of the most important families are k-anonymity and differential privacy. A more detailed review of the aforementioned methods can be found in [enisa]. Moreover, in order to protect personal data in databases (data storage phase), encryption is a fundamental security technique, which transforms data in a way that only authorized parties can read. For more information, the reader is referred to [enisa, Khairulliza16].
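As a purely illustrative sketch of the differential privacy idea mentioned above, the classic Laplace mechanism adds noise calibrated to the query sensitivity before a statistic is released; the epsilon value and the counter in the example are hypothetical and not taken from any of the works cited here.

```python
# Minimal sketch of the Laplace mechanism for releasing an aggregated counter.
import numpy as np

def laplace_count(true_count, epsilon, rng):
    """Release a counting query with epsilon-differential privacy.
    A counting query has sensitivity 1, so the noise scale is 1/epsilon."""
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

rng = np.random.default_rng(0)
true_count = 1342                 # hypothetical, e.g. RLF events in a cell over one day
print(laplace_count(true_count, epsilon=0.5, rng=rng))
```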

V-B Big Data and Deep Learning

Deep learning is a recent trend in ML that allows computer systems to improve with experience and data. It achieves great power and flexibility in complicated real-world environments by learning to represent the world through a nested hierarchy of concepts. The ML algorithms that we have reviewed in this paper have a strong dependency on the features on which they are applied. Because of that, much effort has been devoted to designing ML algorithms that yield useful representations. This is known as representation learning, and deep learning is one way of learning representations [Bengio13, DL2016]. The archetypal deep learning model is the Multilayer Perceptron (MLP), a multilayer neural network mapping sets of input values to output values, in which each layer learns a progressively more abstract representation of its input. Deep learning is able to train such models successfully from the bottom layers to the higher ones, by applying computational models composed of multiple levels of representation and abstraction that help make sense of the data.
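As a minimal sketch of this layered structure (the layer sizes, the input dimension and the choice of PyTorch are arbitrary assumptions for illustration, not tied to any surveyed work):

```python
# Minimal sketch of a multilayer perceptron, the basic deep learning building block.
import torch
from torch import nn

mlp = nn.Sequential(               # nested layers = nested levels of representation
    nn.Linear(10, 64), nn.ReLU(),  # first hidden layer learns low-level features
    nn.Linear(64, 32), nn.ReLU(),  # deeper layers compose them into more abstract ones
    nn.Linear(32, 1),              # output, e.g. a predicted KPI
)

x = torch.randn(8, 10)             # batch of 8 synthetic input vectors
print(mlp(x).shape)                # torch.Size([8, 1])
```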

Historically, deep learning has become more useful as the amount of available training data has increased (big data). The research on deep learning has also benefited from advances in computing infrastructure, at both the hardware and software levels. All this has enabled deep learning to solve increasingly complicated applications with increasing accuracy over time. The potential of these techniques in the area of NM, provided that the big data associated with the management of the complex 4G/5G network ecosystem is made available, is still to be evaluated and remains open for research.

V-C Theoretical research

With respect to online control decision problems that allow RRM/SON decisions to be taken continuously, we are aware of some approaches that take advantage of reinforcement learning to solve this problem. The current approach is to use single-agent algorithms and extend them to multi-agent settings. However, this kind of algorithm requires a considerable amount of time to find a solution, and this time increases with the size of the state and action spaces. So, reinforcement learning approaches dealing with these issues have to be investigated. Moreover, no proof of convergence is available demonstrating that this approach actually reaches meaningful conclusions. Even though the ML literature offers different algorithms that can find interesting solutions (e.g. NashQ [nashQ]), the space of possible solutions is so big that this kind of approach is not feasible in a realistic network where the time constraints of the RRM/SON problem have to be met. So, more research is needed in the area of multi-agent systems that are also compatible with real network requirements.

In the context of data analytics, it is well known that the analysis of the data requires a substantial amount of “black art”, and consequently it requires the availability, within research groups, of multi-disciplinary researcher profiles knowledgeable in information technology, computer science and telecom engineering, to properly optimize the network. In this context, ML trends like deep learning can be very useful; however, little work can be found applying these new promising techniques to network management [zorzi].

V-D Network management of multi-technologies networks and of future New Radio

Autonomous network management of multi-technology networks, where heterogeneous networks including different Radio Access Technologies (RAT) or different layers of the network coexist, e.g., Wi-Fi, mmWave, the mobile network layer, the transport layer, among others, is still immature. However, these scenarios will tend to emerge with the advent of the unlicensed spectrum paradigm and with technologies such as LAA or New Radio (NR), the new radio access technology for 5G, which is currently under definition in 3GPP. NR, in particular, will be defined to work over a wide range of spectrum opportunities, ranging from sub-6 GHz up to mmWave spectrum, and under multiple spectrum paradigms, such as licensed, unlicensed and shared. The opportunities for autonomous network management in this area are huge. ML has still not been exploited to handle these networks with intelligence and self-awareness. In particular, the management of densified architectures, heterogeneous in both technologies and layers, requires the evolution of complex SON concepts, which have traditionally been designed and standardized for LTE based networks. Also, self-organization in the context of the NR technology is still to be completely defined. Before reaching this vision, multiple challenges need to be addressed, e.g. the self-coordination problem and the resolution of conflicts among SON functions executed in different nodes or networks, which put the network at risk of instability, or the most appropriate location of SON functions and algorithms, so as to properly solve the distributed vs. centralized SON implementation issue. Many aspects have to be considered when locating and designing a SON function, e.g. response time, complexity, size of databases, computational capability of nodes, etc. Centralized (i.e. a large number of cells is involved), distributed (approximately two cells are involved, coordinating through X2) and local (only one cell is involved) implementations of SON functions have been proposed. No architecture can be claimed superior to the others. The growing complexity, dynamicity and heterogeneity of 5G networks will substantially increase the number of scenarios to solve. So, there is a need to exploit the complementarity of these implementations by virtualizing them and deploying them dynamically.

V-E Network management of novel softwarized and virtualized architectures

To benefit from all the opportunities offered by centralized, distributed and local implementations, and given the need to virtualize resources in order to reduce network costs while meeting the stringent requirements of the new service verticals, there is a need to further study autonomous NFV and SDN architectures, where end-to-end SON functions, aimed at tackling the main radio access and backhauling challenges of extremely dense deployments, are virtualized and run over general-purpose hardware. This infrastructure is to be managed by an orchestrator entity (in coordination with the corresponding virtual network function and virtual infrastructure managers), as proposed in the European Telecommunications Standards Institute (ETSI) architecture. This orchestrator or SDN controller is the brain of the network and needs the ability to adapt to ever-changing conditions. The network should not only react to failures, but also adapt to the demand and predict it, based on data analytics, thereby facilitating the task of network management. Research on deep reinforcement learning implementations of the orchestrator will allow the controller to self-learn after every decision. Automation will also require all the advancements of the Information Technology sector, with increased computational capacity, more CPUs and memory space. However, future orchestrators will need to handle a huge amount of data and learn smart network management decisions from them through novel deep learning approaches. This research line is still highly immature and requires a lot of effort.

VI Conclusions

In this work we have motivated the need for ML to be considered a crucial and inevitable tool to address automation, self-awareness and self-organization in current and future mobile networks. SON features have been considered fundamental in the definition of LTE and have been part of this technology since its very beginning in Release 8. We believe that this need for automation will be further reinforced by the complexity that future 5G network management is expected to handle. On top of that, we have shown that current cellular networks already generate a huge amount of data that, if properly stored and managed, could bring new insights into how the networks work and offer new ways of improving network management based on the experience that can be gained from these data. We have reviewed the main taxonomy of machine learning and the novel trends that could make this exploitation of data to gain insight into the network a reality. Also, we have discussed open data options, as well as alternatives to obtain data from the networks, which are otherwise not made available to the academic community. With these motivations in mind, we have started by reviewing the main concepts and taxonomy of SON, network management and ML, and we have reviewed significant academic literature in the area of network management, focusing only on solutions based on ML. The work has reviewed the status of this exciting research line, while at the same time highlighting the open challenges that need to be dealt with in order to make future autonomous network management a reality.

VII Competing Interests

The authors declare that there is no conflict of interest regarding the publication of this paper.

VIII Acknowledgment

The research leading to these results has received funding from the Spanish Ministry of Economy and Competitiveness under grant TEC2014-60491-R (Project 5GNORM). This work also was supported by the Spanish National Science Council and ERFD funds under Project TEC2014-60258-C2-2-R.

References
