SDN-Based Resource Allocation in MPLS Networks: A Hybrid Approach


Mohammad Mahdi Tajiki, Behzad Akbari, Nader Mokari, Luca Chiaraviglio

Luca Chiaraviglio is with the ECE Department, University of Rome Tor Vergata, Italy. E-mail: luca.chiaraviglio@uniroma2.it
M. M. Tajiki, B. Akbari, and N. Mokari are with the ECE Department, Tarbiat Modares University, Tehran, Iran. E-mail: {mahdi.tajiki,b.akbari,nader.mokari}@modares.ac.ir
Abstract

The highly dynamic nature of current network traffic pushes network managers to exploit the flexibility of the state-of-the-art paradigm called SDN. Consequently, there has been increasing interest in hybrid SDN-MPLS networks. In this paper, a new traffic engineering architecture for SDN-MPLS networks is proposed. To this end, OpenFlow-enabled switches are deployed at the edge of the network to improve flow-level management flexibility, while MPLS routers form the core of the network to make the scheme applicable to existing MPLS networks. The proposed scheme re-assigns flows to Label-Switched Paths (LSPs) to highly utilize the network resources. In cases where flow-level re-routing is insufficient, the proposed scheme recomputes and re-creates the affected LSPs. To this end, we mathematically formulate two optimization problems: i) flow re-routing and ii) LSP re-creation, and we propose a heuristic algorithm to improve the performance of the scheme. Our experimental results show that the proposed hybrid SDN-MPLS architecture is superior to traditionally deployed MPLS networks in terms of traffic engineering.

Keywords: SDN, MPLS, Software Defined WAN, OpenFlow, PCE/PCEP, Hybrid Networks.

1 Introduction


Service providers around the world have made large investments in highly sophisticated and feature-rich MPLS network infrastructures for providing services to their customers. These infrastructures are built on traditional network equipment (combined data plane and control plane) that is costly to scale, complex to manage, and time-consuming to reconfigure. Network Function Virtualization (NFV), cloud computing, and the proliferation of connected devices are leading to exponentially increasing traffic and significant fluctuations in usage patterns. These pressures push network operators to move to agile architectures that support dynamic reconfiguration of both services and network infrastructures [1]. For service providers, these capabilities provide new revenues, reduce time to market, increase new-service uptake, and enhance their ability to meaningfully differentiate their offerings [2].

The state-of-the-art paradigm called SDN [3], along with the OpenFlow protocol [4], provides many new traffic management features [5]. This makes it a suitable and widely adopted technology for data center networks. One of the most important benefits of employing OpenFlow is its ability to route/re-route traffic flows based on the network traffic pattern; in other words, it optimally routes/re-routes the traffic at flow-level granularity. Therefore, many novel works focus on traffic engineering in pure OpenFlow networks [6, 7, 8, 9, 10]. However, migrating carrier networks, which are mostly MPLS-based, to OpenFlow-based networks is challenging and highly expensive.

To circumvent the aforementioned challenges, we propose a novel traffic engineering architecture that integrates OpenFlow and traditional MPLS. This architecture is motivated by scenarios where SDN is to be deployed in an existing network. In such a network, some parts of the traffic are controlled by the SDN controller, while other parts of the network use the existing routing protocol. In other words, we consider traffic engineering in the case where an SDN controller controls only a few SDN forwarding elements and the rest of the network performs hop-by-hop routing using the MPLS protocol. The objective is to propose a traffic engineering algorithm for the integration of MPLS and OpenFlow networks that can adaptively and dynamically manage traffic to accommodate different traffic patterns. To this end, the network traffic is monitored to obtain the current traffic matrix. Thereafter, based on the current traffic matrix and the knowledge base of previous demands, the controller computes LSPs and assigns the flows to each LSP (at the edge layer: the OpenFlow-enabled switches). Our main contributions are as follows:

  • A new traffic engineering architecture for MPLS-OpenFlow hybrid networks is proposed.

  • We mathematically formulate two optimization problems: a) the problem of LSPs re-configuration in MPLS networks when there is a central controller as the PCE element, and b) the problem of flow-level resource re-allocation.

  • In order to improve the performance of the solution, a heuristic algorithm for the problem of flow-level resource re-allocation is proposed.

The remainder of the paper is organized as follows: Section 2 discusses the related work. Section 3 states the problem definition, the proposed architecture, and an outline of the proposed schemes. Section 4 discusses the system model, parameters, objective function, and constraints. The proposed heuristic algorithm is described in Section 5. The performance analysis of the proposed schemes is presented in Section 6. Finally, Section 7 concludes the paper and presents future directions.

2 Related Works

In the following, we review the state-of-the-art algorithms related to hybrid networks, categorized into three sub-topics: i) hybrid approaches that allow the coexistence of traditional IP routing and SDN-based forwarding within the same provider domain, ii) hybrid approaches that focus on the combination of traffic engineering and power management in hybrid networks, and iii) incremental deployment of hybrid networks.

2.1 IP routing and SDN based forwarding within the same provider domain

Salsano et al. [11] propose a hybrid approach that allows the coexistence of traditional IP routing with SDN-based forwarding within the same provider domain. To this end, they design a hybrid IP/SDN architecture called Open Source Hybrid IP/SDN (OSHI) and implement a hybrid IP/SDN node made of open-source components. The aim of [12] is to present architectures that enable interoperability in transport networks. The authors present alternatives for control-plane interoperability and justify why SDN can enable multi-vendor scenarios and multi-domain path establishment in current networks. In [13], an application-based network operations (ABNO) architecture is proposed as a framework that enables network automation and programmability. The authors not only justify the architecture but also present an experimental demonstration for a multi-layer and multi-technology scenario.

Sgambelluri et al. [14], present two segment routing (SR) implementations for MPLS and SDN-based networks, separately. They have two different network testbeds. The first implementation focuses on a SDN scenario where nodes consist of OpenFlow switches and the SR Controller is an enhanced version of an OpenFlow Controller. The second implementation includes a Path Computation Element (PCE) scenario where nodes consist of MPLS routers and the SR Controller is a new extended version of a PCE solution.

Das et al. [15] propose an approach to MPLS that uses the standard MPLS data plane and an OpenFlow-based control plane. They demonstrate this approach using a prototype system for MPLS Traffic Engineering. Additionally, they discuss deficiencies of the MPLS control plane, focusing on MPLS-TE, and suggest how a few new control applications on the network OS can replace all MPLS control plane functionalities, such as distributed signaling and routing. In [16], Hui et al. describe their experience in designing HybNET, a framework for automated network management of hybrid network infrastructures (both SDN and legacy). They discuss some of the challenges they encountered and provide a best-effort solution for compatibility between legacy and SDN switches while retaining some of the advantages and flexibility of SDN-enabled switches.

2.2 Traffic engineering and power management

Some related works focus on the combination of traffic engineering and power management in MPLS/SDN hybrid networks [17, 18, 19, 20]. The authors of [17] propose a methodology for resource consolidation towards minimizing the power consumption of a large network with substantial resource over-provisioning. The focus is on the operation of core MPLS networks. The proposed approach is based on an SDN scheme with a reconfigurable centralized controller, which turns off certain network elements.

Some other works explore traffic engineering in SDN/OSPF hybrid networks. As an example, the authors of [19] propose a scenario in which the OSPF weights and the flow-splitting ratios of the SDN nodes can change. The controller can arbitrarily split the flows coming into the SDN nodes, while the regular nodes still run OSPF. The proposed algorithm, called SOTE, obtains a lower maximum link utilization compared with pure OSPF networks.

2.3 Incremental deployment

Caria et al. [21] propose a method of hybrid SDN/OSPF operation. Their method differs from other hybrid approaches in that it uses SDN nodes to partition an OSPF domain into sub-domains, thereby achieving traffic engineering capabilities comparable to full SDN operation. They place SDN-enabled routers as sub-domain border nodes, while the operation of the OSPF protocol continues unaffected. In this way, the SDN controller can tune routing protocol updates for traffic engineering purposes before they are flooded into sub-domains. While local routing inside sub-domains remains stable at all times, inter-sub-domain routes can be optimized by determining the routes in each traversed sub-domain. The authors of [22] propose an algorithm for safe updates of hybrid SDN networks.

A system for incremental deployment of hybrid SDN networks consisting of both legacy forwarding devices and programmable SDN switches is presented in [23]. They propose an algorithm to determine which legacy devices to upgrade to SDN and how legacy and SDN devices can interoperate in a hybrid environment to satisfy a variety of traffic engineering (TE) goals such as load balancing and fast failure recovery.

Parameter                            | Pure SDN         | Hybrid Network          | Traditional MPLS
Flexibility                          | high (***)       | high (***)              | medium (**)
Granularity of resource allocation   | flow-level (***) | [flow, LSP]-level (***) | LSP-level (**)
Computational complexity             | high (*)         | medium (**)             | medium (**)
Cost of applying to current networks | high (*)         | low (**)                | no cost (***)
Configuration                        | easy (***)       | medium (**)             | hard (*)

Table 1: Comparison of Different Network Architectures for WAN Networks (*: bad, **: medium, ***: good)

2.4 Novelty and Comparison

The most important drawbacks of the existing algorithms fall into two main classes: a) fixed allocation of resources to flows, and b) ignoring the impact of flows on each other. To explain the impact of fixed resource allocation, consider a flow routed via a given path. In most existing algorithms, the flow keeps streaming over this path even if its rate decreases or increases by multiple orders of magnitude. This results in congestion or low link utilization.

To manage or upgrade MPLS networks, there are three main architectures: 1) pure SDN (all switches are OpenFlow-enabled), 2) hybrid (OpenFlow-enabled switches and conventional MPLS routers), and 3) pure MPLS (conventional MPLS routers only). In Table 1, these three architectures are compared along several metrics; Fig. 1 visualizes the differences. As can be seen, hybrid networks provide a trade-off between the different metrics while remaining applicable to current MPLS networks.

The major differences between our work and traditional approaches are as follows: 1) many traditional approaches focus on the routing of new flows, while our approach (STEM: SDN-based Traffic Engineering in MPLS) focuses on the re-routing of existing flows and the re-creation of LSPs (an LSP is a predetermined path from a source router to a destination router); 2) since STEM considers the effect of flows on each other, it can handle the problem of resource partitioning; 3) unlike traditional algorithms, STEM can be used along with any other routing algorithm; 4) STEM focuses on network reconfiguration overhead and re-routes the flows in a way that minimizes it; 5) STEM adds the flexibility of SDN-based approaches to existing MPLS networks by adding a small number of low-cost OpenFlow-enabled switches.

Figure 1: Comparison of Different Network Architecture for WAN Networks.

3 The Proposed Architecture

In this section, the problem definition and a quick overview of the proposed architecture are presented; thereafter, the details of the architecture's components are discussed.

3.1 Problem Definition

The considered network consists of three main parts: 1) MPLS routers as the core of the network, 2) low-cost OpenFlow-enabled switches at the edge of the network, and 3) a central controller such as ONOS [24]. The MPLS routers and OpenFlow switches are configurable via the PCEP and OpenFlow protocols, respectively. Since the edge switches are all OpenFlow-enabled, they communicate with the controller over OpenFlow; the controller can therefore query the switches for this part of the network topology and traffic matrix. On the other hand, since the core network runs MPLS, the controller must support the PCEP protocol (the ONOS controller has a PCE element). The PCE element is responsible for communicating with the MPLS routers via PCEP and assigning LSPs to the links. The controller can also gather information from the MPLS routers by querying them.

The problem is to find a novel traffic engineering architecture and routing/re-routing algorithm in which the integration of OpenFlow and traditional MPLS is adopted, i.e., proposing an architecture where SDN is going to be deployed in an existing network. The objective is to propose a traffic engineering scheme for integration of MPLS and OpenFlow networks that can adaptively and dynamically manage traffic in a network to accommodate different traffic patterns.

Figure 2: The Proposed Network Architecture.

3.2 Overview of the Proposed Architecture

In this subsection, a brief overview of the proposed architecture and its components is presented. We assume a centralized SDN controller that computes the forwarding tables for the OF switches as well as provides LSPs for the MPLS routers. To this end, the controller peers with the network and gathers information about the network traffic and topology. In addition to forwarding packets, the OF switches perform simple traffic measurements and forward these measurements to the controller. In order to dynamically adapt the network configuration to traffic variations, the controller exploits this traffic information, along with information gathered from the MPLS network, to update the tables at the OF switches. Note that LSP reconfiguration is mandatory only when flow re-routing (at the OF switches) is insufficient for congestion control. It is also worth noting that our proposal jointly uses the existing protocols along with a new rescheduling algorithm. As can be seen in Fig. 2, the proposed architecture has the following modules:

  • Primary OpenFlow forwarding: a common routing algorithm which runs when there is a new arrival flow, e.g., ECMP protocol.

  • Flow-level resource re-allocator: an algorithm that runs when network congestion occurs or a predefined time interval elapses.

  • Primary LSP scheduler: an existing LSP scheduler, e.g., RSVP-TE protocol.

  • LSP-level resource re-allocator: an algorithm that runs when the “flow-level resource re-allocator” cannot handle the current network traffic using the existing LSPs and requests an LSP rescheduling.

  • Network monitoring: periodically monitors the links’ state, updates the “knowledge base”, and provides traffic matrix for the “flow-level resource re-allocator” module.
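The interplay among these modules can be sketched as a trigger function; the threshold, interval, and function name below are our own illustrative choices, not values prescribed by the architecture:

```python
# Illustrative sketch of the controller's decision logic tying the modules
# together. The threshold and interval values are hypothetical.

UTILIZATION_THRESHOLD = 0.9   # trigger re-allocation above 90% link load
REALLOC_INTERVAL = 30         # seconds between periodic re-allocations

def decide_action(max_link_utilization, seconds_since_last, flow_level_feasible):
    """Return which re-allocator (if any) the controller should invoke.

    max_link_utilization: current maximum link utilization (from monitoring)
    seconds_since_last:   time since the last re-allocation run
    flow_level_feasible:  whether re-assigning flows to existing LSPs suffices
    """
    triggered = (max_link_utilization > UTILIZATION_THRESHOLD
                 or seconds_since_last >= REALLOC_INTERVAL)
    if not triggered:
        return "none"          # neither trigger fired
    if flow_level_feasible:
        return "flow-level"    # re-assign flows to existing LSPs (OpenFlow)
    return "LSP-level"         # re-create LSPs via the PCE element (PCEP)
```

This mirrors the escalation order described above: flow-level re-allocation is always attempted first, and LSP-level re-creation only when the existing LSPs cannot absorb the traffic.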

Parameters:
  - Maximum link utilization
  - Number of switches; number of LSPs; number of flows
  - A vector denoting the start nodes of LSPs; a vector denoting the end (destination) nodes of LSPs
  - A matrix denoting the links' bandwidth; a matrix denoting the propagation delay of links
  - A vector denoting the propagation delay of LSPs; a vector denoting the capacity of LSPs
  - A vector denoting the start node of flows; a vector denoting the end node of flows
  - A vector denoting the rate of flows; a vector denoting the maximum tolerable delay of flows

Problem variables (binary):
  - A matrix denoting the assignment of flows to the LSPs
  - A matrix denoting the assignment of links to the LSPs

Table 2: Symbol Definitions

3.3 Comprehensive Discussion of the Proposed Architecture

In this subsection, each component of the proposed architecture is discussed in detail. It should be mentioned that the Knowledge Base element gathers information about previous states of the network to predict its future state. (Network monitoring and traffic prediction algorithms are out of the scope of this work; we assume these elements are designed perfectly.)

3.3.1 Primary OpenFlow Forwarding Element

This element selects an appropriate LSP for new flows. It works based on existing algorithms such as shortest-path or ECMP; therefore, it is a traditional routing algorithm (not a re-routing algorithm) and does not consider the impact of flows on each other. This element should be implemented as a part of the controller to enhance the performance of the routing scheme.

3.3.2 Primary LSP Scheduler Element

A Path Computation Element (PCE) [25] is an entity that can compute a path based on a network graph. A Path Computation Client (PCC) is any client application requesting a path computation from a PCE. The Path Computation Element Protocol (PCEP) enables communication between two PCEs or between a PCE and a PCC. The Primary LSP Scheduler is a PCE: if a new LSP is required, this component is invoked to create it. Current controllers such as ONOS [26] and OpenDaylight [27] support PCEP. Since the Primary LSP Scheduler works based on existing protocols, it does not consider the impact of LSPs on each other. This element should be implemented as a part of the controller to enhance the performance of the routing scheme.

3.3.3 Flow-Level Resource Re-Allocator Element

The most important role of the OpenFlow switches in our proposed architecture is the assignment of flows to the existing LSPs. This element is designed to control network congestion: to avoid congestion on the links, the “flow-level resource re-allocator” re-routes some of the flows when the maximum link utilization exceeds a predefined threshold, re-assigning flows to the existing LSPs. To this end, we mathematically formulate an optimization problem whose main aim is to control traffic congestion by re-assigning the flows to the LSPs subject to the flows' tolerable delay, the flow conservation constraint, and the LSP bandwidth restriction. Besides, the proposed optimization problem minimizes the reconfiguration overhead.

3.3.4 LSP-Level Resource Re-Allocator Element (LR)

The network side-effect of the “flow-level resource re-allocator” is significantly lower than that of this element. Therefore, only when the “flow-level resource re-allocator” cannot control the network congestion does it send an LSP-reassignment request to the “LSP-level resource re-allocator”. The LSP re-allocator element re-assigns links to the LSPs to reduce the traffic load on the congested links. To this end, we mathematically formulate an optimization problem whose main aim is to route the requested LSPs subject to the link capacity restriction, the LSP conservation constraints, and the requested end-to-end propagation delay restriction of the LSPs. Besides, the corresponding optimization problem minimizes network changes to reduce the side-effects of network re-configuration. Since this optimization problem is in the form of binary linear programming, we can adopt the well-known and efficient branch-and-cut method to obtain an optimal solution.

4 Problem Formulation

In order to implement the proposed architecture, two main tasks must be carried out: 1) re-routing of network flows (re-assignment of flows to the LSPs), and 2) re-creation of LSPs based on the network dynamics. To do these tasks, we mathematically formulate the corresponding optimization problems in this section. Table 2 contains all the symbols used in the formulations: the number of LSPs, switches, and flows; the LSP capacities, link bandwidths, and flow rates; the (source, destination) pairs of LSPs and flows; the propagation delays of LSPs and links; the maximum tolerable delay of flows; and the binary matrices denoting the assignment of flows to LSPs and of links to LSPs.

4.1 Flow Re-Routing

In the following, the problem of assigning the ingress flows to the LSPs in a way that minimizes the network reconfiguration overhead is presented. The problem formulation is in the form of Binary Linear Programming (BLP).

(1a)
Subject to:
(1b)
(1c)
(1d)
(1e)
(1f)
(1g)

where the objective function (1a) minimizes the reconfiguration overhead by reducing the number of flows whose assignment is changed. Eq. (1b) guarantees that the aggregate rate of the flows on each LSP is less than the LSP's capacity. Eq. (1c) compares the propagation delay of the selected LSP with the tolerable delay of the flows. Since each flow must be assigned to one and only one LSP, Eq. (1d) is included in this optimization problem. Equations (1e) and (1f) ensure that the start and end points of the selected LSP match the start and end points of the corresponding flow.

If the required resources of all LSPs are reserved in the MPLS routers (e.g., using the RSVP-TE protocol), then optimization problem (1) is used to re-assign flows to the LSPs. However, if one or more LSPs have not reserved the required resources, then the optimization problem should be formulated as follows:

(2a)
Subject to:
(2b)
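To make the flow re-routing problem concrete, the following sketch solves a tiny instance of problem (1) by exhaustive search over all assignments (a real deployment would use a BLP solver; the data-structure layout here is hypothetical):

```python
from itertools import product

def rerouted_flows(flows, lsps, current):
    """Brute-force the flow-to-LSP assignment of problem (1) on a tiny
    instance: minimize the number of flows whose LSP changes, subject to
    matching endpoints (1e)-(1f), LSP capacity (1b), and delay bounds (1c).

    flows:   list of dicts {src, dst, rate, max_delay}
    lsps:    list of dicts {src, dst, capacity, delay}
    current: current[i] = index of the LSP that flow i currently uses
    Returns (number of re-routed flows, assignment tuple) or None if infeasible.
    """
    best = None
    for assign in product(range(len(lsps)), repeat=len(flows)):
        load = [0.0] * len(lsps)
        feasible = True
        for i, j in enumerate(assign):
            f, p = flows[i], lsps[j]
            # endpoint and delay constraints
            if (f["src"] != p["src"] or f["dst"] != p["dst"]
                    or p["delay"] > f["max_delay"]):
                feasible = False
                break
            load[j] += f["rate"]
        # capacity constraint on every LSP
        if feasible and all(load[j] <= lsps[j]["capacity"]
                            for j in range(len(lsps))):
            changes = sum(1 for i, j in enumerate(assign) if j != current[i])
            if best is None or changes < best[0]:
                best = (changes, assign)
    return best
```

The exhaustive search is exponential in the number of flows and only serves to illustrate the feasibility checks and the reconfiguration-minimizing objective; the paper's formulation is solved with a BLP solver instead.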

4.2 LSP Re-Creation

Each LSP is a sequence of links from a specified source to a specified destination. Since the re-creation of the LSPs (re-assignment of links to the LSPs) affects ongoing traffic, we try to cope with the dynamic nature of traffic via flow re-routing instead of changing the LSPs. However, if flow re-routing cannot handle this dynamicity with the current LSPs, the LSPs should be re-created. To this end, we mathematically formulate the problem of LSP re-creation and explore a solution to the corresponding optimization problem. We extend our previous work [28] to match this problem. The formulation is in the form of binary linear programming as follows:

(3a)
Subject to:
(3b)
(3c)
(3d)
(3e)
(3f)
(3g)
(3h)

where the objective function (3a) minimizes the reconfiguration overhead by reducing the number of LSPs that are changed. Eq. (3b) guarantees that the aggregate rate of the flows on each link is less than the link's capacity. Eq. (3c) compares the propagation delay of the selected path with the tolerable delay of the requested LSP. Figs. 3(a) and 3(b) illustrate Eq. (3d), which enforces the streams to leave the source switches and enter the destination ones.

(a) First Part of Eq. (3d)
(b) Second Part of Eq. (3d)
(c) Eq. (3g)
Figure 3: Visual Illustration of Constraints.

It should be mentioned that a stream cannot return to the source switch or leave the destination one. To this end, Eq. (3e) is included in the formulation; Figs. 4(a) and 4(b) visually illustrate this constraint. To prevent loops in the selected paths, Eq. (3g) is considered, which is depicted in Fig. 3(c).

(a) First Part of Eq. (3e)
(b) Second Part of Eq. (3e)
Figure 4: Visual Illustration of Constraints.
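The intent of constraints (3d)-(3g) can also be stated procedurally: a feasible LSP is a loop-free sequence of links that leaves the source exactly once, enters the destination exactly once, and conserves flow at every intermediate node. A small hypothetical validator illustrating this:

```python
def is_valid_lsp(links, src, dst):
    """Check that a set of directed links forms a simple src->dst path:
    flow conservation at source/destination/intermediate nodes, no loops,
    and no disconnected link cycles."""
    if not links:
        return False
    out_deg, in_deg = {}, {}
    for u, v in links:
        out_deg[u] = out_deg.get(u, 0) + 1
        in_deg[v] = in_deg.get(v, 0) + 1
    for n in set(out_deg) | set(in_deg):
        o, i = out_deg.get(n, 0), in_deg.get(n, 0)
        if n == src and not (o == 1 and i == 0):       # leave source once
            return False
        if n == dst and not (o == 0 and i == 1):       # enter destination once
            return False
        if n not in (src, dst) and not (o == i == 1):  # conservation, no loops
            return False
    # connectivity: walking from src must traverse every link and reach dst
    nxt = {u: v for u, v in links}   # safe: out-degree is at most 1 everywhere
    cur, steps = src, 0
    while cur in nxt and steps <= len(links):
        cur = nxt[cur]
        steps += 1
    return cur == dst and steps == len(links)
```

The last check rules out the case where the src-dst path is valid but the link set also contains a disconnected cycle, which the degree constraints alone would accept.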

5 Fast Flow Re-routing Heuristic (FFR)

Require: set of flows F, set of LSPs P
Ensure: assignment of flows to LSPs
1: for each flow f in F do
2:     C ← {p ∈ P : src(p) = src(f), dst(p) = dst(f)}; flag ← false
3:     for each LSP p in C do
4:         if free(p) ≥ rate(f) then
5:             flag ← true; selected ← p
6:             break
7:         end if
8:     end for
9:     if not flag then
10:         for each LSP p in C do
11:             if free(p) + spare link capacity along p ≥ rate(f) then
12:                 flag ← true; selected ← p
13:                 break
14:             end if
15:         end for
16:     end if
17:     if flag then
18:         assign f to selected
19:         reduce free(selected) by rate(f)
20:     else
21:         invoke the fallback optimization for f
22:     end if
23: end for
24: return assignments
Algorithm 1 Fast flow re-routing heuristic

Since the process of flow re-routing should be done in a real-time manner, we propose a heuristic algorithm called Fast Flow Re-routing (FFR), presented in Algorithm 1. FFR re-routes one flow in each step (line 1 of the algorithm); however, it considers the impact of previously re-routed flows on the remaining ones. In other words, when FFR re-routes a flow, it reduces the free capacity of the newly selected LSP. To this end, for each flow it finds all LSPs that share the flow's source and destination and puts them in a candidate set (line 2). After that, in lines 3-8, FFR probes the candidate set to find an LSP whose free capacity exceeds the flow size. If such an LSP is found, a flag is set.

Sequential assignment of resources may cause resource partitioning. To cope with this, if the flag is not set (line 9), FFR tries to find a proper LSP by adding the free capacity of the underlying links to the LSP and comparing the enlarged LSP capacity with the flow size (lines 10-16). In lines 17-19, the selected LSP is assigned to the flow; if no proper LSP is found, a fallback optimization is invoked instead.
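Under these assumptions, FFR can be sketched in a few lines of Python; the data structures are hypothetical, and the fallback (solved with CVX in our simulations) is simply reported as an unrouted flow here:

```python
def ffr(flows, lsps, link_spare):
    """Fast Flow Re-routing heuristic (Algorithm 1), sketched in Python.

    flows:      list of dicts {src, dst, rate}
    lsps:       list of dicts {src, dst, free}   # free = spare LSP capacity
    link_spare: dict mapping LSP index -> spare capacity addable from its links
    Returns (assignments, unrouted): assignments[i] is the LSP index chosen
    for flow i; unrouted lists flows handed to the fallback optimization.
    """
    assignments, unrouted = {}, []
    for i, f in enumerate(flows):
        # candidate LSPs sharing the flow's endpoints (line 2)
        cand = [j for j, p in enumerate(lsps)
                if p["src"] == f["src"] and p["dst"] == f["dst"]]
        chosen = None
        for j in cand:                       # lines 3-8: free-capacity probe
            if lsps[j]["free"] >= f["rate"]:
                chosen = j
                break
        if chosen is None:                   # lines 10-16: grow LSP capacity
            for j in cand:
                if lsps[j]["free"] + link_spare.get(j, 0) >= f["rate"]:
                    lsps[j]["free"] += link_spare.pop(j)  # absorb spare links
                    chosen = j
                    break
        if chosen is not None:               # lines 17-19: commit assignment
            assignments[i] = chosen
            lsps[chosen]["free"] -= f["rate"]
        else:                                # fallback optimization
            unrouted.append(i)
    return assignments, unrouted
```

Note how committing a flow reduces the chosen LSP's free capacity before the next flow is considered, which is exactly how FFR accounts for the impact of previously re-routed flows.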

5.1 Computational Complexity

In this part we derive the worst-case computational complexity of FFR. The outer loop (line 1) iterates over all flows. Building the candidate set (line 2) requires searching among all LSPs to find those whose endpoints match the flow. Moreover, since each path consists of at most as many hops as there are links, checking the spare link capacity of a candidate LSP (lines 10-16) is linear in the number of links. The computational complexity of the fallback optimization is highly dependent on the implementation approach (e.g., reference [29] proposes a solution that is linear in the number of flows, switches, and paths); in our simulations we used CVX to solve it. The remaining steps take constant time. Overall, the worst-case complexity of FFR is the number of flows multiplied by the cost of probing the candidate LSPs, plus the cost of the fallback optimization.

6 Performance Evaluation

In this section, the proposed scheme is compared with the shortest-path algorithm, in which the cost function is the length of the path. The evaluation is performed via three different metrics:

  • System throughput: the sum of the data rates that are delivered to all terminals in a network. It is a measure to show the performance of the network;

  • Path length: the average number of steps along the selected paths for all flows. It is a measure of the efficiency of transport on a network;

  • Link utilization: the amount of data on the link divided by the total capacity of the link. It is a measure of protocol fairness.
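These metrics can be computed from a per-flow and per-link snapshot of the simulation; a hypothetical helper illustrating the definitions above:

```python
def evaluate(link_load, link_capacity, delivered_rates, path_hops):
    """Compute the three evaluation metrics from a simulation snapshot.

    link_load, link_capacity: dicts keyed by link id (same keys)
    delivered_rates: delivered data rate per flow
    path_hops: hop count of the path chosen for each flow
    Returns (throughput, average path length, average link utilization).
    """
    throughput = sum(delivered_rates)                    # system throughput
    avg_path_len = sum(path_hops) / len(path_hops)       # average path length
    utilization = {l: link_load[l] / link_capacity[l]    # per-link utilization
                   for l in link_load}
    avg_utilization = sum(utilization.values()) / len(utilization)
    return throughput, avg_path_len, avg_utilization
```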

6.1 Scenario Description

We implemented a traffic generator to test the performance of the proposed scheme over different network traffic scenarios. In the traffic generator, the average bandwidth demand of a flow is a fraction of the link capacity. The rate of each generated flow follows a uniform distribution between 0 and 2 times the average flow rate. Moreover, two input parameters control the number of flows generated per source switch: the number of flows for each source switch follows a truncated geometric distribution with a given success probability and a given maximum number of flows. The parameter settings of the different traffic scenarios are presented in Table 3.

Scenario 1 0.8 84 10 0.08
Scenario 2 0.8 86 10 0.08
Scenario 3 0.6 71 10 0.08
Scenario 4 0.6 70 10 0.08
Table 3: Different Traffic Scenarios.
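The traffic generator described above can be sketched as follows (parameter names and the destination-selection callback are illustrative, not part of our implementation):

```python
import random

def num_flows_truncated_geometric(p, max_flows, rng):
    """Sample the number of flows for one source switch from a geometric
    distribution with success probability p, truncated at max_flows."""
    n = 1
    while rng.random() > p and n < max_flows:
        n += 1
    return n

def generate_flows(sources, dest_of, p, max_flows, avg_rate, seed=0):
    """Generate flows: per-switch count ~ truncated geometric(p, max_flows),
    per-flow rate ~ Uniform(0, 2 * avg_rate)."""
    rng = random.Random(seed)   # seeded for reproducible scenarios
    flows = []
    for s in sources:
        for _ in range(num_flows_truncated_geometric(p, max_flows, rng)):
            flows.append({"src": s,
                          "dst": dest_of(s),
                          "rate": rng.uniform(0, 2 * avg_rate)})
    return flows
```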

6.2 Simulation Setup

In this subsection, the network topology and traffic pattern used in our simulation are described. The topology is inspired by [30] and depicted in Fig. 5. For the sake of simplicity, all links' propagation delays are considered equal. The simulation is done using MATLAB R2016b; the hardware configuration of the PC is reported in Table 4.

Figure 5: The Considered Topology.
Name Description
Processor Intel(R) Core(TM) i5-2410M CPU @ 2.30GHz
IDE Standard SATA AHCI Controller
RAM 4.00 GB
System Type 64-bit Operating System, Windows 10
Table 4: Hardware Configuration.

6.3 Throughput Results

In order to analyze the impact of the proposed scheme on the network throughput, Fig. 6 depicts the network throughput versus the time slots. In each time slot the size of each flow is increased by a uniformly distributed percentage, with different increment factors used in scenarios 1 and 2 versus scenarios 3 and 4.

(a) Scenario 1
(b) Scenario 2
(c) Scenario 3
(d) Scenario 4
Figure 6: Network Throughput.

Because the proposed scheme considers the impact of flows on each other, it distributes the flows among diverse paths. This behaviour enhances the network throughput significantly: in traditional approaches like shortest path, each flow is routed separately, while our scheme uses resource re-allocation to prevent resource partitioning and simultaneously avoid congestion. Based on these results, the benefit of the proposed scheme grows as the traffic demand increases. One main reason is that increasing the demand increases the probability of resource partitioning in traditional approaches. Another is that traditional approaches do not consider the dynamicity of demands, while our scheme exploits a lightweight reconfiguration to manage the dynamic nature of traffic.

6.4 Link Utilization Results

In order to provide a comprehensive analysis, we investigate the impact of the proposed scheme on the average link utilization in different traffic scenarios. Fig. 7 depicts the average link utilization versus the time slots. As can be seen, the results of both approaches are similar under low traffic demand; however, increasing the traffic demand causes congestion under shortest path and decreases its average link utilization.

(a) Scenario 1
(b) Scenario 2
(c) Scenario 3
(d) Scenario 4
Figure 7: Average Link Utilization.

More precisely, as long as the traffic demand is sufficiently lower than the available resources, there is no congestion and the results of both approaches are similar. However, increasing the traffic demand in all test cases causes the average link utilization of the proposed scheme to grow higher than that of shortest path. This happens because the throughput of the proposed scheme is higher, and consequently the total amount of traffic loaded on the links is higher.

6.5 Path Length Results

Fig. 8 depicts the average path length versus the time slots. Increasing the traffic demands makes the proposed scheme use several paths of different lengths to prevent network congestion. Additionally, since our scheme considers the impact of flows on each other, it uses paths with a minimum number of common links. Therefore, the average path length increases compared with the traditional approaches.

Figure 8: Average Path Length. Panels (a)–(d) correspond to Scenarios 1–4.

Considering Fig. 8 and Fig. 6, although the network throughput of the proposed scheme increases significantly compared with shortest path, the average path length is similar in both schemes. In other words, although the proposed scheme uses diverse paths to reduce the packet loss, the average end-to-end propagation delay remains comparable with that of shortest path. It should be mentioned that the formulation checks the end-to-end delay and assigns LSPs to the flows in such a way that the path delay is less than the maximum tolerable delay of the flows.
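The delay constraint mentioned above amounts to filtering out any LSP whose end-to-end propagation delay exceeds the flow's maximum tolerable delay before assignment. A minimal sketch of such a feasibility check, with assumed helper and parameter names:

```python
def feasible_lsps(lsps, link_delay, max_delay):
    """Keep only LSPs whose end-to-end propagation delay does not
    exceed the flow's maximum tolerable delay.
    lsps: list of paths (each a list of links);
    link_delay: {link: propagation delay}."""
    def path_delay(path):
        return sum(link_delay[l] for l in path)
    return [p for p in lsps if path_delay(p) <= max_delay]
```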

7 Conclusion

In this paper, a traffic engineering architecture for hybrid SDN-MPLS networks is introduced. The proposed scheme not only exploits the flexibility of SDN-based approaches but is also applicable to existing MPLS networks by adding a small number of low-cost OpenFlow-enabled switches. To this end, we mathematically formulated two optimization problems: a) the problem of LSP re-configuration in MPLS networks when a central controller acts as the PCE element, and b) the problem of flow-level resource re-allocation. The simulation results show that the proposed scheme increases the network throughput and reduces the total packet loss significantly. Future work will be dedicated to proposing heuristic approaches that also consider the energy consumption of the network.

References

  • [1] X. Tu, X. Li, J. Zhou, and S. Chen, “Splicing mpls and openflow tunnels based on sdn paradigm,” in Cloud Engineering (IC2E), 2014 IEEE International Conference on.   IEEE, 2014, pp. 489–493.
  • [2] K. Kirkpatrick, “Software-defined networking,” Communications of the ACM, vol. 56, pp. 16–19, 2013.
  • [3] N. McKeown, “Software-defined networking,” INFOCOM keynote talk, vol. 17, pp. 30–32, 2009.
  • [4] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner, “Openflow: enabling innovation in campus networks,” ACM SIGCOMM Computer Communication Review, vol. 38, pp. 69–74, 2008.
  • [5] H. Farhady, H. Lee, and A. Nakao, “Software-defined networking: A survey,” Computer Networks, vol. 81, pp. 79–95, 2015.
  • [6] M. Gholami and B. Akbari, “Congestion control in software defined data center networks through flow rerouting,” in Proceeding of the Electrical Engineering (ICEE), 2015 23rd Iranian Conference on.   Tehran, Iran: IEEE, 2015, pp. 654–657.
  • [7] M. M. Tajiki, B. Akbari, and N. Mokari, “QRTP: QoS-aware resource reallocation based on traffic prediction in software defined cloud networks,” in Proceeding of the Telecommunications (IST), 2016 8th International Symposium on.   Tehran, Iran: IEEE, 2016, pp. 527–532.
  • [8] I. F. Akyildiz, A. Lee, P. Wang, M. Luo, and W. Chou, “A roadmap for traffic engineering in SDN-OpenFlow networks,” Computer Networks, vol. 71, pp. 1–30, 2014.
  • [9] K. Marzieh, M. M. Tajiki, and B. Akbari, “SDTE: software defined traffic engineering for improving data center network utilization,” International Journal of Information and Communication Technology Research, vol. 8, pp. 15–24, 2016.
  • [10] M. M. Tajiki, B. Akbari, M. Shojafar, and N. Mokari, “Joint qos and congestion control based on traffic prediction in sdn,” Applied Sciences, vol. 7, no. 12, p. 1265, 2017.
  • [11] S. Salsano, P. L. Ventre, F. Lombardo, G. Siracusano, M. Gerola, E. Salvadori, M. Santuari, M. Campanella, and L. Prete, “Hybrid IP/SDN networking: open implementation and experiment management tools,” IEEE Transactions on Network and Service Management, vol. 13, pp. 138–153, 2016.
  • [12] V. Lopez, L. M. Contreras, O. G. de Dios, and J. P. F. Palacios, “Towards a transport SDN for carriers networks: An evolutionary perspective,” in Proceeding of the Networks and Optical Communications (NOC), 2016 21st European Conference on.   Lisbon, Portugal: IEEE, 2016, pp. 52–57.
  • [13] A. Aguado, V. López, J. Marhuenda, Ó. G. de Dios, and J. P. Fernández-Palacios, “ABNO: A feasible sdn approach for multivendor ip and optical networks [invited],” Journal of Optical Communications and Networking, vol. 7, pp. A356–A362, 2015.
  • [14] A. Sgambelluri, F. Paolucci, A. Giorgetti, F. Cugini, and P. Castoldi, “SDN and PCE implementations for segment routing,” in Proceeding of the Networks and Optical Communications-(NOC), 2015 20th European Conference on.   London, UK: IEEE, 2015, pp. 1–4.
  • [15] S. Das, A. R. Sharafat, G. Parulkar, and N. McKeown, “MPLS with a simple OPEN control plane,” in Proceeding of the Optical Fiber Communication Conference and Exposition (OFC/NFOEC), 2011 and the National Fiber Optic Engineers Conference.   Los Angeles, USA: IEEE, 2011, pp. 1–3.
  • [16] H. Lu, N. Arora, H. Zhang, C. Lumezanu, J. Rhee, and G. Jiang, “Hybnet: Network manager for a hybrid network infrastructure,” in Proceedings of the Industrial Track of the 13th ACM/IFIP/USENIX International Middleware Conference.   New York, USA: ACM, 2013, p. 6.
  • [17] A. N. Katov, A. Mihovska, and N. R. Prasad, “Hybrid SDN architecture for resource consolidation in MPLS networks,” in Proceeding of the Wireless Telecommunications Symposium (WTS), 2015.   California, USA: IEEE, 2015, pp. 1–8.
  • [18] M. M. Tajiki, S. Salsano, M. Shojafar, L. Chiaraviglio, and B. Akbari, “Joint energy efficient and qos-aware path allocation and vnf placement for service function chaining,” arXiv preprint arXiv:1710.02611, 2017.
  • [19] Y. Guo, Z. Wang, X. Yin, X. Shi, and J. Wu, “Traffic engineering in SDN/OSPF hybrid network,” in Proceeding of the Network Protocols (ICNP), 2014 IEEE 22nd International Conference on.   California, USA: IEEE, 2014, pp. 563–568.
  • [20] M. M. Tajiki, S. Salsano, M. Shojafar, L. Chiaraviglio, and B. Akbari, “Energy-efficient path allocation heuristic for service function chaining,” in Proceedings of the 2018 21th Conference on Innovations in Clouds, Internet and Networks (ICIN), Paris, France, 2018, pp. 20–22.
  • [21] M. Caria, T. Das, and A. Jukan, “Divide and conquer: Partitioning OSPF networks with SDN,” in Proceeding of the Integrated Network Management (IM), 2015 IFIP/IEEE International Symposium on.   Ottawa, USA: IEEE, 2015, pp. 467–474.
  • [22] S. Vissicchio, L. Vanbever, L. Cittadini, G. G. Xie, and O. Bonaventure, “Safe update of hybrid sdn networks,” IEEE/ACM Transactions on Networking, 2017.
  • [23] D. K. Hong, Y. Ma, S. Banerjee, and Z. M. Mao, “Incremental deployment of SDN in hybrid enterprise and ISP networks,” in Proceedings of the Symposium on SDN Research.   Santa Clara, USA: ACM, 2016, p. 1.
  • [24] P. Berde, M. Gerola, J. Hart, Y. Higuchi, M. Kobayashi, T. Koide, B. Lantz, B. O’Connor, P. Radoslavov, W. Snow et al., “Onos: towards an open, distributed sdn os,” in Proceedings of the third workshop on Hot topics in software defined networking.   ACM, 2014, pp. 1–6.
  • [25] J. Vasseur and J. Le Roux, “RFC 5440–path computation element (PCE) communication protocol (PCEP),” Internet Engineering Task Force (IETF), 2009.
  • [26] P. Berde, M. Gerola, J. Hart, Y. Higuchi, M. Kobayashi, T. Koide, B. Lantz, B. O’Connor, P. Radoslavov, W. Snow et al., “ONOS: towards an open, distributed SDN OS,” in Proceedings of the third workshop on Hot topics in software defined networking.   Chicago, USA: ACM, 2014, pp. 1–6.
  • [27] J. Medved, R. Varga, A. Tkacik, and K. Gray, “Opendaylight: Towards a model-driven SDN controller architecture,” in Proceeding of the A World of Wireless, Mobile and Multimedia Networks (WoWMoM), 2014 IEEE 15th International Symposium on.   Sydney, Australia: IEEE, 2014, pp. 1–6.
  • [28] M. M. Tajiki, B. Akbari, and N. Mokari, “Optimal QoS-aware network reconfiguration in software defined cloud data centers,” Computer Networks, vol. 120, pp. 71–86, 2017.
  • [29] M. Tajiki, B. Akbari, M. Shojafar, S. H. Ghasemi, M. Barazandeh, N. Mokari, L. Chiaraviglio, and M. Zink, “CECT: computationally efficient congestion-avoidance and traffic engineering in software-defined cloud data centers,” arXiv preprint arXiv:1710.03611, 2018.
  • [30] H. Hasan, J. Cosmas, Z. Zaharis, P. Lazaridis, and S. Khwandah, “Creating and managing dynamic MPLS tunnel by using SDN notion,” in Proceeding of the Telecommunications and Multimedia (TEMU), 2016 International Conference on.   Crete, Greece: IEEE, 2016, pp. 1–8.