Delay Evaluation of OpenFlow Network Based on Queueing Model
As one of the most popular south-bound protocols of software-defined networking (SDN), OpenFlow decouples the network control from the forwarding devices. It offers flexible and scalable functionality for networks. These advantages, however, come at the cost of performance penalties in terms of packet processing speed. It is therefore important to understand the performance of OpenFlow switches and controllers before deploying them. In this paper we model the packet processing time of OpenFlow switches and controllers. We mainly analyze how the probability of packet-in messages impacts the performance of switches and controllers. Our results show that there is a performance penalty in OpenFlow networks, but that the penalty is small when the probability of packet-in messages is low. The model can be used by a network designer to approximate the performance of her deployment.
As a new computer network architecture, SDN is considered a promising path towards the future Internet [1], and OpenFlow is a popular implementation of SDN. OpenFlow was first proposed by Nick McKeown et al. to enable research experiments on campus networks [2]. It decouples the control plane from the forwarding devices and allows a single separate controller to manipulate all the switches in a network. This separation makes the network flexible and open to innovation. As its core advantage, OpenFlow offers high flexibility in the control plane, which makes it possible to change the routing of some traffic flows without influencing others and to react gracefully to changes in network topology or demand.
In OpenFlow networks, switches make no decisions; they only forward packets following instructions given by a centralized controller. This enables features such as adding new functionality to a network without changing the hardware, or configuring and deploying devices automatically. However, the additional functionality requires communication between the controller and the switches, which may cause additional delay for packet processing, especially in a large network. OpenFlow continues to receive a lot of research attention, but most work focuses on availability, scalability and functionality; the performance of OpenFlow networks has not been investigated much to date. This may become an obstacle to wide deployment, since understanding the performance and limitations of OpenFlow networks is a prerequisite for their use in production environments. Siamak Azodolmolky et al. presented a model based on network calculus to evaluate the performance of controllers and switches [4]; they derived a closed form for the packet delay and buffer length inside OpenFlow switches. Jarschel et al. presented a model for the forwarding speed and blocking probability of an OpenFlow network and validated it in OMNeT++ [5]; their results showed that the packet sojourn time mainly depends on the controller performance for installing new flows. In [6], Bianco et al. studied the data plane performance using different frame sizes. They showed that the Linux implementation offers good performance, but that throughput degrades when there are many small packets.
Flow entry installations and modifications in different OpenFlow switches lead to highly variable latency, which must be considered during the design of OpenFlow applications and controllers. To address this issue, Bozakov et al. characterized the behavior of the control interface using a queueing model [7]. A controller is to an OpenFlow network what an operating system is to a computer: it administrates the forwarding devices and provides an application interface to the users, and thus plays a central role in the entire network. The performance of the controller therefore strongly influences the network. There are more than 30 different controllers, developed by different organizations and written in different programming languages, which makes each controller better suited for certain scenarios than others. In [8], the authors developed a framework that can be used to test the performance capabilities of controllers, and analyzed the performance of popular open-source OpenFlow controllers.
Understanding the performance of OpenFlow networks is an essential issue for experiments and deployments. We aim to provide a simple performance model of OpenFlow networks to help designers estimate the performance of their designs. The rest of this paper is organized as follows. In Section II we provide an overview of the OpenFlow architecture. Our model is introduced in Section III. In Section IV, we evaluate the model by numerical analysis. We conclude the paper in Section V.
II Overview of OpenFlow network
In order to better understand the model of OpenFlow networks, we give a brief overview of OpenFlow. More details can be found in the OpenFlow switch specification [9].
OpenFlow was designed to enable researchers to test new ideas on an existing network without influencing other traffic. However, its flexibility has led to its use beyond research. Today, OpenFlow has been deployed in commercial scenarios [10, 11], and a growing number of research activities on OpenFlow networks can be expected.
OpenFlow consists of two components: the OpenFlow switches and the controller. In OpenFlow networks, all decisions are made by a centralized controller. The controller is usually implemented as a piece of software to which all the switches in an OpenFlow network connect and from which they receive instructions. The controller can add, delete and modify flow entries in OpenFlow switches, both proactively and reactively. An OpenFlow switch contains one or more flow tables, which perform packet lookups. Each flow table contains a set of flow entries, and a flow entry consists of match fields, a set of instructions and counters. The controller and switches communicate with each other via the OpenFlow protocol.
When a packet arrives at an OpenFlow switch, its header is extracted and used as a lookup key. If the lookup finds a matching flow entry, the instructions of that flow entry are executed and the packet may be forwarded to an egress port, modified or dropped. Otherwise, the packet is forwarded to the controller, which handles it according to the controller applications. The controller may install flow entries into the switch, so that similar arriving packets need not be forwarded to the controller again. This process is shown in Figure 1.
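This match-or-packet-in behavior can be sketched in a few lines of Python; the flow-table structure, field names and controller callback below are simplified assumptions for illustration, not the actual OpenFlow message format.

```python
# Simplified sketch of switch-side packet handling (hypothetical data
# structures; real switches match on many header fields with priorities).
flow_table = {}  # lookup key -> action, e.g. ("10.0.0.2",) -> "port2"

def controller_handle_packet_in(key):
    """Toy controller app: pick an action and install a flow entry
    (the flow-modify step), so later packets match in the switch."""
    action = "port" + str(hash(key) % 4)   # placeholder routing decision
    flow_table[key] = action
    return action

def switch_process(packet):
    key = (packet["dst"],)                 # extract header fields as the key
    if key in flow_table:                  # table hit: forward directly
        return flow_table[key]
    # table miss: send a packet-in to the controller, apply its decision
    return controller_handle_packet_in(key)

first = switch_process({"dst": "10.0.0.2"})   # miss -> consults controller
second = switch_process({"dst": "10.0.0.2"})  # hit -> handled by the switch
```

The second call never reaches the controller, which is exactly why the packet-in probability, not the total traffic, drives the controller load in the model below.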
III Queueing model of OpenFlow network
A typical OpenFlow network is shown in Figure 2. All switches connect to a controller, and traffic comes from the computers. Every switch keeps a packet queue at each ingress port. All packets from computers arrive at their access switch and join that queue. If a packet matches a flow entry, it is forwarded; otherwise, it is sent to the controller via a packet-in message. These two operations take different amounts of time. When a switch sends a packet-in message to its controller, the controller may respond with a packet-out message, which tells the switch to forward the packet, or with a flow-modify message, which installs a new flow entry in the switch and applies instructions to the packet. Because a packet-out message does not trigger a flow entry installation, it can forward only one packet. For simplicity, we suppose in this paper that the controller only responds to switches with flow-modify messages. In the controller, there is a queue for packet-in messages: when packet-in messages arrive, the controller processes them in FIFO order.
We consider a queueing model for OpenFlow networks as depicted in Figure 3. The switches and the controller are modelled as queueing nodes to capture the time spent on these devices. We assume that the packet arrival process in the computer network follows a Poisson process with average arrival rate λ_i at the i-th switch, and that the arrivals at different switches are independent. A packet may not match any flow entry, in which case it is forwarded to the controller via a packet-in message; this happens with probability q. Packets are thus classified into two classes, both arriving in a Poisson process, with average arrival rates q·λ_i and (1−q)·λ_i. The packet service time of the switches is assumed to follow an exponential distribution, with expected service times 1/μ_1 and 1/μ_2 for the two classes, respectively. The mean service time of packet-in messages in the controller is denoted 1/μ_c; this service time includes the transmission time from the switches to the controller.
To simplify the model, we assume that both the controller and the switches are powerful enough for the traffic in the network, and that there is no limit on the queue capacity. We queue all packets arriving at a switch in a single queue instead of a separate queue per ingress port, and all packets are processed in order of arrival. Moreover, we assume that when the first packet of a connection arrives at a switch, the controller installs a flow entry, after which the remaining packets of the connection are forwarded directly by the switch. We also assume that all switches in our model have the same service rate, and that the packet-in messages arrive at the controller following a Poisson process.
III-A Performance of OpenFlow switches
The flow entry matching of all packets is assumed to be independent, and the packet processing time can be assumed to follow an exponential distribution. With the assumptions above, the performance of an OpenFlow switch can be modeled as an M/H2/1 queue: packets arrive at the i-th switch at rate λ_i, and the service time follows a two-phase hyperexponential distribution. With probability q a packet receives service at rate μ_1, while with probability 1−q it receives service at rate μ_2. The state transition diagram of this queue is shown in Figure 4. A state is represented by a pair (n, j), where n is the total number of packets in the switch and j is the current service phase; in our case j can only be 1 or 2. The stationary distribution of this queue at the i-th switch can be obtained by Neuts’ matrix-geometric method [14]. We denote the stationary probability vector as π = (π_0, π_1, π_2, …),
where π_n is the probability of n packets in the i-th switch. Then the average number of packets in the queueing system can be computed as:

E[N_i] = Σ_{n=0}^{∞} n·π_n.    (1)
The main purpose of this queue is to capture the packet processing time of the switches. According to Little’s Law, the average packet processing time in the i-th switch is given by

W_i = E[N_i] / λ_i.    (2)
The average packet processing time over all k switches is then given by the arrival-rate-weighted mean

W_s = (Σ_{i=1}^{k} λ_i·W_i) / (Σ_{i=1}^{k} λ_i).    (3)
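As a cross-check on the matrix-geometric computation, note that an M/H2/1 queue is a special case of M/G/1, so its mean sojourn time can also be obtained from the Pollaczek–Khinchine formula. The sketch below does this numerically; all rate values are hypothetical stand-ins, not taken from the paper's evaluation.

```python
def mh2_mean_sojourn(lam, q, mu1, mu2):
    """Mean sojourn time of an M/H2/1 queue via the Pollaczek-Khinchine
    (M/G/1) formula: W = E[S] + lam * E[S^2] / (2 * (1 - rho))."""
    es = q / mu1 + (1 - q) / mu2                  # mean service time E[S]
    es2 = 2 * q / mu1**2 + 2 * (1 - q) / mu2**2   # second moment E[S^2]
    rho = lam * es                                # utilization, must be < 1
    assert rho < 1, "queue is unstable"
    return es + lam * es2 / (2 * (1 - rho))

# Hypothetical rates: packet-in path 1000/s, direct forwarding 10000/s.
w = mh2_mean_sojourn(lam=500.0, q=0.1, mu1=1000.0, mu2=10000.0)

# Sanity check: with q = 0 the service time is exponential(mu2), so the
# result must reduce to the M/M/1 sojourn time 1 / (mu2 - lam).
w0 = mh2_mean_sojourn(lam=500.0, q=0.0, mu1=1000.0, mu2=10000.0)
assert abs(w0 - 1.0 / (10000.0 - 500.0)) < 1e-12
```

Because the slow phase dominates E[S²], even a small q inflates the mean delay, which is the qualitative effect the numerical evaluation below explores.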
III-B Performance of OpenFlow controller
If a packet can be forwarded without consulting the controller, its forwarding time only depends on the performance of the switch. Otherwise the switch must wait for instructions from the controller. If a controller is in charge of k switches, the packet arrival at the i-th switch is a Poisson process with rate λ_i, and each packet is sent to the controller with probability q. We can conclude that the rate at which packet-in messages from the switches arrive at the controller is given in (4):

λ_c = Σ_{i=1}^{k} q·λ_i.    (4)
To simplify matters, we assume that packet-in messages arrive at the controller following a Poisson process with parameter λ_c, and that the time the controller needs to process a packet-in message follows an exponential distribution with rate μ_c. The number of packet-in messages in the controller then defines the states of a Markov chain, whose state transition diagram is illustrated in Figure 5.
With the above analysis, we can characterize the packet-in message processing in the controller with the M/M/1 queueing model. Let p_n be the probability of n packet-in messages in the controller. With utilization ρ = λ_c/μ_c < 1, the mean number of packet-in messages in the controller is:

E[N_c] = Σ_{n=0}^{∞} n·p_n = ρ / (1 − ρ).    (5)
And the mean time a packet-in message spends in the controller is:

W_c = E[N_c] / λ_c = 1 / (μ_c − λ_c).    (6)
The total processing delay of a packet arriving at the i-th switch is then:

W = W_i + q·W_c.    (7)
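The controller model combines with the switch delay into a short numerical sketch of the per-packet delay; all parameter values below are hypothetical.

```python
def total_delay(w_switch, q, lam_per_switch, k, mu_c):
    """Mean per-packet delay W = W_i + q * W_c, where the controller is an
    M/M/1 queue fed by the packet-in streams of k identical switches."""
    lam_c = k * q * lam_per_switch          # aggregate packet-in rate
    assert lam_c < mu_c, "controller overloaded"
    w_c = 1.0 / (mu_c - lam_c)              # M/M/1 sojourn time
    return w_switch + q * w_c               # only packet-in packets pay w_c

# Hypothetical setup: 10 switches at 500 pkt/s each, q = 0.1,
# controller capacity 5000 packet-in messages per second.
w = total_delay(w_switch=2.5e-4, q=0.1, lam_per_switch=500.0, k=10, mu_c=5000.0)
```

Note the q·W_c term: since only a fraction q of packets trigger a packet-in, the controller's sojourn time is weighted by that probability in the per-packet average.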
IV Numerical evaluation
In this section we evaluate our queueing model with different parameters by numerical analysis. Suppose packets arrive at all switches with the same rate λ, and the rates at which the switches forward packets to the controller and to the output ports are μ_1 packets per second and μ_2 packets per second, respectively.
We can see in Figure 6 that the average service time of the switches increases with the probability q of receiving a packet-in message. When packets arrive at the switches at a low rate, the average service time increases slowly. When packets arrive at a high rate, the average service time increases slowly for small q and sharply once q exceeds a threshold. This means that when the traffic is close to the capacity of the switches, packet-in messages may noticeably impact the performance of the network. The value q = 1 indicates that the controller handles all packets going through the switches, which is very unusual but may happen in some scenarios, such as testing a new protocol or rebooting a switch; q = 0 means that a switch can forward all packets it receives without consulting the controller. At the high arrival rate, the service time at the largest packet-in probability is about ten times that at the smallest, which means packet-in messages may impair network performance significantly.
Measurements have shown that the probability of packet-in messages is low in a normal production network. So we can conclude that deploying OpenFlow does not decrease the performance of networks in normal situations. However, adding switches to an existing network will cause many requests to the controller, because the new switches have no flow entries installed: at first they cannot forward any packets, and their forwarding time is much longer than that of the other switches, which may reduce the performance of the network.
Assume a controller connects to k switches and processes packet-in messages at a rate of μ_c per second. Keeping the probability of packet-in messages fixed, and the rates at which the switches forward packets to the controller and to the output ports at μ_1 and μ_2 packets per second, assume further that packets arrive at all switches at the same rate λ. As shown in Figure 7, the sojourn time at the controller increases with the number of switches, and the more traffic arrives at the switches, the faster it increases. Still, all three curves increase very slowly: at the highest arrival rate, the sojourn time of a controller connected to 50 switches is only slightly larger than that of a controller connected to a single switch.
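Under the M/M/1 controller model above, the trend of Figure 7 can be reproduced by sweeping the number of switches; the rates used here are hypothetical stand-ins for the values in the evaluation.

```python
def controller_sojourn(k, q, lam, mu_c):
    """M/M/1 sojourn time at a controller serving k identical switches,
    each generating packet-in messages at rate q * lam."""
    lam_c = k * q * lam
    assert lam_c < mu_c, "controller overloaded"
    return 1.0 / (mu_c - lam_c)

# Hypothetical values: q = 0.04, 500 pkt/s per switch, 5000 packet-in/s capacity.
times = [controller_sojourn(k, q=0.04, lam=500.0, mu_c=5000.0) for k in (1, 10, 50)]
# The sojourn time grows with k, but only slowly while the controller
# utilization k*q*lam / mu_c remains well below 1.
assert times[0] < times[1] < times[2]
```

Because the packet-in rate per switch is only q·λ, even 50 switches load this hypothetical controller to 20% utilization, which is why the curves in Figure 7 rise so gently.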
Figure 8 shows the sojourn time at a controller connected to 10 switches. The controller processes packet-in messages at rate μ_c per second, all ten switches connected to it receive packets at the same rate, and the rates at which the switches forward packets to the controller and to the output ports stay at μ_1 and μ_2 packets per second. As shown in Figure 8, the way q influences the performance of the controller is similar to the way it influences the switches. When packets arrive at the switches at a low rate, the probability of packet-in messages is not an issue for the network. However, when packets arrive at a high rate, the performance of the network decreases rapidly with the probability of packet-in messages.
V Conclusion and future work
This paper addresses the influence of the probability of packet-in messages on the performance of OpenFlow networks based on a queueing model. We capture the sojourn time of packets in both switches and controllers. The numerical analysis shows that the performance of switches is not an issue in OpenFlow networks when the probability of packet-in messages is low.
Although our derivation of the results is based on the assumption that packets arrive at switches and controllers following a Poisson process, this simple queueing model can be used to describe a real OpenFlow network, and the approach can be adjusted to other arrival processes with arbitrary inter-arrival time distributions.
In our future work, we will build a more accurate model of OpenFlow switches, in which buffer size and flow table size will be considered. We will also investigate how many switches a controller can handle without a significant performance penalty.
[1] S. Rowshanrad, S. Namvarasl, V. Abdi, M. Hajizadeh, and M. Keshtgary, “A survey on SDN, the future of networking,” Journal of Advanced Computer Science & Technology, vol. 3, no. 2, p. 232, 2014.
[2] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner, “OpenFlow: enabling innovation in campus networks,” ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 69–74, 2008.
[3] F. Benamrane, M. B. Mamoun, and R. Benaini, “Short: A case study of the performance of an OpenFlow controller,” in Networked Systems. Springer, 2014, pp. 330–334.
[4] S. Azodolmolky, R. Nejabati, M. Pazouki, P. Wieder, R. Yahyapour, and D. Simeonidou, “An analytical model for software defined networking: A network calculus-based approach,” in 2013 IEEE Global Communications Conference (GLOBECOM), Dec. 2013, pp. 1397–1402.
[5] M. Jarschel, S. Oechsner, D. Schlosser, R. Pries, S. Goll, and P. Tran-Gia, “Modeling and performance evaluation of an OpenFlow architecture,” in Proceedings of the 23rd International Teletraffic Congress, 2011, pp. 1–7.
[6] A. Bianco, R. Birke, L. Giraudo, and M. Palacin, “OpenFlow switching: Data plane performance,” in 2010 IEEE International Conference on Communications (ICC). IEEE, 2010, pp. 1–5.
[7] Z. Bozakov and A. Rizk, “Taming SDN controllers in heterogeneous hardware environments,” in 2013 Second European Workshop on Software Defined Networks. IEEE, 2013, pp. 50–55.
[8] A. Shalimov, D. Zuikov, D. Zimarina, V. Pashkov, and R. Smeliansky, “Advanced study of SDN/OpenFlow controllers,” in Proceedings of the 9th Central & Eastern European Software Engineering Conference in Russia. ACM, 2013, p. 1.
[9] “OpenFlow switch specification,” 2012.
[10] H. Zekun, “SDN solution on Tencent network between data centers,” Designing Techniques of Posts and Telecommunication, no. 5, pp. 59–62, 2014.
[11] S. Jain, A. Kumar, S. Mandal, J. Ong, L. Poutievski, A. Singh, S. Venkata, J. Wanderer, J. Zhou, M. Zhu et al., “B4: Experience with a globally-deployed software defined WAN,” ACM SIGCOMM Computer Communication Review, vol. 43, no. 4, pp. 3–14, 2013.
[12] A. Papoulis and S. U. Pillai, Probability, Random Variables, and Stochastic Processes. Tata McGraw-Hill Education, 2002.
[13] B. Xiong, K. Yang, J. Zhao, W. Li, and K. Li, “Performance evaluation of OpenFlow-based software-defined networks based on queueing model,” Computer Networks, vol. 102, pp. 172–185, 2016.
[14] W. J. Stewart, Probability, Markov Chains, Queues, and Simulation: The Mathematical Basis of Performance Modeling. Princeton University Press, 2009.
[15] F. Wamser, R. Pries, D. Staehle, K. Heck, and P. Tran-Gia, “Traffic characterization of a residential wireless internet access,” Telecommunication Systems, vol. 48, no. 1–2, pp. 5–17, 2011.