In contrast to the control plane, the data plane, also known as the forwarding plane, is responsible for forwarding traffic to the next hop toward the destination network based upon control plane logic. As we see in the diagram, the Southbound interface between controllers and network devices uses the OpenFlow protocol, whereas the Northbound portion of the control layer uses the API to support the services and applications.
SDN Development Barriers
SDN Latency Issues
SDN is clearly an integral component of next-generation carrier services and networks. However, though control and data functionality is compartmentalized to facilitate high flexibility, as an enterprise network grows, an SDN controller may experience difficulty managing its burgeoning traffic. As such, controllers have the propensity to become bottlenecks in our network (Xavier & Seol, 2014). This is particularly evident in high-traffic data centers, where routes must be set up within a few hundred milliseconds, and in cellular networks, where routes must be designated within 30-40 ms to ensure efficient connectivity.
A few issues that influence latency are the speed of control programs, switch and controller responsiveness when modifying forwarding state, and the latency incurred by the logically centralized controller. Another factor is inbound latency, which may average 8 ms as a flow is processed for the first time and the switch generates events. Outbound latency may also be high, averaging 3 ms per rule for insertion and 30 ms per rule for modification of forwarding rules. Also of note, there may be significant latency differences between switches with different chipsets and firmware. This is borne out by Ferguson et al. (2013), who postulate that the primary causal factor for latency is hardware design. Specifically, rules must be organized in switch hardware tables in priority order, and concurrent switch control actions must contend for finite bus bandwidth between the CPU and the ASIC of the switch.
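The representative figures above can be combined into a rough latency budget. The sketch below is a hypothetical back-of-the-envelope model, not a measurement tool; the constants simply reuse the averages quoted in this section.

```python
# Back-of-the-envelope model of SDN rule-setup latency.
# The constants are the representative averages quoted above (in ms);
# real values vary widely by switch chipset and firmware.
INBOUND_MS = 8    # packet_in generation and delivery
INSERT_MS = 3     # per-rule insertion (outbound)
MODIFY_MS = 30    # per-rule modification (outbound)

def rule_setup_latency(inserts: int, modifies: int) -> int:
    """Latency to react to a new flow: one inbound event plus the
    outbound cost of the forwarding-state changes it triggers."""
    return INBOUND_MS + inserts * INSERT_MS + modifies * MODIFY_MS

# A route spanning five switches, one new rule inserted in each:
print(rule_setup_latency(inserts=5, modifies=0))   # 23 ms
# A single rule modification already consumes most of a 30-40 ms
# cellular route-setup budget:
print(rule_setup_latency(inserts=0, modifies=1))   # 38 ms
```

Even this simplified model shows why a cellular deployment with a 30-40 ms budget leaves almost no headroom for rule modifications.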
To expand upon the issue of latency, we must remember that applications determine the routes traffic should take. Data packet events and state update operations are conveyed via the OpenFlow API to permit communication between the switch and controller. Yet we must also bear in mind that, while control plane logic resides between the switch and the central controller, switches still perform numerous steps to generate packet events and update forwarding state, as outlined below.
As shown below in Fig. 2, the schematic of an OpenFlow switch, when a packet arrives, the ASIC executes a lookup in the forwarding table of the switch. Should there be a match, the packet is forwarded at line rate. If there is no match, the ASIC sends the packet via the PCIe bus to the CPU of the switch, as indicated at (I2). This generates an OS interrupt, and the ASIC SDK passes the packet to the switch-side OpenFlow agent, as indicated at (I3). The agent then wakes, processes the packet, and sends the controller a 'packet_in' message containing metadata and the first 128 B of the packet. As may be deduced, these three steps induce inbound latency before the controller even receives the packet_in message.
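The table-miss path above can be sketched in a few lines. This is a purely illustrative model: the `PacketIn` class and `handle_miss` function are hypothetical stand-ins for the switch-side agent's behavior, not any real OpenFlow library.

```python
from dataclasses import dataclass

@dataclass
class PacketIn:
    """Hypothetical stand-in for an OpenFlow packet_in message."""
    buffer_id: int
    in_port: int
    data: bytes   # truncated copy of the packet sent to the controller

def handle_miss(packet: bytes, in_port: int, buffer_id: int) -> PacketIn:
    """Model the switch-side agent's work on a forwarding-table miss:
    the ASIC has punted the packet over PCIe to the switch CPU (I2),
    the SDK has handed it to the agent (I3), and the agent now wraps
    metadata plus the first 128 B of the payload into a packet_in."""
    return PacketIn(buffer_id=buffer_id, in_port=in_port, data=packet[:128])

# A 1500-byte frame misses the table; only 128 B travel to the controller:
msg = handle_miss(b"\x00" * 1500, in_port=3, buffer_id=42)
print(len(msg.data))   # 128
```

The truncation to 128 B is the key point: the controller reasons about flows from headers alone, while the full packet waits, buffered, on the switch.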
Other latency factors are forwarding table updates, which occur when the controller sends 'flow_mod' messages that modify the forwarding tables of the switch. At (O1), the OpenFlow agent running on the switch CPU parses the flow_mod message. At (O2), the agent translates it into a modification of the forwarding rules in the hardware tables, which customarily reside in TCAM. At (O3), the chip SDK may rearrange the rules currently in the tables to preserve priority ordering for high-priority rules. At (O4), the rule is inserted into, or removed from, the hardware table as required. Here again, as with the three steps of packet_in message creation, the four steps of flow_mod execution induce outbound latency.
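The rearrangement cost at (O3)/(O4) follows from the TCAM's priority ordering. The sketch below models a TCAM as a simple list kept in descending priority, assuming one "move" per displaced entry; the function name and cost model are illustrative, not taken from any vendor SDK.

```python
def tcam_insert(table, priority, rule):
    """Insert a rule into a priority-ordered TCAM model (highest first).
    Returns how many existing entries had to shift to make room, a
    rough proxy for the (O3)/(O4) rearrangement cost described above."""
    idx = 0
    while idx < len(table) and table[idx][0] >= priority:
        idx += 1                      # walk past higher-priority rules
    table.insert(idx, (priority, rule))
    return len(table) - idx - 1       # entries pushed down below it

# Inserting a mid-priority rule forces every lower-priority rule to move:
tcam = [(30, "r1"), (20, "r2"), (10, "r3")]
moves = tcam_insert(tcam, 25, "r4")
print(moves)   # 2: r2 and r3 both shifted
```

This is why the 30 ms modification figure dwarfs the 3 ms insertion figure: a rule appended at the lowest priority costs nothing extra, while one landing near the top of a full table can displace nearly every entry beneath it.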
In an SDN, applications for network management run atop an underlying logically centralized controller. However, there is no prevailing standard for controller interfaces, though industry organizations such as the ONF have attempted to standardize a Northbound controller API for interacting with applications (Fundation, 2012). The use of an API by multiple third-party applications sharing the same controller may pose a security threat to an SDN: a malicious application may exploit the API and breach the SDN network (Klaedtke, Karame, Bifulco & Cui, 2014). This threat may be more egregious where competing companies lease network slices, share network resources, and, as tenants, install their own third-party applications on the controller.
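One common mitigation is for the controller to enforce per-application permissions on its northbound API. The sketch below is a minimal, hypothetical illustration of that idea; the application names, action strings, and `authorize` function are invented for this example and do not come from any real controller platform.

```python
# Hypothetical per-application permission table a controller might keep
# for third-party apps sharing its northbound API. A tenant's app can
# read topology but cannot install forwarding rules.
ALLOWED = {
    "tenant_app": {"read_topology"},
    "admin_app":  {"read_topology", "install_flow"},
}

def authorize(app: str, action: str) -> bool:
    """Return True only if the named app holds the requested permission.
    Unknown apps get an empty permission set, so they are denied."""
    return action in ALLOWED.get(app, set())

print(authorize("tenant_app", "install_flow"))   # False: request blocked
print(authorize("admin_app", "install_flow"))    # True
```

In a multi-tenant slice-leasing scenario, a check like this is the difference between a tenant application observing the network and rewriting another tenant's forwarding state.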
Lack of Standards
At present, there is no standardized API or high-level programming language for the development of SDN applications. Indeed, each controller's control software provides its own API to applications. As regards SDN controllers, there are many platform implementations in languages such as Python, C++, and Java, proffered by entities such as Stanford, Rice University, Big Switch, NEC, the NTT OSRG group, the Linux Foundation, and others; examples include OpenFlow-based controllers and frameworks such as NOX, HyperFlow, DevoFlow, and Onix (Xavier & Seol, 2014). For enterprises evaluating a specific SDN manufacturer, this fragmentation is a major factor to consider when deciding which devices to deploy.