OpenFlow, the new networking technology that recently burst out of academia and into industry, has generated considerable buzz since Interop Las Vegas 2011. The protocol is simple, but its implications for network architectures and the overall US$16 billion switching market are far-reaching.
Origins
OpenFlow began at a consortium of universities, led by Stanford and Berkeley, as a way for researchers to use enterprise-grade Ethernet switches as customizable building blocks for academic networking experiments. They wanted their server software to have direct programmatic access to a switch’s forwarding tables, and so they created the OpenFlow protocol. The protocol itself is quite minimal: a 27-page spec defining an extremely low-level, yet powerful, set of primitives for modifying, forwarding, queuing and dropping matched packets. OpenFlow is like an x86 instruction set for the network, upon which layers of software can be built.
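To make that claim concrete, here is a minimal sketch in Python of the match-action abstraction the protocol exposes. The field names and table layout here are illustrative only, not the actual OpenFlow wire format, which defines exact header fields, priorities and encodings:

# Illustrative model of an OpenFlow-style flow table: each entry pairs a
# match on packet header fields with a list of actions. A packet matches
# an entry when every specified field agrees; unspecified fields act as
# wildcards, and the first matching entry (in priority order) wins.

def matches(rule_match, packet):
    return all(packet.get(field) == value
               for field, value in rule_match.items())

flow_table = [
    # Forward traffic destined to 10.0.0.1 out port 2, rewriting its MAC.
    {"match": {"ip_dst": "10.0.0.1"},
     "actions": [("set_field", "eth_dst", "aa:bb:cc:dd:ee:ff"),
                 ("output", 2)]},
    # Catch-all entry: an empty action list means drop.
    {"match": {}, "actions": []},
]

packet = {"in_port": 1, "ip_dst": "10.0.0.1"}
for entry in flow_table:
    if matches(entry["match"], packet):
        print(entry["actions"])
        break

Everything a switch does under OpenFlow reduces to populating and consulting tables of this shape, which is what makes the "instruction set" analogy apt.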
Three years ago, OpenFlow was driven entirely by a small number of universities and (gracious) switch vendors who believed in supporting research. OpenFlow allowed those researchers to test brand-new protocol designs and ideas safely on slices of production networks and traffic.
Two years ago, the programmability of OpenFlow started to attract interest from hyper-scale data center networking teams looking for a way to support massive map-reduce/Hadoop clusters. These clusters have a very specific network requirement: Every server needs equal networking bandwidth to every other server, a requirement commonly known as “full cross-sectional bandwidth.” (Note this is not the norm in large data centers today; oversubscription of 8x to 32x is common as a way to control costs.)
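For a back-of-the-envelope sense of what that oversubscription figure means, the ratio is simply aggregate server-facing capacity divided by uplink capacity. The port counts below are hypothetical, not drawn from any particular switch:

# Hypothetical top-of-rack switch: 48 server ports at 10 Gb/s,
# 4 uplinks at 10 Gb/s toward the rest of the data center.
downlink_gbps = 48 * 10   # 480 Gb/s of potential demand from servers
uplink_gbps = 4 * 10      # 40 Gb/s of capacity out of the rack
print(downlink_gbps / uplink_gbps)   # 12.0, i.e. 12x oversubscription

# Full cross-sectional bandwidth corresponds to a 1:1 ratio: every
# server can drive full line rate to any server in any other rack.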
Today, it is multi-tenant data center networking that is leading the way as OpenFlow moves from the domain of the hyper-scale data centers to IaaS providers and the enterprise data center.
Architecture in practice
Most current OpenFlow solutions incorporate a three-layer architecture, where the first layer comprises the all-important OpenFlow-enabled Ethernet switches. Typically these are physical Ethernet switches with the OpenFlow feature enabled, but we’ve also seen OpenFlow-enabled hypervisor/software switches and OpenFlow-enabled routers, and more devices are certainly coming.
There are two layers of server-side software: an OpenFlow Controller and OpenFlow software applications built on top of the Controller.
The Controller is a platform that speaks southbound directly with the switches using the OpenFlow protocol. Northbound, the Controller provides a number of functions for the OpenFlow software applications — these include marshalling the switch resources into a unified view of the network and providing coordination and common libraries to the applications.
At the top layer, the OpenFlow software applications implement the actual control functions for the network, such as switching and routing. The applications are simply software written on top of the unified network view and common libraries provided by the Controller. Thus, an application can focus on implementing a particular control algorithm and then leverage the OpenFlow layers below it to instantiate that algorithm in the network.
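The following Python skeleton is a rough sketch of that division of labor. The class and method names are hypothetical stand-ins, not the API of any real controller (controllers such as NOX and Beacon expose much richer interfaces), but the layering matches the description above: the Controller owns the switch connections and the network view, while the application only computes control decisions.

# Hypothetical sketch of the Controller/application split.

class Controller:
    """Bottom layer of the server-side software: speaks OpenFlow
    southbound, exposes a unified network view northbound."""

    def __init__(self):
        self.topology = {}   # switch id -> list of (port, neighbor) links
        self.apps = []

    def register(self, app):
        self.apps.append(app)

    def switch_joined(self, switch_id, links):
        # In reality this state comes from OpenFlow discovery messages.
        self.topology[switch_id] = links

    def packet_in(self, switch_id, packet):
        # Packets with no matching flow entry are punted to the
        # applications for a decision.
        for app in self.apps:
            app.handle_packet(self, switch_id, packet)

    def install_flow(self, switch_id, match, actions):
        # In reality this would emit an OpenFlow flow-mod to the switch.
        print(f"flow-mod to {switch_id}: match={match} actions={actions}")

class LearningSwitchApp:
    """Top layer: a control algorithm (here, basic MAC learning)
    written against the Controller's northbound API."""

    def __init__(self):
        self.mac_to_port = {}

    def handle_packet(self, controller, switch_id, packet):
        # Learn which port the source MAC lives on...
        self.mac_to_port[(switch_id, packet["eth_src"])] = packet["in_port"]
        # ...and if the destination is known, pin a forwarding rule.
        out_port = self.mac_to_port.get((switch_id, packet["eth_dst"]))
        if out_port is not None:
            controller.install_flow(switch_id,
                                    match={"eth_dst": packet["eth_dst"]},
                                    actions=[("output", out_port)])

controller = Controller()
controller.register(LearningSwitchApp())
controller.switch_joined("s1", links=[])
controller.packet_in("s1", {"eth_src": "aa:aa", "eth_dst": "bb:bb", "in_port": 1})
controller.packet_in("s1", {"eth_src": "bb:bb", "eth_dst": "aa:aa", "in_port": 2})

Note that the application never touches the OpenFlow protocol itself; swapping in a different control algorithm (routing, load balancing, access control) means replacing only the top class.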
This three-layer OpenFlow Architecture should feel very familiar to software architects. For example, consider the Web application server architecture: applications sitting on top of a Web application server sitting on top of a database layer. Each of the lower layers presents an abstraction/API upward that simplifies the design of the layers above it.
Today, the term “OpenFlow” is used in two senses: it can refer either to the “OpenFlow Protocol,” the tightly defined “x86 instruction set for the network,” or to an “OpenFlow Architecture,” with its layers of switches, controllers and applications.
Disruption
OpenFlow has been a controversial topic in the networking industry in part because of early claims that the goal was to commoditize switching hardware. Obviously, given that the protocol requires cooperation between switching hardware and controller software, this goal was something of a non-starter for the switching partners that needed to get involved in the effort. While this debate still goes on in some corners of the industry, most of the companies sitting close to OpenFlow have already seen that it is a way to accelerate innovation and actually differentiate their hardware and overall solutions.
The big picture is that OpenFlow and the larger movement in the networking industry called “Software-Defined Networking” promise true disruption because they enable rapid innovation — new networking functionality implemented as a combination of software applications and programmable devices, effectively bypassing the multi-year approval/implementation stages of traditional networking protocols. This acceleration is possible because of the layered design of the software/hardware architecture.
Networking, in short, is an industry ripe for this sort of transformation. Today, OpenFlow architectures are starting to be deployed, with a few targeted applications rolling out. Over the next few months and years, we should see a steady progression of new networking software applications coming to market from new ecosystems of companies, delivering to customers exactly what they need from the network. That’s the grand vision, and that’s what’s causing all the buzz.
(From Network World U.S.)