
Software-Defined Networks


10 June 2014



Introduction to Software Defined Networks

Part 1, Virtualization Basics

Part 1 introduces server and network virtualization, as well as virtual machine (VM) mobility, as a background for introducing Software Defined Networks (aka network virtualization) in Part 2.


Part 2, The Future of Networking - Now

Part 2 discusses the drivers of Software Defined Networks (SDN), what SDN is and how it differs from conventional networking, SDN in the data center, and impacts and ramifications of SDN.


Software-Defined Networks

Up until a few years ago, storage, computing, and network resources were intentionally kept physically and operationally separate from one another. Even the systems used to manage those resources were separated, often physically. Applications that interacted with any of these resources, such as an operational monitoring system, were also kept at arm's length, behind their own access policies, systems, and access procedures, all in the name of security. This is the way IT departments liked it. It was really only after the introduction of (and demand for) inexpensive computing power, storage, and networking in data center environments that organizations were forced to bring these different elements together. It was a paradigm shift that also brought the applications that manage and operate these resources much, much closer to them than ever before.

Data centers were originally designed to physically separate traditional computing elements (e.g., PC servers), their associated storage, and the networks that interconnected them with client users. The computing power that existed in these types of data centers became focused on specific server functionality—running applications such as mail servers, database servers, or other such widely used functionality in order to serve desktop clients. Previously, those functions—which were executed on the often thousands (or more) of desktops within an enterprise organization—were handled by departmental servers that provided services dedicated only to local use. As time went on, the departmental servers migrated into the data center for a variety of reasons—first and foremost, to facilitate ease of management, and second, to enable sharing among the enterprise’s users.

It was around 10 years ago that an interesting transformation took place. A company called VMware had invented an interesting technology that allowed a host operating system, such as one of the popular Linux distributions, to execute one or more client operating systems (e.g., Windows). What VMware did was create a small program that presented a virtual environment synthesizing a real computing environment (e.g., a virtual NIC, BIOS, sound adapter, and video) and then marshaled real resources between the virtual machines. This supervisory program was called a hypervisor.

Originally, VMware was designed for engineers who wanted to run Linux for most of their computing needs and Windows (which was the corporate norm at the time) only for those situations that required that specific OS environment to execute. When they were finished, they would simply close Windows as if it were another program, and continue on with Linux. This had the interesting effect of allowing a user to treat the client operating system as if it were just a program consisting of a file (albeit large) that existed on her hard disk. That file could be manipulated as any other file could be (i.e., it could be moved or copied to other machines and executed there as if it were running on the machine on which it was originally installed). Even more interestingly, the operating system could be paused without it knowing, essentially causing it to enter into a state of suspended animation.

With the advent of operating system virtualization, the servers that typically ran a single, dedicated operating system, such as Microsoft Windows Server, and the applications specifically tailored for that operating system could now be viewed as a ubiquitous computing and storage platform. With further advances and increases in memory, computing, and storage, data center compute servers were increasingly capable of executing a variety of operating systems simultaneously in a virtual environment. VMware expanded its single-host version to a more data-center-friendly environment that was capable of executing and controlling many hundreds or thousands of virtual machines from a single console. Operating systems such as Windows Server that previously occupied an entire “bare metal” machine were now executed as virtual machines, each running whatever applications client users demanded. The only difference was that each was executing in its own self-contained environment that could be paused, relocated, cloned, or copied (i.e., as a backup). Thus began the age of elastic computing.

Within the elastic computing environment, operations departments were able to move servers to any physical data center location simply by pausing a virtual machine and copying a file. They could even spin up new virtual machines simply by cloning the same file and telling the hypervisor to execute it as a new instance. This flexibility allowed network operators to start optimizing the data center resource location and thus utilization based on metrics such as power and cooling. By packing together all active machines, an operator could turn down cooling in another part of a data center by sleeping or idling entire banks or rows of physical machines, thus optimizing the cooling load on a data center. Similarly, an operator could move or dynamically expand computing, storage, or network resources by geographical demand.
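
As a rough illustration of how this works in practice, the sketch below uses the libvirt Python bindings, one of several possible tool chains and not one named in the text, to save a running virtual machine's state to a file and restore it elsewhere. The connection URIs, domain name, and file path are placeholders, not values from the text.

    # A minimal sketch, assuming the libvirt Python bindings and QEMU/KVM hosts.
    # Domain name, URIs, and paths below are placeholders.
    import libvirt

    SRC_URI = "qemu:///system"               # hypervisor on the source host
    DST_URI = "qemu+ssh://dst-host/system"   # hypervisor on the destination host
    VM_NAME = "app-server-01"
    STATE_FILE = "/var/lib/libvirt/save/app-server-01.sav"

    # save() pauses the guest and writes its memory/CPU state to an ordinary
    # file that can be copied around like any other file.
    src = libvirt.open(SRC_URI)
    dom = src.lookupByName(VM_NAME)
    dom.save(STATE_FILE)

    # After copying STATE_FILE (and the disk image) to the destination host,
    # the same guest can be resumed there as if nothing had happened.
    dst = libvirt.open(DST_URI)
    dst.restore(STATE_FILE)

Cloning is similar in spirit: copy the disk image, define a new domain that points at the copy, and start it as a fresh instance.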

As with all advances in technology, this newly discovered flexibility in the operational deployment of computing, storage, and networking resources brought about a new problem of operational efficiency: not only maximizing the utilization of storage and computing power, but also managing power and cooling. As mentioned earlier, network operators began to realize that demand for computing power generally increased over time. To keep up with this demand, IT departments (which typically budget on a yearly basis) would order all the equipment they predicted would be needed for the following year. However, once this equipment arrived and was placed in racks, it would consume power, cooling, and space resources, even if it was not yet used! This was the dilemma first encountered at Amazon. At the time, Amazon's business was growing at the rate of a "hockey stick" graph, doubling every six to nine months, and the capacity of its computing services, which served its retail ordering, stock, and warehouse management systems as well as its internal IT systems, had to stay ahead of that demand. As a result, Amazon's IT department was forced to order large quantities of storage, network, and computing resources in advance, but then faced the dilemma of having that equipment sit idle until demand caught up with it. Amazon Web Services (AWS) was invented as a way to commercialize this unused resource pool so that it would be utilized at a rate closer to 100%. When Amazon's internal systems needed more capacity, AWS would simply scale back what was offered to external compute users; when internal demand slackened, those external users could soak up the unused resources. Some call this elastic computing services, but this book calls it hyper virtualization.

It was only then that companies like Amazon and Rackspace, which were buying storage and computing in huge quantities for pricing efficiency, realized they were not utilizing all of their computing and storage and could resell the spare capacity to external users in an effort to recoup some of their capital investments. This gave rise to the multitenant data center. It also, of course, created a new problem: how to keep thousands of potential tenants separate when their resources needed to be spread arbitrarily across virtual machines in different physical data centers.

Another way to understand this dilemma is to note that during the move to hyper virtualized environments, execution environments were generally run by a single enterprise or organization. That is, the organization typically owned and operated all of the computing and storage (although some rented co-location space), treating it as a single, flat local area network (LAN) interconnecting a large number of virtual or physical machines and network-attached storage. (The exception was financial institutions, where regulatory requirements mandated separation.) Even there, the number of departments was relatively small, fewer than 100, and so the separation was easily achieved using existing tools such as layer 2 or layer 3 MPLS VPNs. In both cases, though, the network that linked all of the computing and storage resources up to that point was rather simplistic: generally a flat Ethernet LAN connecting all of the physical and virtual machines. Most of these environments assigned IP addresses to all of the devices (virtual or physical) from a single network (perhaps divided into IP subnets), because a single enterprise owned the machines and needed access to them. This also meant that moving virtual machines between data centers within that enterprise was generally not a problem because, again, they all fell within the same routed domain and could reach one another regardless of physical location.

In a multitenant data center, by contrast, computing, storage, and network resources can be offered in slices that are independent of, or isolated from, one another. It is, in fact, critical that they be kept separate. This posed some interesting challenges that were not present in the single-tenant data center environment of the past. Keep in mind that such an environment allows the execution of any number of operating systems, and applications on top of those operating systems, but each needs a unique network address if it is to be accessed by its owner or by external users such as customers. In the past, addresses could be assigned from a single internal block of possibly private addresses and easily routed internally. Now, however, each tenant needs unique addresses that are externally routable and accessible.

Furthermore, consider that each virtual machine in question also has a unique layer 2 address. When a router delivers a packet, it ultimately has to deliver it using Ethernet, not just IP. This is generally not an issue until you consider virtual machine mobility (VM mobility), in which virtual machines are relocated for power, cooling, or compute-consolidation reasons. Herein lies the rub: physical relocation means physical address relocation. It also potentially means changes to layer 3 routing in order to ensure that packets previously destined for that machine in its original location are now delivered to its new location.
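
To make the isolation and addressing problem concrete, here is a toy sketch in the spirit of VXLAN-style overlay segments (a technique the text does not name); the tenant names, segment identifiers, and addresses are invented for illustration only.

    # Illustrative only: tenants, VNIs, and addresses below are invented.
    # The point is that a per-tenant segment ID disambiguates otherwise
    # overlapping (or relocated) tenant addresses inside a shared data center.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class OverlayEndpoint:
        tenant: str   # who owns the VM
        vni: int      # per-tenant virtual network identifier (e.g., a VXLAN VNI)
        vm_ip: str    # address as the tenant sees it (may overlap across tenants)
        host: str     # physical server currently running the VM

    endpoints = [
        OverlayEndpoint("tenant-a", 10001, "10.0.0.5", "rack1-host7"),
        OverlayEndpoint("tenant-b", 10002, "10.0.0.5", "rack4-host2"),  # same IP, no conflict
    ]

    def locate(vni, vm_ip):
        """Forwarding lookup keyed on (VNI, IP), not on IP alone."""
        for ep in endpoints:
            if ep.vni == vni and ep.vm_ip == vm_ip:
                return ep.host
        return None

    # VM mobility: only the (VNI, IP) -> host mapping changes when a VM moves;
    # the tenant-visible addresses stay the same.
    print(locate(10001, "10.0.0.5"))  # -> rack1-host7
    print(locate(10002, "10.0.0.5"))  # -> rack4-host2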

At the same time data centers were evolving, network equipment seemed to stand still in terms of innovations beyond feeds and speeds. That is, beyond the steady increase in switch fabric capacities and interface speeds, data communications had not evolved much since the advent of IP, MPLS, and mobile technologies. IP and MPLS allowed a network operator to create networks, and virtual network overlays on top of those base networks, much in the way that data center operators were able to create virtual machines to run over physical ones with the advent of compute virtualization. Network virtualization was generally referred to as virtual private networks (VPNs) and came in a number of flavors, including point-to-point (e.g., a personal VPN such as you might run on your laptop to connect to your corporate network); layer 3 VPNs (virtualizing an IP or routed network, for example to allow a network operator to securely host enterprises in a manner that isolates their traffic from that of other enterprises); and layer 2 VPNs (switched network virtualization that isolates traffic similarly to a layer 3 VPN, except that the addresses used are Ethernet addresses).

Commercial routers and switches typically come with management interfaces that allow a network operator to configure and otherwise manage these devices. Some examples of management interfaces include command-line interfaces, XML/NETCONF, graphical user interfaces (GUIs), and the Simple Network Management Protocol (SNMP). These options provide an interface that gives an operator suitable access to a device's capabilities, but they still often hide the lowest levels of detail from the operator. For example, network operators can program static routes or other static forwarding entries, but those ultimately are requests that are passed through the device's operating system. This is generally not a problem until one wants to program using the syntax or semantics of functionality that does not yet exist in a device. If someone wishes to experiment with a new routing protocol, they cannot do so on a device whose firmware has not been written to support that protocol. In such cases, it was common for a customer to make a feature enhancement request of a device vendor and then wait some amount of time (several years was not out of the ordinary).
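
As one concrete example of such a management interface, the hedged sketch below uses the ncclient library (not named in the text) to push configuration over NETCONF. The host details and the XML payload are placeholders, since the actual schema for a static route is vendor- and YANG-model-specific.

    # A minimal sketch, assuming the ncclient library and a NETCONF-capable device.
    # Host, credentials, and the configuration payload are placeholders.
    from ncclient import manager

    ROUTE_CONFIG = """
    <config>
      <!-- vendor/YANG-model-specific static route definition goes here -->
    </config>
    """

    with manager.connect(
        host="192.0.2.1",        # documentation address, not a real device
        port=830,
        username="admin",
        password="admin",
        hostkey_verify=False,
    ) as m:
        # The request still passes through the device's management plane and OS;
        # NETCONF exposes configuration, not the low-level forwarding hardware.
        m.edit_config(target="running", config=ROUTE_CONFIG)
        print(m.get_config(source="running").data_xml[:200])

Even here, the request terminates in the device's own configuration machinery, which is exactly the limitation the paragraph above describes.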

At the same time, the concept of a centralized (at least logically) control plane came back onto the scene. A network device comprises a data plane, which is often a switch fabric connecting the various network ports on the device, and a control plane, which is the brains of the device. Routing protocols, for example, which are used to construct loop-free paths within a network, are most often implemented in a distributed manner: each device in the network runs a control plane that implements the protocol, and these control planes communicate with one another to coordinate network path construction. In a centralized control plane paradigm, however, a single (or at least logically single) control plane would exist. This über brain would push commands to each device, instructing it to manipulate its physical switching and routing hardware. It is important to note that although the hardware that executed the data planes of these devices remained quite specialized, and thus expensive, the control plane continued to gravitate toward less and less expensive, general-purpose computing, such as the central processing units produced by Intel.
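
One way to picture the contrast is the purely illustrative sketch below (the switch names and link costs are invented): a logically centralized control plane holds the entire topology, runs a shortest-path computation once over that global view, and then pushes the resulting next hops to each data-plane device, instead of every device running its own protocol instance.

    # Illustrative sketch of a logically centralized control plane: the controller
    # holds the whole topology, computes paths, and pushes next hops to devices.
    import heapq

    TOPOLOGY = {                      # adjacency map: node -> {neighbor: link cost}
        "sw1": {"sw2": 1, "sw3": 4},
        "sw2": {"sw1": 1, "sw3": 1},
        "sw3": {"sw1": 4, "sw2": 1},
    }

    def shortest_next_hops(src):
        """Dijkstra over the global view; returns {destination: next hop from src}."""
        dist, first_hop, heap = {src: 0}, {}, [(0, src, None)]
        while heap:
            d, node, via = heapq.heappop(heap)
            if d > dist.get(node, float("inf")):
                continue                      # stale entry
            if via is not None:
                first_hop.setdefault(node, via)
            for nbr, cost in TOPOLOGY[node].items():
                nd = d + cost
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr] = nd
                    heapq.heappush(heap, (nd, nbr, via if via else nbr))
        return first_hop

    # The "push" step: in a real system this would be a southbound protocol message.
    for device in TOPOLOGY:
        print(device, "->", shortest_next_hops(device))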

All of these aforementioned concepts are important, as they created the nucleus of motivation for what has evolved into what today is called software-defined networking (SDN). Early proponents of SDN saw that network device vendors were not meeting their needs, particularly in the feature development and innovation spaces. High-end routing and switching equipment was also viewed as being highly overpriced, at least for the control plane components of the devices. At the same time, these proponents saw the cost of raw, elastic computing power diminishing rapidly, to the point where having thousands of processors at one's disposal was a reality. It was then that they realized this processing power could possibly be harnessed to run a logically centralized control plane, potentially even using inexpensive, commodity-priced switching hardware. A few engineers from Stanford University created a protocol called OpenFlow that could be implemented in just such a configuration. OpenFlow was architected for a number of devices containing only data planes, which respond to commands sent to them from a (logically) centralized controller that houses the single control plane for that network. The controller is responsible for maintaining all of the network paths, as well as programming each of the network devices it controls. The commands and the responses to those commands are described in the OpenFlow protocol. It is worth noting that the Open Networking Foundation (ONF) commercially supported the SDN effort and today remains its central standardization authority and marketing organization. Based on the basic architecture just described, one can imagine how quickly and easily a new networking protocol could be devised and deployed by simply implementing it within a data center on commodity-priced hardware; even better, one could implement it in an elastic computing environment in a virtual machine.
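
A concrete, if hedged, illustration of this split: the sketch below uses the Ryu controller framework, one of several OpenFlow controllers and not one named in the text, to install a single table-miss flow entry on a dataplane-only switch when it connects. The priority and table choices are arbitrary.

    # A minimal sketch, assuming the Ryu OpenFlow controller framework (OpenFlow 1.3).
    # It installs a table-miss flow that sends unmatched packets to the controller,
    # i.e., to the (logically centralized) control plane.
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class TableMissApp(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def switch_features_handler(self, ev):
            datapath = ev.msg.datapath          # the dataplane-only device
            ofproto = datapath.ofproto
            parser = datapath.ofproto_parser

            # Match everything; send to the controller when no other rule matches.
            match = parser.OFPMatch()
            actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                              ofproto.OFPCML_NO_BUFFER)]
            inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                                 actions)]
            mod = parser.OFPFlowMod(datapath=datapath, priority=0,
                                    match=match, instructions=inst)
            datapath.send_msg(mod)              # the OpenFlow command to the switch

If the Ryu tool chain is available, an application like this would be launched with ryu-manager against OpenFlow 1.3 switches; the controller, not the switches, decides what forwarding state exists.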

A slightly different view of SDN is what some in the industry refer to as software-driven networks, as opposed to software-defined networks. This play on words is not meant to confuse the reader, but instead to highlight a difference in philosophy of approach. In the software-driven view, OpenFlow and its architecture are seen as a distinct subset of what is possible. Rather than viewing the network as being composed of logically centralized control planes with brainless network devices, one views the world as more of a hybrid of the old and the new. More to the point, it is unrealistic to think that existing networks are going to be dismantled wholesale to make way for the new world proposed by the ONF and software-defined networks. It is also unrealistic to discard all of the advances in network technology that exist today and are responsible for things like the Internet. Instead, the more likely outcome is a hybrid approach, whereby some portions of networks are operated by a logically centralized controller while other parts are run by the more traditional distributed control plane. This also implies that those two worlds need to interwork with each other.

It is interesting to observe that at least one of the major goals of SDN and OpenFlow proponents is greater and more flexible network device programmability. This does not necessarily have anything to do with the location of the network control and data planes; rather, it is concerned with how they are programmed. Do not forget that one of the motivations for creating SDN and OpenFlow was flexibility in how one could program a network device, not just where it is programmed. In the SDN architecture just described, both of those questions are addressed. The remaining question is whether that particular form of programmability is the optimal choice.

To address this, individuals representing Juniper, Cisco, Level3, and other vendors and service providers have recently spearheaded an effort around network programmability called the Interface to the Routing System (I2RS). A number of people from these organizations have contributed to several IETF drafts, including the primary requirements and framework drafts, to which Alia Atlas, David Ward, and Tom Nadeau have been primary contributors. In the near future, at least a dozen drafts around this topic should appear online; clearly there is great interest in this effort. The basic idea behind I2RS is to create a protocol and components that act as a means of programming a network device's routing information base (RIB) using a fast-path protocol that allows a quick cut-through of provisioning operations, in order to allow real-time interaction with the RIB and the RIB manager that controls it. Previously, the only access one had to the RIB was via the device's configuration system (in Juniper's case, NETCONF or SNMP).

The key to understanding I2RS is that it is most definitely not just another provisioning protocol; a number of other key concepts comprise an entire solution to the overarching problem of speeding up the feedback loop between network elements, network programming, state and statistics gathering, and post-processing analytics. Today, this loop is painfully slow, and those involved in I2RS believe the key to the future of programmable networks lies in optimizing it. To this end, I2RS provides varying levels of abstraction in terms of programmability of network paths, policies, and port configuration, but in all cases it has the advantage of allowing for adult supervision of said programming as a means of checking the commands prior to committing them. For example, some protocols exist today for programming at the hardware abstraction layer (HAL), which is far too granular or detailed for efficient network operation and in fact places an undue burden on operational systems. Another example is providing operational support system (OSS) applications quick and optimal access to the RIB in order to program changes, witness the results, and then quickly reprogram to optimize the network's behavior. One key aspect of all of these examples is that the discourse between the applications and the RIB occurs via the RIB manager. This is important, as many operators would like to preserve their operational and workflow investment in the routing protocol intelligence that exists in device operating systems such as Junos or IOS-XR, while leveraging this new and useful programmability paradigm to allow additional levels of optimization in their networks.
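
Because the I2RS drafts were still being written at the time, there is no stable client library to show here; the sketch below is purely hypothetical and only illustrates the shape of the interaction being described, i.e., an application asking a RIB manager (rather than the forwarding hardware directly) to install a route, with the RIB manager vetting the request before committing it. All names and fields are invented.

    # Hypothetical illustration only: I2RS defines the concepts (RIB manager,
    # route programming, state read-back), not this API. Names are invented.
    import json

    def build_route_request(rib_name, prefix, next_hop, preference=10):
        """Shape of a route-install request an application might hand to a RIB manager."""
        return {
            "operation": "route-add",
            "rib": rib_name,
            "route": {"prefix": prefix, "next-hop": next_hop,
                      "preference": preference},
        }

    def rib_manager_apply(request, existing_routes):
        """The RIB manager vets the request ('adult supervision') before committing."""
        route = request["route"]
        if any(r["prefix"] == route["prefix"] and r["preference"] <= route["preference"]
               for r in existing_routes):
            return {"status": "rejected", "reason": "better route already installed"}
        existing_routes.append(route)
        return {"status": "installed", "rib": request["rib"]}

    rib = [{"prefix": "198.51.100.0/24", "next-hop": "203.0.113.1", "preference": 5}]
    req = build_route_request("default-ipv4", "192.0.2.0/24", "203.0.113.2")
    print(json.dumps(rib_manager_apply(req, rib), indent=2))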

I2RS also lends itself well to the growing desire to logically centralize routing and path decisions and programmability. The protocol is required to be able to run either on a device or outside of it. In this way, centralized controller functionality is embraced in cases where it is desired, while cases that call for more classic distributed control are supported as well.

Finally, another key subcomponent of I2RS is a normalized and abstracted topology. This topology is to be represented by a common and extensible object model, and the service allows multiple abstractions of the topological representation to be exposed. A key aspect of this model is that non-routers (i.e., devices and applications that are not routing protocol speakers) can more easily manipulate and change RIB state going forward. Today, non-routers have great difficulty getting at this information, if they can at all. Going forward, components of a network management/OSS system, analytics engines, or other applications that we cannot yet envision will be able to interact quickly and efficiently with routing state and network topology.
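
As a rough picture of what such an object model might look like, the classes and fields below are invented for illustration and are not taken from any I2RS draft; the idea is simply that a normalized topology can be as plain as typed nodes and links, with extensible attributes, that non-routers can read and reason about.

    # Invented illustration of a normalized, extensible topology model.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Node:
        node_id: str
        attributes: Dict[str, str] = field(default_factory=dict)  # extensible

    @dataclass
    class Link:
        src: str
        dst: str
        metric: int = 1
        attributes: Dict[str, str] = field(default_factory=dict)

    @dataclass
    class Topology:
        nodes: List[Node] = field(default_factory=list)
        links: List[Link] = field(default_factory=list)

        def neighbors(self, node_id):
            return [l.dst for l in self.links if l.src == node_id]

    topo = Topology(
        nodes=[Node("r1"), Node("r2", {"role": "edge"})],
        links=[Link("r1", "r2", metric=10)],
    )
    print(topo.neighbors("r1"))  # ['r2']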

So, to bring these thoughts together, it is appropriate that we define SDN as what we think it is and will become:

Software-defined networks (SDN): an architectural approach that optimizes and simplifies network operations by more closely binding the interaction (i.e., provisioning, messaging, and alarming) among applications and network services and devices, whether they be real or virtualized. It often is achieved by employing a point of logically centralized network control—which is often realized as an SDN controller—which then orchestrates, mediates, and facilitates communication between applications wishing to interact with network elements and network elements wishing to convey information to those applications. The controller then exposes and abstracts network functions and operations via modern, application-friendly and bidirectional programmatic interfaces.

So, as you can see, software-defined, software-driven, and programmable networks come with a rich and complex historical lineage, a set of challenges, and a variety of solutions to those challenges. It is the success of the technologies that preceded software-defined, software-driven, and programmable networks that makes advancing technology based on them possible. The fact of the matter is that most of the world's networks, including the Internet, operate on the basis of IP, BGP, MPLS, and Ethernet. Virtualization technology today is based on the work VMware began years ago, which continues to underpin its products and those of others. Network-attached storage enjoys a similarly rich history.

I2RS has a similar future ahead of it insofar as it addresses the problems of network, compute, and storage virtualization, as well as the programmability, accessibility, location, and relocation of the applications that execute within these hyper virtualized environments.

Although SDN controllers continue to rule the roost when it comes to press coverage, many other advances have taken place just in the time we have been writing this book. One very interesting and promising one is the Open Daylight Project. Open Daylight's mission is to facilitate a community-led, industry-supported open source framework, including code and architecture, to accelerate and advance a common, robust software-defined networking platform. To this end, Open Daylight is hosted under the Linux Foundation's umbrella and will facilitate a truly game-changing, and potentially field-leveling, effort around SDN controllers. This effort will also spur innovation where we think it matters most in this space: applications. While we have seen many advances in controllers over the past few years, controllers really represent the foundational infrastructure for SDN-enabled applications. In that vein, the industry has struggled to design and develop controllers over the past few years while mostly ignoring applications. We think that SDN is, at the end of the day, really about operational optimization and efficiency, and the best way to achieve this is to quickly check off that infrastructure and allow the industry to focus on innovating in the application and device layers of the SDN architecture.

This book focuses on the network aspects of software-defined, software-driven, and programmable networks, while giving sufficient coverage to the virtualization, location, and programming of the storage, network, and compute aspects of the equation. It is the goal of this book to explore the details and motivations around the advances in network technology that gave rise to, and support, the hyper virtualization of network, storage, and computing resources that is now considered to be part of SDN.

References

[1] Thomas D. Nadeau and Ken Gray, SDN: Software Defined Networks, O’Reilly Media, Inc., 2013.

