
DAY ONE: AN INTRODUCTION TO SOFTWARE DEFINED NETWORKS (SDN)


What is SDN? How did it get here? Where is it going?

by Thomas D. Nadeau and Dan Backman

Foreword

Welcome to this Day One eBook, an edition that's been enhanced to highlight its exciting subject matter. And to be sure, SDN is one of the most interesting technological shifts we've seen in a long time. This eBook introduces SDN to the lay reader: What is it? How did it evolve? Where is it going? Tom and Dan do an excellent job of covering the key concepts and framing the issues in front of the technology, with more detail and use cases coming in Tom's book from O'Reilly Media during the summer of 2013. I have a few thoughts about SDN's importance to the entire networking industry.

Michael Beesley, CTO, Platform Systems Division, Juniper Networks

Michael Beesley on SDN in different use cases.

What does SDN mean for the Service Provider?

What does SDN mean for the Enterprise?

What does SDN mean for the Core?

What does SDN mean for the Data Center?

Preface

The first part of this eBook is a brief history of networking. Sorry if you took part in any of it and don't need the review, but a brief history lesson helps when tackling a concept that gets a lot of hype. Dan and I are going to approach SDN on several levels. I created a timeline of pertinent networking events to bring us up to today, and Dan created a series of talks about the timeline and SDN in general. We brought in a few guests to offer their erudite forecasts and borrowed a few commercials and photographs to entertain you along the way. Thanks to our Editor-in-Chief for producing all the enhancements!

- Tom Nadeau


Text in this column consists of Tom's timeline of events leading up to SDN, some of which you may already know about or have experienced. The timeline provides a consistent backdrop for all the other things happening on each page. This timeline text follows the progression of the alphabetic folio at the top of the page; all pages have a numerical folio, as circled in the lower right-hand corner. The eBook has several video formats: Author Q&A, Whiteboard Tech Talks, and Guest Opinions. Dan is your video host, while Tom joins us from New Hampshire, courtesy of the network.

Copyright
© 2012 by Juniper Networks, Inc. All rights reserved. Juniper Networks, the Juniper Networks logo, and Junos are registered trademarks of Juniper Networks, Inc. in the United States and other countries. All other trademarks, service marks, registered trademarks, or registered service marks are the property of their respective owners. Opinions in this book do not necessarily align with the business strategies of Juniper Networks, Inc. The cityscape photographs in this book are works in the public domain at Wikimedia Commons. The authors thank the photographers for the opportunity to use and admire their photographs. The authors wish to thank the following individuals for their assistance and participation: Shane Amante, Michael Bushong, Nils Swart, Smita Deshpande, Michael Beesley, Nancy Koerbel, Wendy Cartee, Ken Gray, Rob Hays, Mike Marcellin, Patrick Ames, and our peers and management at Juniper Networks.
Published and Produced by Juniper Networks Books
Editor in Chief: Patrick Ames
Copyeditor: Nancy Koerbel

Version History: November 2012


#71001161-en www.juniper.net/books

SDN: Software Defined, Driven, and Programmable Networks, by Thomas D. Nadeau, will be published in 2013 by O'Reilly Media, as part of the Juniper Networks Technical Library.

Day One enhanced: An Introduction to Software Defined Networks (SDN)

by Thomas D. Nadeau and Dan Backman

If you thought the past decade bore witness to some fantastic technological achievements, wait for the twenty-teens. The decade from 2012 through 2022 promises to be one of those inflection points when the global populace and global network capacity meet. What exactly will happen is uncertain, but software defined networks (SDN), software driven networks, and programmable networks are sure to play a big part. But let's first look at a few events that happened in the previous decade and work our way up to today and the future.


It was only a few years ago that storage, computing, and network resources were intentionally kept physically and operationally separate from one another. Even the systems used to manage those resources were often physically separate. Applications that interacted with any of these resources, such as an operational monitoring system, were also kept at arm's length, often behind significantly involved access policies, systems, and procedures, all in the name of security. IT departments liked it this way. And really, it was only after the introduction of, and demand for, inexpensive computing power, storage, and networking in data center environments that organizations were forced to bring these different elements together. This paradigm shift also brought the applications that manage and operate these resources much closer together than ever before.

Data centers were originally designed to physically separate traditional computing elements (for example, PC servers), their associated storage, and the networks that interconnected them with client users. The computing power in these data centers became focused on specific server functionality, running applications such as mail servers, database servers, or other such widely used functionality, in order to serve desktop clients. Previously, those functions had often run on departmental servers scattered among the thousands (or more) of desktops within an enterprise organization, providing services dedicated only to local use. As time went on, the departmental servers migrated to the data center for a variety of reasons, the most important of which was facilitating ease of management. Of secondary importance was sharing those services with all enterprise users.

Q&A: Network Engineers and Programmability

Around ten years ago an interesting transformation took place. A company called VMware invented an interesting technology that allowed a host operating system, such as a popular Linux distribution, to execute one or more client operating systems (Windows, for example). VMware developed a small program that created a virtual environment synthesizing a real computing environment (for example, a virtual NIC, BIOS, sound adapter, and video). It then marshaled real resources between the virtual machines. This supervisory program was called a hypervisor.

How do you extend virtualization into the network?

More about hypervisors...

Tom Nadeau: Virtual Machines and SDN

Originally, VMware was designed for engineers who wanted to run Linux for most of their computing needs, and to run Windows (which was the corporate norm at the time) only for those situations that required that specialized operating system environment. When they were finished, they would simply close Windows as if it were another program, and continue on with Linux. This had the interesting effect of allowing one to treat the client operating system as if it were just a program consisting of a file (albeit a large one) that existed on the hard disk. That file could be manipulated the same as any other file; that is, it could be moved or copied to other machines and executed there as if it were running on the machine on which it was originally installed. Even more interestingly, the operating system could be paused without its knowing, essentially causing it to enter a state of suspended animation.
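
To make that concrete, here is a minimal sketch of pausing, resuming, and saving a guest using the libvirt Python bindings, one common way to drive a hypervisor today (not VMware's own API). The connection URI, the domain name win7-guest, and the file paths are illustrative assumptions.

    # Pause, resume, and save a running VM with the libvirt Python
    # bindings. The URI, domain name, and file paths are assumptions.
    import libvirt

    conn = libvirt.open('qemu:///system')   # connect to the local hypervisor
    dom = conn.lookupByName('win7-guest')   # find the running guest

    dom.suspend()   # pause the guest; it never notices
    dom.resume()    # wake it from suspended animation

    # Save the guest's entire state to an ordinary file. That file can
    # be copied to another machine and restored there, as described above.
    dom.save('/var/tmp/win7-guest.state')
    conn.restore('/var/tmp/win7-guest.state')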

Virtualized Servers

What about Big Data and SDN?



Section 1

Commercial Break
Crash

When will this be reality? 2022? 2018? 2014?


With the advent of operating system virtualization, the servers that typically ran a single, dedicated operating system, such as Microsoft Windows Server, and the applications specifically tailored for that operating system, could now be viewed as a ubiquitous computing and storage platform. With further advances and increases in memory, computing, and storage, data center compute servers were increasingly capable of executing a variety of operating systems simultaneously in a virtual environment. VMware expanded its single-host version to a more data-center-friendly environment capable of executing and controlling many hundreds or thousands of virtual machines from a single console. Operating systems such as Windows Server that previously occupied an entire bare-metal machine were now executed as virtual machines, each running whatever applications client users demanded. The only difference was that each was executing in its own self-contained container that could be paused, relocated, cloned, or copied (as a backup). Thus began the age of elastic computing.

I'm the physical object...

... and I'm the virtual container.


Within the elastic computing environment, operations departments were able to move servers to any physical data center location simply by pausing a virtual machine and copying a file. They could even spin up new virtual machines by simply cloning the same file and telling the hypervisor to execute it as a new instance. This flexibility allowed network operators to start optimizing data center resource location, and thus utilization, based on metrics such as power and cooling. By packing together all active machines, an operator could turn down cooling in another part of a data center by sleeping or idling entire banks or rows of physical machines, thus optimizing the cooling load on a data center. Similarly, an operator could move or dynamically expand computing, storage, or network resources to match geographical demand.

Elastic Computing

Is SDN the next big thing?



As with all advances in technology, this newly discovered flexibility in the operational deployment of computing, storage, and networking resources brought about a new problem: one of operational efficiency, both in terms of maximizing the utilization of storage and computing power and in terms of power and cooling. As mentioned earlier, network operators began to realize that, in general, computing power demand increased over time. To keep up with this demand, IT departments, which typically budget on a yearly basis, would pre-order all the equipment they predicted would be needed for the following year. However, once this equipment arrived and was placed in racks, it would consume power, cooling, and space resources even if it was not yet used! This dilemma was discovered early on at Amazon. At the time, Amazon's business was growing at a hockey-stick rate, doubling every six to nine months. As a result, its infrastructure had to stay ahead of demand for the computing services that served its retail ordering, stock and warehouse management systems, and internal IT systems. Amazon's IT department was thus forced to pre-order large quantities of storage, network, and computing resources in advance, but faced the dilemma of having that equipment sit idle until demand caught up with those resources. Amazon Web Services (AWS) was invented as a way to commercialize this unused resource pool so that it would be utilized at a rate closer to 100%. When internal systems needed more resources, external compute users would simply be pushed off; when internal systems did not need the resources, external compute users could use up the spare capacity. Some call this elastic computing services; this eBook calls it hyper virtualization.

Changing the Amount of Friction

It was only when companies like Amazon, Rackspace, and others, who were buying storage and computing in huge quantities for pricing efficiency, realized they were not efficiently utilizing all of their computing and storage that they began to resell their spare computing power and storage to external users in an effort to recoup some of their capital investments. This gave rise to the multi-tenant data center, which, of course, created a new problem: how do you separate potentially thousands of tenants whose resources need to be arbitrarily spread across the virtual machines of different physical data centers?

Data Center Multi-Tenancy

Tom Nadeau: Separating Data Center Tenants


Another way to understand this dilemma is to note that during the move to hyper virtualized environments, execution environments were generally run by a single enterprise or organization. That is, the organization typically owned and operated all of the computing and storage (although some rented co-location space) as if it were a single, flat, local area network (LAN) interconnecting a large number of virtual or physical machines and network-attached storage. (The exception was in financial institutions, where regulatory requirements mandated separation.) However, the number of departments in these cases was relatively small (fewer than 100), and so this problem was easily solved using existing tools such as MPLS Layer 2 or Layer 3 VPNs. In both cases, though, the network components that linked all of the computing and storage resources together were rather simplistic: generally a flat Ethernet LAN that connected all of the physical and virtual machines. Most of these environments assigned IP addresses to all of the devices (virtual or physical) in the network from a single network (perhaps with IP subnets), as a single enterprise, because they owned the machines and needed access to them. This also meant that it was generally not a problem moving virtual machines between different data centers located within that enterprise, again, because they all fell within the same routed domain and could reach each other regardless of physical location.

Will SDN change the economics of networking?

In a multi-tenant data center, computing, storage, and network resources can be offered in slices that are independent of, or isolated from, one another. It is, in fact, critical that they are kept separate. This poses some interesting challenges that were not present in the single-tenant data center environment of the past. Keep in mind that this environment allowed for the execution of any number of operating systems, and applications on top of those operating systems, but each needed a unique network address if it was to be accessed by its owner or by other external users such as a customer. In the past, addresses could be assigned from a single, internal block of possibly private addresses and easily routed internally. Now, however, you needed to assign unique, externally routable, and accessible addresses. Furthermore, consider that each virtual machine in question had a unique Layer 2 address as well. When a router delivers a packet, it ultimately has to deliver it using Ethernet (not just IP). This is generally not an issue until you consider virtual machine mobility (VM mobility). In these cases, virtual machines are relocated from their current physical location to another, possibly distant, one for power, cooling, or compute-compaction reasons. Herein lies the rub, because physical relocation means physical address relocation. It also means possible changes to Layer 3 routing in order to ensure that packets destined for that machine at its original location can now be delivered to its new location.

What happens when networks start moving?
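
To make the Layer 2 half of this problem concrete: a hypervisor typically broadcasts a gratuitous ARP after a move, so switches relearn which segment the virtual machine's MAC address now lives on. Here is a minimal sketch using the scapy library; the addresses and interface name are illustrative assumptions. Note that this only repairs local Ethernet reachability; the Layer 3 routing change described above is the harder problem.

    # Emit the gratuitous ARP a hypervisor sends after VM relocation so
    # switches relearn the VM's MAC. Addresses and interface are assumptions.
    from scapy.all import Ether, ARP, sendp

    vm_mac = "52:54:00:12:34:56"
    vm_ip = "10.0.0.42"

    garp = Ether(src=vm_mac, dst="ff:ff:ff:ff:ff:ff") / ARP(
        op=2,                                # unsolicited ARP reply
        hwsrc=vm_mac, psrc=vm_ip,            # the VM's MAC and IP
        hwdst="ff:ff:ff:ff:ff:ff", pdst=vm_ip,
    )
    sendp(garp, iface="eth0")                # broadcast on the new segment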


At the same time data centers were evolving, network equipment seemed to stand still in terms of innovation beyond feeds and speeds. That is, beyond the steady increase in switch fabric capacities and interface speeds, data communications had not evolved much since the advent of IP, MPLS, and mobile technologies. IP and MPLS allowed a network operator to create networks and virtual network overlays on top of those base networks, in much the same way as data center operators were able to create virtual machines to run over physical ones with the advent of computing virtualization. Network virtualization was generally referred to as virtual private networks (VPNs), and came in a number of flavors: point-to-point (for example, a personal VPN that you might run on your laptop to connect to your corporate network); Layer 3 VPNs (which virtualized an IP or routed network, allowing a network operator to securely host an enterprise in a manner that isolated its traffic from that of another enterprise); and Layer 2 VPNs (switched network virtualization that isolated traffic in a manner similar to a Layer 3 VPN, except that the addresses used were Ethernet addresses).

Automating the Workflow

How important is Open Source in SDN?


Commercial routers and switches typically come with management interfaces that allow a network operator to configure and otherwise manage these devices. Some examples of management interfaces are a command line interface (CLI), XML/Netconf, a web graphical user interface (GUI), or the Simple Network Management Protocol (SNMP). While many of these interfaces allow an operator suitable access to a device's capabilities, they still often hide the lowest level of details from the operator. For example, network operators can program static routes or other static forwarding entries, but those requests are ultimately passed through the device's operating system. This is generally not a problem until one wants to program using the syntax or semantics of functionality that does not exist in a device. If someone wishes to experiment with some new routing protocol, it's not possible on a device whose firmware has not been written to support that protocol. In such cases, it was customary for a customer to make a feature enhancement request of a device vendor, and then wait some amount of time for results (several years was not out of the ordinary).
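
As a concrete illustration of such an interface, here is a minimal sketch of programming a static route over NETCONF using the ncclient Python library against a Junos device. The host, credentials, and prefixes are illustrative placeholders. Note that the request is still mediated by the device's operating system, exactly as described above.

    # Program a static route over NETCONF with ncclient, assuming a
    # Junos device. Host, credentials, and prefixes are placeholders.
    from ncclient import manager

    ROUTE = """
    <config>
      <configuration>
        <routing-options>
          <static>
            <route>
              <name>10.1.1.0/24</name>
              <next-hop>192.0.2.1</next-hop>
            </route>
          </static>
        </routing-options>
      </configuration>
    </config>
    """

    with manager.connect(host="198.51.100.5", port=830,
                         username="admin", password="secret",
                         hostkey_verify=False) as m:
        m.edit_config(target="candidate", config=ROUTE)  # stage the change
        m.commit()  # the device's OS still validates and applies it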

Is SDN a disruptive force?


At the same time, the concept of a logically centralized control plane came back onto the scene. A network device is composed of a data plane, often a switch fabric connecting the various network ports on the device, and a control plane, which is the brains of the device. For example, routing protocols that are used to construct loop-free paths within a network are most often implemented in a distributed manner: each device in the network has a control plane that implements the protocol, and these control planes communicate with each other to coordinate network path construction. In a centralized control plane paradigm, however, a single (or at least logically single) control plane would exist. This uber-brain would push commands to each device, commanding it to manipulate its physical switching and routing hardware. It is important to note that while the hardware that executed the data planes of devices remained quite specialized, and thus expensive, the control plane continued to gravitate toward less and less expensive general-purpose computing, such as the central processing units produced by Intel.

Guest Audio Interview: Shane Amante

Shane Amante, Level 3 Communications, Backbone - IP Architecture Group

Q&A: Define SDN-like.


All of the aforementioned concepts are important, as they created the nucleus of motivation for what came to be called Software Defined Networking (SDN). Early proponents of SDN saw that network device vendors were not meeting their needs, particularly in the feature development and innovation spaces. Vendors were also viewed as highly overpriced, at least for the control plane components of their devices. At the same time, proponents saw the cost of raw, elastic computing power rapidly diminishing to the point where having thousands of processors at one's disposal was a reality. It was then realized that this processing power could be harnessed to run a logically centralized control plane, potentially even driving inexpensive, commodity-priced switching hardware. A few engineers from Stanford University created a protocol called OpenFlow that could be implemented in just such a configuration. OpenFlow was architected so that a number of devices containing only data planes could respond to commands sent to them from a (logically) centralized controller that housed the single control plane for that network. The controller was responsible for maintaining all of the network paths, as well as programming each of the network devices it controlled. The commands and the responses to those commands are described in the OpenFlow protocol. It is worth noting that the Open Networking Foundation (ONF) commercially supported the SDN effort and today remains its central standardization authority and marketing organization. Based on the basic architecture just described, one can now imagine how quickly and easily one could devise a new networking protocol by simply implementing it within a data center on commodity-priced hardware. Even better, one could implement it in an elastic computing environment, in a virtual machine.

Michael Beesley Keynote, Tokyo Interop 2012
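
To give a feel for this architecture, here is a minimal sketch of a controller application written with the open-source Ryu framework, one of several OpenFlow controllers. When a data-plane-only switch connects, the app pushes a single flow entry; the prefix and output port are illustrative assumptions.

    # A tiny OpenFlow 1.3 controller app in Ryu: when a switch connects,
    # install one flow sending traffic for 10.1.1.0/24 out port 2.
    # The prefix and port number are illustrative assumptions.
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class OneFlow(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def on_switch_connect(self, ev):
            dp = ev.msg.datapath          # the data-plane-only device
            ofp = dp.ofproto
            parser = dp.ofproto_parser
            match = parser.OFPMatch(eth_type=0x0800,
                                    ipv4_dst=("10.1.1.0", "255.255.255.0"))
            actions = [parser.OFPActionOutput(2)]
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                                 actions)]
            # The controller, not the switch, decides the forwarding state.
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                          match=match, instructions=inst))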


It is interesting to observe that at least one major part of what SDN and OpenFlow proponents are trying to achieve is greater and more flexible network device programmability. This does not necessarily have anything to do with the location of the network control and data planes; it is concerned, however, with how they are programmed. Do not forget that one of the motivations for creating SDN and OpenFlow was the flexibility of how one could program a network device, not just where it is programmed. If one observes the SDN architecture described above, both of those questions are addressed. The next question is whether the programmability aspect is the optimal choice.

What's the role of OpenFlow in SDN?

Q&A: I have a job running a network. Should I be worried?


Juniper Networks has recently spearheaded a critical new effort around network programmability called the Interface to the Routing System (IRS). A number of folks inside Juniper, including Alia Atlas, Bruno Rijsman, Hannes Gredler, and myself, have contributed to a number of IETF drafts, including the primary requirements and framework drafts. In the near future at least a dozen drafts around this topic should appear online. Clearly there is great interest in this effort. The basic idea of IRS is to create a protocol and components that act as a means of programming a network device's Routing Information Base (RIB), using a fast-path protocol that allows a quick cut-through of provisioning operations in order to permit real-time interaction with the RIB and the RIB manager that controls it. Previously, the only access one had to the RIB was via the device's configuration system (in Juniper's case, Netconf or SNMP).

The key elements of SDN.


The key to understanding IRS is that it is most definitely not just another provisioning protocol; that's because there are a number of other key concepts that comprise an entire solution to the overarching problem of speeding up the feedback loop between network elements, network programming, state and statistical gathering, and post-processing analytics. Today this loop is painfully slow. Those involved in IRS believe the key to the future of programmable networks lies in optimizing this loop.
IRS SLIDE DECK by Tom Nadeau

Q&A: What are the differences between OpenFlow and IRS?


Section 2

Commercial Break
Crowd Casting

Software defined, driven, and programmable networks can rock.



To this end, IRS provides varying levels of abstraction in terms of programmability of network paths, policies, and port configuration, but in all cases it has the advantage of allowing adult supervision of said programming as a means of checking commands prior to committing them. For example, some protocols exist today for programming at the hardware abstraction layer (HAL), which is far too granular or detailed for efficient network operation and, in fact, places undue burden on a network's operational systems. Another example is providing OSS applications quick and optimal access to the RIB in order to rapidly program changes, witness the results, and then quickly reprogram in order to optimize the network's behavior. One key aspect of all of these examples is that the discourse between the applications and the RIB occurs via the RIB manager. This is important, as many operators would like to preserve their operational and workflow investment in the routing protocol intelligence that exists in Junos, while leveraging this new and useful programmability paradigm to allow additional levels of optimization in their networks.
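
Since IRS is still at the draft stage, no client library exists; the following is a purely hypothetical Python sketch of the program-observe-reprogram loop described above, with every name invented for illustration. The point is the shape of the loop: validation before commit, always mediated by the RIB manager.

    # Hypothetical sketch of an IRS-style feedback loop. No such client
    # library exists; every class and method name here is invented.
    class RibManager:
        """Stand-in for the RIB manager that supervises all programming."""
        def __init__(self):
            self.rib = {}

        def validate(self, prefix, next_hop):
            # "Adult supervision": reject bad state before committing it.
            return next_hop != "0.0.0.0"

        def program(self, prefix, next_hop):
            if not self.validate(prefix, next_hop):
                raise ValueError("rejected by RIB manager policy")
            self.rib[prefix] = next_hop

        def observe(self, prefix):
            # A real system would return live forwarding statistics here.
            return {"prefix": prefix, "packets": 0}

    mgr = RibManager()
    mgr.program("10.1.1.0/24", "192.0.2.1")     # fast-path provisioning
    stats = mgr.observe("10.1.1.0/24")          # witness the results
    if stats["packets"] == 0:
        mgr.program("10.1.1.0/24", "192.0.2.2") # quickly reprogram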

Is Juniper Networks positioned for success in SDN?



IRS also lends itself well to a growing desire to logically centralize routing and path decisions and programmability. The protocol has requirements to run on a device or outside of a device. In this way, centralized controller functionality is embraced in cases where it is desired; however, in cases where more classic distributed control is preferred, IRS is able to support those as well.

Guest Opinion: Michael Beesley

CTO, Platform Systems Division, Juniper Networks

SDN means evolving the network.

Tom Nadeau: Where does the control plane exist?


Another key sub-component of IRS is a normalized and abstracted topology. A common and extensible object model will represent this topology, and the service allows for the exposure of multiple abstractions of topological representation. A key aspect of this model is that non-routers (or non-routing-protocol speakers) can more easily manipulate and change the RIB state going forward. Today, non-routers have difficulty getting at this information. Going forward, applications such as network management/OSS, analytics, or other applications we can't yet envision will be able to interact quickly and efficiently with routing state and network topology.
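
As a rough illustration of what such a normalized topology model might look like, here is a small invented sketch in Python. The class and field names are assumptions for illustration only; they are not the model defined in the IETF drafts.

    # An invented sketch of a normalized, extensible topology model of
    # the kind described above; not the IETF drafts' actual model.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        node_id: str
        attributes: dict = field(default_factory=dict)  # extensible per-node data

    @dataclass
    class Link:
        src: str         # node_id of one endpoint
        dst: str         # node_id of the other endpoint
        metric: int = 1  # abstracted cost: latency, bandwidth, policy, etc.

    @dataclass
    class Topology:
        nodes: list[Node] = field(default_factory=list)
        links: list[Link] = field(default_factory=list)

        def neighbors(self, node_id: str) -> list[str]:
            """One simple query a non-router application might ask."""
            return ([l.dst for l in self.links if l.src == node_id] +
                    [l.src for l in self.links if l.dst == node_id])

    # An analytics or OSS application could consume this abstraction
    # without speaking any routing protocol itself.
    topo = Topology(nodes=[Node("r1"), Node("r2")], links=[Link("r1", "r2")])
    print(topo.neighbors("r1"))  # ['r2']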

What's the future of SDN?

Tom Nadeau: The interface to the RIB.


So, as you can see, Software Defined, Driven, and Programmable Networks come with a rich and complex historical lineage, a set of challenges, and a variety of solutions to those problems. It is the success of the technologies that preceded Software Defined, Driven, and Programmable Networks that makes technology based on those advances possible. The fact of the matter is that most of the world's networks, including the Internet, operate on a basis of IP, BGP, MPLS, and Ethernet. Virtualization technology today is based on the technologies pioneered by VMware years ago, which continue to be the basis for its products and others. Network-attached storage enjoys a similarly rich history.

Guest Opinion: Rob Hays

General Manager, Datacenter Strategic Planning, Intel Corporation

Guest Opinion: Ken Gray

IRS has a similar future, solving the problems of network, compute, and storage virtualization, as well as the programmability, accessibility, location, and relocation of the applications that work within these hyper-virtualized environments.
Senior Director, Office of the CTO, Platform Systems Division, Juniper Networks

Resources
Further Reading and Resources:
http://en.wikipedia.org/wiki/Software-defined_networking
http://opennetsummit.org
http://tiny-tera.stanford.edu/~nickm/talks/index.html
http://www.youtube.com/watch?v=PbFaokQIREw
http://www.juniper.net/us/en/local/pdf/solutionbriefs/3510473-en.pdf

Blogs and Discussions:


http://www.sdncentral.com
http://forums.juniper.net/t5/var-blog-Michael-Bushong/bg-p/varblog
http://forums.juniper.net/t5/OpenFlow-and-other-network/bg-p/OpenFlow
http://forums.juniper.net/t5/Occupy-SDN-and-Programmability/bg-p/OccupySDN

IETF Internet-Drafts on IRS:


http://datatracker.ietf.org/doc/draft-atlas-irs-problem-statement/
http://datatracker.ietf.org/doc/draft-ward-irs-framework/
http://datatracker.ietf.org/doc/draft-dimitri-irs-arch-frame/
http://datatracker.ietf.org/doc/draft-rfernando-irs-framework-requirement
http://datatracker.ietf.org/doc/draft-atlas-irs-policy-framework/
http://datatracker.ietf.org/doc/draft-white-irs-use-case/
http://datatracker.ietf.org/doc/draft-amante-irs-topology-use-cases/
http://datatracker.ietf.org/doc/draft-medved-irs-topology-requirements/
http://datatracker.ietf.org/doc/draft-keyupate-irs-bgp-usecases/

