Not too long ago, computer hardware and software were bundled together. You picked your vendor (e.g., Univac, IBM, DEC, DG, etc.), and they provided the operating system and development tools. In these proprietary environments, you either developed your own applications or you selected from a few software vendors that developed application packages. Either way, the cost of these applications was huge, as the development and maintenance expense had to be spread over a relatively small base of users.
Enter the Personal Computer platform and operating systems that run on hardware provided by hundreds of companies. Operating system and development environment costs dropped dramatically, and the number of application developers increased substantially. Since the market for these applications now numbered in the millions, the cost of application software for common tasks dropped as well. These are the common applications in use today, such as Web browsers, office automation software, email tools, databases, and the like.
At the same time, a new breed of software led by the Open Source movement also entered the software calculus. These packages have become great bases for companies to develop their own software, incorporating their intellectual property in a cost-effective manner. The proliferation of this software, running on Microsoft, Apple, and Linux-based operating systems, has been dramatic, with operations like SourceForge enabling a huge developer community to organize and share code development.
Now, you ask, how is this tied back to flexible networking devices? When looking at the purchase of a computer, one asks: What applications can it run? What tools can I use to customize those applications? Who is out there that can program this computer to my needs? The purchase of a several-hundred-thousand-dollar high-performance switch or router fails on all of these points. These machines run one operating system, which is generally proprietary. They have one application image that is controlled by a single provider, and the only customizations allowed are those achievable by configuring essentially standard Internet protocol settings.
So, the bottom line is that you bought a supercomputer (albeit one with superior interface capabilities and I/O throughput) that you cannot program, cannot customize, and cannot use to provide additional value to your organization (or, if you are a service provider, to your commercial customer base).
Times may be changing. With high-performance backplanes and the availability of high-performance 1 GbE and 10 GbE interfaces, the time is upon us when the Open Source world can finally meet the network. Examples of this have recently hit the front page of the technology web world, with rumors that Google has developed its own 10 GbE switch. However, this is not new: leading network researchers such as Nick McKeown at Stanford University have been developing a flexible development environment that enables complete customization of networking devices, at virtually all layers, on an extremely cost-effective platform that still delivers line-speed performance.
Where does this take us? You may ask, why do I want the flexibility to program my network switch or router? Well, there are two basic reasons. The first is innovation. Network technology at Layers 2 and 3 is essentially stuck in an oligarchy driven by standards organizations that have slowed innovation to a crawl. Combined with closed network hardware platforms, this means that the oligarchs give and we as network users take. With a change in the landscape to an Open Source and open platform environment, innovation will take hold, and new concepts and approaches are sure to follow. A direct example of this is the proliferation of security-related Open Source software, such as Snort.
The second reason is that there remains much to be done in the innovation space. Once, the driving technology push for network devices such as routers was speed and number of interfaces. Now, security issues in the Information Technology space (that is, applications and data) are the driving issues. However, little has been done to connect the network to the application layer, because nobody is able to make changes in the routers that ought to participate in security. That is, the network has no real clue about the types of objects being transported or the identity of the source of data communication requests. All sorts of devices, from firewalls to the newer Network Access Controllers, are being designed and fielded, but the bottom line is that if we are going to solve network security issues, then the network has to be a full participant in the establishment and policing of data flows through the network. If you can't get into the boxes to make changes, innovation in this space can only come from edge devices, which may be limited in what they can achieve. We need openness to enable continued innovation in this space.
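To make this a little more concrete, here is a toy sketch in Python of the kind of decision a network element could make if it actually knew the identity behind a flow and the type of object being carried. This is purely an illustration of the idea, not any vendor's API or any existing system; the flow fields, policy table, and function names are all hypothetical.

```python
# Illustrative sketch only: what a flow-admission decision might look like if the
# network could see identity and object type, not just addresses and ports.
# All field names and policy entries below are hypothetical examples.

from dataclasses import dataclass

@dataclass
class FlowRequest:
    source_identity: str   # who is asking, e.g. "engineering/asmith" (hypothetical format)
    object_type: str       # what is being moved, e.g. "video" or "database-export"
    dest_port: int

# A toy policy table mapping (identity group, object type) to a decision.
POLICY = {
    ("engineering", "video"): "allow",
    ("guest", "database-export"): "deny",
}

def admit(flow: FlowRequest) -> str:
    """Return 'allow', 'deny', or a best-effort default for an incoming flow request."""
    group = flow.source_identity.split("/")[0]   # "guest/jdoe" -> "guest"
    return POLICY.get((group, flow.object_type), "allow-best-effort")

if __name__ == "__main__":
    print(admit(FlowRequest("guest/jdoe", "database-export", 443)))    # deny
    print(admit(FlowRequest("engineering/asmith", "video", 554)))      # allow
```

Today, decisions like this live almost entirely in edge appliances such as firewalls and access controllers, which is exactly the limitation described above: the boxes in the middle of the network cannot be changed to take part.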
More discussion and ideas on how to get the network to participate in real security (not just point-to-point tunnels) in another post.
There are rapid changes in the world of networking and information systems. This is true at many levels, starting from technology, running through applications, and on to the organizations that depend on information technology.
Sunday, December 2, 2007
Where do we go from here?
I have had the good fortune of being asked to participate in some of the early National Science Foundation GENI meetings. In preparing material for invited talks, many issues with the state of Internet technology started to become clearer to me and to some of the members of my team at Qwest Government Services.
First, of course, I had to start to understand the motivations behind the development of a new infrastructure to provide a foundation for networking research in the coming decade. Most of the time, those of us in the networking service provider business look at the need to increase the capacity of our networks. We worry about improving access to our customers through various means, such as building new fiber, leveraging copper, or the bold wireless world. There are many other issues, but staying on the capacity theme, most of the time we add capacity based on the architecture of the network that was designed several years ago. This is usually a good idea, as future demand increases usually look a lot like previous demand increases. This incrementalist approach is also fueled by the momentum of a deployed architecture. For most people that are not service providers, it is easy to miss that the cost of hardware is generally dwarfed by the cost of operations support systems, network management operations, and staff training.
So, why is this incremental approach in danger of becoming seriously incorrect? There are several reasons, but today we will touch on just two. First, what is happening on the demand side? It only takes a moment to look at the events of the day to understand much of what is changing. Let's put it into click terms. Once upon a time in the not too distant past, a click on the web would return kilobits of data, mostly text and some compressed graphics. Just yesterday, this was transformed by peer-to-peer services for music exchange: a click, and then a few megabytes. Now, this is all changing, and changing quickly. No longer does a click produce a defined event of a certain amount of data. Clicks now trigger multi-megabit-per-second video distribution such as YouTube and on-demand downloads of television shows and feature-length movies. They also establish video and audio conferences that, once set up, stay on for hours at a time. In short, the computer that once was powered on only when dialed up to the Internet is now an appliance that is on 24 hours a day and soon to be engaged in Internet activities of similar duration.
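To put rough numbers on the "click" comparison, here is a little back-of-the-envelope calculation. The page size, file size, and video rate below are illustrative assumptions on my part, not measurements.

```python
# Rough arithmetic for the changing size of a "click". Sizes and rates are assumptions.

KB = 1_000
MB = 1_000_000

text_page = 50 * KB                   # a mostly-text web page with light graphics
mp3_download = 4 * MB                 # a single peer-to-peer music file
video_hour = (2_000_000 / 8) * 3600   # one hour of a 2 Mbit/s video stream, in bytes

print(f"text page      : {text_page / KB:10,.0f} KB")
print(f"music download : {mp3_download / MB:10,.1f} MB")
print(f"hour of video  : {video_hour / MB:10,.0f} MB")
print(f"video vs. text : {video_hour / text_page:10,.0f}x more data per 'click'")
```

On those assumptions, a single hour of streamed video moves on the order of 18,000 times the data of the old text-page click, and the session stays up the whole time.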
This first issue really drives the second: the assumption that increases in service provider backbone bandwidth can be added using the same incremental model that has been used in the past. This is not to say there have not been significant improvements in the technology used to provide the Internet. Moving to more cost-effective wavelength transport services for the bandwidth between core routers has certainly improved the basic cost. However, the basic approach of adding layer upon layer of backbone bandwidth to core routers, raising the bandwidth boat for all customers on the network regardless of the type of application (e.g., video distribution, VoIP, etc.), means that the network must grow everywhere to deal with this increased demand.
So, with an estimated compounded annual growth rate of approximately 70% for Internet services, the cost associated with incremental bandwidth approaches may not work. You may ask why this does not work, and the answer (at least as it looks today) is that the business model of the Internet does not drive adequate money into the infrastructure necessary to support this rapid growth (although it may for a time). What is certainly true is that virtually all major telecommunications customers expect prices for high-capacity services to decrease over time. Thus, service providers either have to increase the prices of their services, find a way to derive revenue from the application side of the Internet (e.g., Google, etc.), or prepare to radically change the technology and architecture of their Internet infrastructure to provide more for less capital and operations cost.
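A quick sketch of why this compounding is so uncomfortable: assume demand grows at the roughly 70% annual rate mentioned above while the unit price customers pay declines each year. The 20% annual price decline used here is an illustrative assumption, not a quoted figure.

```python
# Illustrative compounding: demand grows ~70%/year (from the post) while the unit
# price of capacity declines ~20%/year (an assumed figure for illustration).

traffic_growth = 1.70
price_decline = 0.80

traffic, unit_price = 1.0, 1.0
for year in range(1, 6):
    traffic *= traffic_growth
    unit_price *= price_decline
    revenue = traffic * unit_price   # revenue if all carried traffic is sold at the new unit price
    print(f"year {year}: traffic x{traffic:5.2f}, unit price x{unit_price:4.2f}, revenue x{revenue:5.2f}")
```

On these assumptions, after five years demand is up roughly 14x while revenue is up only about 4.7x; that gap is what has to be closed by cheaper technology, new revenue sources, or higher prices.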
Clearly, some blend of these options for increasing revenue and reducing costs will have to take place. However, if the Internet is going to continue to be the foundation for new business ideas, we have to start planning for these changes now, rather than playing catch-up and leaving the Internet as one big congested highway.