Saturday, March 29, 2008

Earth Hour for the Internet

Yesterday was the first Earth Hour, a worldwide energy conservation effort in which we are supposed to reduce energy consumption by turning off the lights for one hour. There has been talk about the Internet being part of the solution for saving the planet (e.g., by enabling telecommuting), but is the infrastructure of the Internet really following "Green" principles?

Let's explore some of the places where energy is used in creating Internet services and get a view of how green they are relative to where they could be. First, there is the optical transport layer. This is the layer that provides the "wavelengths" used to create the backbone of the Internet, as well as the wavelengths used to create optical rings for high-availability SONET-based private lines and the Ethernet circuits needed to backhaul customer traffic to Internet edge routers.

Efficient use of power for these systems has always been a design goal and customer criterion. The reason for this is quite simple. Components of these systems (e.g., terminals and optical amplifiers) are located at hundreds of facilities that are in many cases literally in the middle of nowhere (and in some cases, 60 miles from nowhere on the way to nowhere). Controlling power consumption is critical to creating high-availability services, as the power needed by these systems determines the maximum run time on batteries and generators.

The next place, and the place that today uses more and more power, is the data layer. Whereas optical transport systems may have power consumption of around two to four kilowatts per rack, high-end core routers have power consumption in the range of 10 kilowatts per rack. Of course, it is not only the power that matters. Virtually every kilowatt of power input to a router becomes heat that must be removed using HVAC systems that use yet more power. Today, the leading router vendors offer routers with approximately 100 10 Gbps ports per rack, yielding about 100 watts per port.
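The per-port figure above is just the rack's electrical draw divided across its ports. A quick sketch in Python, using the post's rough figures (~10 kW per rack, ~100 ports), not vendor measurements:

```python
def watts_per_port(rack_watts: float, ports: int) -> float:
    """Average electrical draw per port for a fully populated rack."""
    return rack_watts / ports

# Post's rough figures: ~10 kW per core-router rack, ~100 10 Gbps ports.
print(watts_per_port(10_000, 100))  # 100.0 W per 10G port
```

Note that this excludes the HVAC load needed to remove that same power as heat, so the true energy cost per port is higher.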

With the rapid growth of the Internet, each additional 10G of cross-sectional bandwidth of the network consumes a significant amount of power. Are we doing everything that can be done to reduce the power consumption of the Internet?

First, let's look at the optical transport system. The major power consumption in the optical transport system comes from the amplification that takes place roughly every 100 kilometers. Technologies exist today that can reduce the number of amplifiers. These include new optical fibers with lower attenuation, leading to less span loss and making it possible to skip existing amplifier locations.

Second, let's examine the core of an Internet backbone network. For most Tier 1 providers it comprises approximately 20 core locations throughout the United States. To add 10 Gbps of backbone bandwidth across the network, while retaining the typical goal of keeping any route through the network to at most three core routers, you have to add on the order of 100 backbone circuits. This means over 200 ports need to be added, or over 20 kilowatts of power (not including cooling requirements). With the explosive growth of Internet traffic (discussed in a previous post), power consumption is growing right along with traffic. With traffic growing today at over a 50% compounded annual growth rate, we are talking about a hot topic.
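The 20-kilowatt figure follows directly from the circuit count: each backbone circuit terminates on a port at both ends. A back-of-the-envelope sketch, again using the post's rough numbers:

```python
def upgrade_power_kw(circuits: int, watts_per_port: float,
                     ports_per_circuit: int = 2) -> float:
    """Electrical load (kW) added by a backbone capacity upgrade,
    excluding cooling. Each circuit terminates on two router ports."""
    return circuits * ports_per_circuit * watts_per_port / 1000

# ~100 backbone circuits per 10G cross-sectional upgrade, ~100 W/port.
print(upgrade_power_kw(100, 100))  # 20.0 kW
```

Since nearly every one of those kilowatts becomes heat, the real facility load including HVAC is substantially larger.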

As with optical transport, there are possibilities for reducing the power needed in the Internet core. Most carriers today use full-blown routers (e.g., Cisco CRS-1 and Juniper T-series) to provide backbone MPLS switching and IP routing services. The general reason for this is that these platforms have the significant features and proven reliability needed to create a robust, highly available network.

The obvious question is whether there is another core architecture that can provide the same backbone capabilities, but do it with less power. The short answer is yes, but the longer answer still requires some additional evaluation. One approach is to use Ethernet switching instead of router-based MPLS. From a power perspective, some of these devices use less than 20 watts per 10G port, or approximately 80% less electricity than a full-blown router.
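The 80% figure is simply the ratio of the two per-port numbers quoted above. Spelled out, with the post's approximate values:

```python
router_watts_per_port = 100   # full-blown core router (~10 kW / ~100 ports)
switch_watts_per_port = 20    # some Ethernet switches, per the post

savings = 1 - switch_watts_per_port / router_watts_per_port
print(f"{savings:.0%}")  # 80%
```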

However, there is no free lunch. There are reasons that high-performance routers have been used instead of Ethernet switches. These include technical features, operational issues, and robustness. Technical limitations include restricted support for complex access control lists and rate limiting, which are tools commonly used to protect network element control planes. Operational issues include the lack of comprehensive Ethernet OAM tools, making it difficult to perform fault detection and isolation, and to identify the root cause of poor performance. Finally, using Ethernet switches still requires backbone protection mechanisms that ensure high-availability backbone services. Today, much of this is done via MPLS Fast Re-Route, and there are Ethernet switches that provide this capability. There are other protection mechanisms, but they are either not robust enough or not proven on a nationwide scale. Other important features, such as hitless software and hardware upgrades, need improvement.

Finally, BusinessWeek in its March 20, 2008 issue has a detailed article (also commented on by Bill St. Arnaud) about the issue of powering and cooling data centers. It is in these data centers that the applications we know and love, such as eBay, Google, YouTube, Yahoo, and others, find their life. Finding "Green" locations, such as Iceland with its geothermal power, and technologies to reduce power consumption are clearly on the minds of corporate executives eager to reduce costs and generate a little positive PR at the same time.

As is apparently evident in the data center business, perhaps the most important question is whether there is an economic advantage for the major Internet providers to move toward more power-efficient transport of IP packets. There is always a significant amount of organizational and technical inertia that keeps network providers from radically changing their approach. However, with the cost of energy increasing and the rapid growth in Internet demand, the additional capital investment needed to keep pace may open up a significant opportunity to move toward both greener technology and greener architectures.

Monday, February 25, 2008

Bandwidth-On-Demand, What's up?

There has been a lively debate lately on the benefits or fallacy of Bandwidth-on-Demand (BoD) schemes. The bottom line is that this is a debate that has been going on since the beginning of time for all telecommunications services, and it comes right down to a simple proposition: what fraction of the system bandwidth is needed to meet the customer's requirement?

Let's dissect what this means. Once upon a time, the telephone represented the peak in technology for providing communications between two points, and initially this represented a kind of BoD, as a customer could request, first through an operator and then via a "dial", to have a circuit set up between two points. We then moved on to data communications technologies such as ISDN, X.25, and Frame Relay. Along with the Public Switched Telephone Network, these technologies enabled customers to set up defined bandwidth between customer end-points.

The reason for this allocation is straightforward: the desired allocation by a customer represented a significant fraction of the available system bandwidth. Because of this, any purely statistical best-effort system would lead to unacceptable performance for the customer. The idea of a traffic contract and end-to-end allocation of guaranteed bandwidth made it much easier to convince the customer that an end-to-end dedicated circuit, for example a switched T1 or nx64 kbps ISDN circuit, was not necessary to ensure their application would work.

The Internet, for the most part, reflects a different approach, which is to kill the problem with backbone bandwidth and ensure that user requirements are a small fraction of the network's capabilities. This all worked, except that the Research and Education community has traditionally been the driving force not only in technology, but also in bandwidth use. It appeared, until the last couple of years, that the R&E community would continue this role, with networks and application requirements that spanned 20 to 40 Gbps of nationwide cross-sectional bandwidth. As I stated in my earlier posts, the Internet bust is now over, and Qwest, as well as other carriers, is now faced with Internet bandwidth requirements that are growing at 50% or more per year. So, what stagnated for years as a typical Tier 1 20 Gbps nationwide backbone is now adding that much capacity every month, if not significantly more - and it is accelerating.
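To see how quickly 50% annual growth compounds, here is a small sketch projecting a backbone from the 20 Gbps baseline mentioned above (illustrative figures only, not carrier data):

```python
def projected_gbps(base_gbps: float, cagr: float, years: float) -> float:
    """Capacity after compounding annual growth at rate `cagr`."""
    return base_gbps * (1 + cagr) ** years

# A 20 Gbps backbone growing at 50% per year:
for years in (1, 3, 5):
    print(years, projected_gbps(20, 0.5, years))
# 1 -> 30.0, 3 -> 67.5, 5 -> 151.875 Gbps
```

At that rate the backbone roughly septuples in five years, which is why the monthly capacity adds quickly dwarf the old steady-state network.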

What this all means is that public network infrastructure is growing at a huge compounded rate, and unless the R&E community's requirements are growing at the same rate, the fraction of the commercial network infrastructure's resources that R&E networks occupy will rapidly diminish. In fact, if this continues, then R&E networks will be a set of large but relatively common customers. This is true at both the optical and data layers.
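The diminishing-fraction argument can be made concrete. If commercial capacity compounds at 50% per year while R&E demand stays flat, R&E's share of that capacity shrinks by a factor of 1.5 each year (a stylized model, not a forecast):

```python
def re_share(initial_share: float, commercial_cagr: float, years: int) -> float:
    """R&E's fraction of commercial capacity, assuming flat R&E demand
    while the commercial network compounds at `commercial_cagr` per year."""
    return initial_share / (1 + commercial_cagr) ** years

# Whatever share R&E holds today falls to ~13% of that in five years:
print(round(re_share(1.0, 0.5, 5), 3))  # 0.132
```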

So, the bottom line is that commercial bandwidth capabilities are exploding, and at some point in the near future R&E traffic may be only a large bump in the road, leading to the conclusion that BoD (or switched multi-gigabit pipes) is not necessary. However, this is not the last word, as there are reasons other than cost that drive the creation of special network services.

As I do with our customers, it is up to the members of the R&E community to identify these special requirements and make an argument about why commercial network technology, or services, is not keeping pace with their needs. There are many potential reasons, but over time the threshold for a real difference may continue to rise.