Monday, February 25, 2008

Bandwidth-on-Demand: What's Up?

There has been a lively debate lately about the merits, or fallacy, of Bandwidth-on-Demand (BoD) schemes. The bottom line is that this debate has been going on since the beginning of time for all telecommunications services, and it comes down to a simple question: what fraction of the total system bandwidth is needed to meet a single customer's requirement?

Let's dissect what this means. Once upon a time, the telephone represented the peak of technology for providing communications between two points, and it initially represented a kind of BoD: a customer could request, first through an operator and later via a dial, that a circuit be set up between two points. We then moved on to data communications technologies such as ISDN, X.25, and Frame Relay. Along with the Public Switched Telephone Network, these technologies enabled customers to set up defined bandwidth between customer endpoints.

The reason for this allocation is straightforward: the bandwidth desired by a customer represented a significant fraction of the available system bandwidth. Because of this, any purely statistical, best-effort system would deliver unacceptable performance to the customer. The idea of a traffic contract and an end-to-end allocation of guaranteed bandwidth made it much easier to convince the customer that a dedicated end-to-end circuit, for example a switched T1 or an n×64 kbps ISDN circuit, was not necessary to ensure their application would work.
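To make the "fraction of system bandwidth" point concrete, here is a minimal sketch in Python, assuming a simple binomial on/off traffic model (all numbers are illustrative, not from any real network): each customer is independently active with some probability and, when active, sends at a peak rate that is a fixed fraction of the link capacity.

    # Minimal sketch of statistical multiplexing under an assumed binomial
    # on/off model; all parameters below are illustrative.
    from math import comb

    def overflow_prob(n_customers, p_active, peak_fraction):
        # Link capacity is normalized to 1.0; each active customer demands
        # peak_fraction of it, independently with probability p_active.
        # Returns P(aggregate peak demand exceeds link capacity).
        fits = int(1.0 / peak_fraction + 1e-9)  # customers that fit at once
        return sum(
            comb(n_customers, k) * p_active**k * (1 - p_active)**(n_customers - k)
            for k in range(fits + 1, n_customers + 1)
        )

    # Many small customers: each peak is 1% of capacity -> overflow negligible.
    print(overflow_prob(150, 0.3, 0.01))  # effectively zero
    # A few large customers: each peak is 25% of capacity -> frequent overflow.
    print(overflow_prob(10, 0.3, 0.25))   # about 0.15

Under these assumed numbers, best-effort sharing is harmless when every flow is a sliver of the pipe but fails roughly one time in seven when each flow is a quarter of it, which is exactly the regime where traffic contracts and guaranteed allocations earn their keep.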

The Internet, for the most part, reflects a different approach: kill the problem with backbone bandwidth and ensure that user requirements are a small fraction of the network's capacity. This all worked, except that the Research and Education (R&E) community has traditionally been the driving force not only in technology but also in bandwidth use. It appeared, until the last couple of years, that the R&E community would continue in this role, with networks and application requirements that spanned 20 to 40 Gbps of nationwide cross-sectional bandwidth. As I stated in my earlier posts, the Internet bust is now over, and Qwest, as well as other carriers, now face Internet bandwidth requirements that are growing at 50% or more per year. So, what stagnated for years as a typical 20 Gbps Tier 1 nationwide backbone is now adding that much capacity every month, if not significantly more, and the growth is accelerating.

What this all means is that public network infrastructure is growing at a huge compounded rate, and unless the R&E community's requirements grow at the same rate, the fraction of the commercial network infrastructure that R&E networks occupy will rapidly diminish. In fact, if this continues, R&E networks will become just a set of large but relatively ordinary customers. This is true at both the optical and data layers.
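A quick back-of-the-envelope illustration of that shrinking fraction, taking the roughly 50%-per-year growth figure from above and assuming, purely for illustration, a 20 Gbps commercial backbone today against a flat 40 Gbps R&E requirement:

    # Back-of-the-envelope: a flat R&E requirement as a fraction of a
    # commercial backbone compounding at ~50% per year. The growth rate is
    # from the post; the starting capacities are assumed for illustration.
    COMMERCIAL_GBPS = 20.0  # assumed Tier 1 cross-sectional capacity, year 0
    RE_GBPS = 40.0          # assumed R&E requirement, held flat
    GROWTH = 1.5            # ~50% per year

    for year in range(0, 11, 2):
        capacity = COMMERCIAL_GBPS * GROWTH**year
        print(f"year {year:2d}: commercial {capacity:7.0f} Gbps, "
              f"R&E share {RE_GBPS / capacity:7.1%}")

At 50% compounded growth, capacity roughly doubles every 1.7 years, so even a requirement that starts out twice the size of the backbone itself falls to a single-digit share within about a decade.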

So, the bottom line is that commercial bandwidth capabilities are exploding, and at some point in the near future R&E traffic may be only a large bump in the road, leading to the conclusion that BoD (or switched multi-gigabit pipes) is not necessary. However, this is not the last word, as there are reasons other than cost that drive the creation of special network services.

As I do with our customers, it is up to the members of the R&E community to identify these special requirements and make the case for why commercial network technology and services are not keeping pace with their needs. There are many potential reasons, but over time the threshold for a real difference may continue to rise.

2 comments:

Bstarn said...

Wes:

Excellent analysis. Although there will be a few exceptions in really BIG science, such as the LHC, we are already seeing a significant migration of research traffic to the commercial Internet, driven by the need to access services like Amazon EC2/S3. Microsoft's upcoming cloud announcement is likely to further drive this trend. The bulk of the traffic (up to 70%) on most R&E networks is no different from what is carried on commercial networks, i.e., P2P, YouTube, etc. Only a small percentage can be classified as true research and/or education traffic.
However, I think R&E networks can still play an important role in exploring new business models and architectures, such as helping address the challenges of global warming. See http://green-broadband.blogspot.com
Bill

Wesley said...

Bill,
Thanks for taking the time to read my blog. As you pointed out, the need to "mash up" different data sources from all over the world changes the game. Building a network that both provides specialized services for the R&E community and is well peered with the commodity Internet is a significant challenge to do cost-effectively. The reason comes down to the straightforward Internet economics of today. Although there are exceptions, if you are running a network and are not going to provide nearly symmetric traffic exchange with the networks you want to peer with, then the large Internet providers are not going to peer with you; they are going to want to charge for the transit services they provide. This means you end up paying for both the R&E backbone and for Internet services. You might as well just pay for the Internet services and use the service provider's backbone, which consists of dozens of 10G, and now 40G, circuits.

That is not to say that special networks, such as the LHC and other supercomputing networks, don't make sense. However, for applications that are not as demanding, a periodic evaluation of the costs (e.g., 10G wavelengths versus Internet access) is required.

As a teaser, my next blog entry is going to address network efficiency. We need to start measuring Watts per Gigabit switched or routed and explore what types of architectures result.

Wes