Thursday, February 19, 2009

What to do with Government Telecom Money?

Now that the United States has apparently allocated several billion dollars to improve the country's telecommunications infrastructure, the real question is what to do with the money (assuming we do not want it to go to waste).  First, where could we spend it?

  • Spreading WiMAX around the country, especially in rural areas
  • Increasing 3G wireless deployment to places where coverage is currently spotty
  • Improving wireline access capabilities

Each of the above, or a mix, would have a positive impact on network accessibility and should therefore enable both current and novel applications to drive economic value.  However, the money is limited, and trying to do too much in too many areas may spread it too thinly to have real impact.  To avoid this, more concrete goals need to be established.

For any program, there have to be guiding goals for success and principles for deciding how to spend the money to achieve those goals.  Some possible goals are:

  • Increase the capability and reduce the cost of telecommunication services for businesses, with bandwidth between 10 and 100 Mbps to the Internet
  • Increase the capability and reduce the cost of telecommunication services to the home, with bandwidth between 5 and 20 Mbps to the Internet
  • Increase Internet mobility (i.e., wireless), with bandwidth around 1 to 3 Mbps

The first goal is to ensure that businesses can get affordable Internet service, so that bandwidth does not limit the capabilities of business-to-business and business-to-consumer applications.  Novel business applications could include voice and web integration, online video conferencing for customer support, and web-based services.  The second goal serves the largest segment of the Internet population – the home user.  Applications here abound, including voice, downloading and streaming video, social networking, and consumer purchases of goods.  Internet mobility enables businesses to work on the move and provides the foundation for applications that today exist only in dreams.  However, these applications will only work if we ensure the quality of the services being provided.

To ensure quality, we may need to specify other important requirements over and above raw bandwidth:

  • All traffic must be treated the same, unless the customer or a service pays more for better handling of a particular type of traffic
  • Customers must be able to buy, at a reasonable rate, 10 Mbps or more of sustained download speed to enable streaming of High Definition video channels
  • Voice over IP traffic gets higher-priority service, at least for three voice channels, at no additional cost (see the sketch after this list)
  • Basic infrastructure must meet reasonable reliability goals that are close to traditional telco standards
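
To make the VoIP requirement above concrete, here is a minimal sketch of strict-priority queuing, the kind of mechanism an access device could use to honor it.  The two-class model and the class names are illustrative assumptions on my part, not anything mandated by the requirements.

```python
import heapq

# Illustrative traffic classes; lower number dequeues first (assumption).
PRIORITY = {"voip": 0, "default": 1}

class StrictPriorityScheduler:
    """Sketch of a scheduler where VoIP packets always leave ahead of
    best-effort traffic, regardless of arrival order."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # FIFO tie-breaker within a class

    def enqueue(self, packet, traffic_class="default"):
        heapq.heappush(self._heap,
                       (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

sched = StrictPriorityScheduler()
sched.enqueue("web page chunk")
sched.enqueue("voice frame", traffic_class="voip")
print(sched.dequeue())  # -> "voice frame": VoIP is served first
```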

If the government is going to subsidize network capital, then traffic types must be treated equally, but we should not limit the ability of customers or suppliers to buy more capability.  The second requirement reflects two realities: providing a guaranteed 10 Mbps to every home may cost too much, and there must be an incentive for companies to provide additional services.  The VoIP requirement ensures that consumers and businesses have a real choice of voice service providers by leveraging Voice over IP (VoIP).  This is especially important if, as discussed below, the majority of the money goes to ILECs and cable companies.

With these goals and requirements, how do we determine where to spend money?  Since we still have a commercial marketplace, the government has to decide if it is going to play favorites.  This is especially important if the government wants to get the most bang for its buck (well, really the people’s bucks).  What does this mean?

  • Fund Sprint, Verizon Wireless, and AT&T to roll out 3G (and 4G) services to locations not currently served and more rapidly upgrade existing systems
  • Fund AT&T, Verizon, Qwest, and the other Incumbent Local Exchange Carriers (ILECs), in many cases independent rural companies, to increase capability to every home
  • Fund the cable companies to increase their capabilities to businesses and the home

Oops, where are the CLECs?  For business services, the typical Competitive Local Exchange Carrier (CLEC) may have dozens or a hundred or so buildings on-net in a particular city and only a tiny fraction of consumers.  This essentially means that sending money to the CLECs will likely have less impact on bringing new services to a large number of businesses or consumers.  You may find this unfortunate, but a million dollars could upgrade 100,000 consumers to 10 Mbps, or the same million could improve service for only 100 or so businesses.  The bottom line: the ILECs and cable companies have far more fiber, copper, and rights-of-way than anyone else.

For wireless, the picture is more complicated.  There are many wireless companies, and incenting the development of new towers and systems means that again, you have to play favorites.  Approaches include:

  • Creation of municipal systems that then lease service to the major wireless companies
  • Map poorly served locations and create a bidding environment with the winner getting a regulated franchise for service in that area

How do we reconcile this against current law and regulation?  Since the government is making wholesale changes in the banking and healthcare markets, why not toss out the Telecom Act too?  It is the Telecom Act that created the requirement that the ILECs share their access lines and infrastructure.  Good for the CLECs, but it made the business case for upgrading ILEC facilities more difficult.  Companies and the government have spent much effort and expense to ensure that the ILECs play fair with the CLECs.  In fact, the ILECs could claim that what the CLECs really accomplished was to cherry-pick the larger, higher-revenue customers, leaving the smaller, lower-margin customers to the ILECs.  Because of this, one could argue that the current regulatory environment is not productive and has in fact led to the current state of affairs, in which we are behind other nations in affordable broadband penetration to the home and business.

One approach to change is to remove the cost of regulation and let the ILECs have their access lines back to do with what they will, with a little old-style monopoly regulation to tame the beast.  This has consequences.  First, we would essentially be abandoning competition in the wired home and small-business marketplace.  By subsidizing the incumbent providers, we would hurt competitive access providers and simply reinforce the current duopoly of the cable company and the ILEC, especially at the home.  To make this work, we could go back to rate-of-return regulation, with the government subsidizing the initial capital costs and the providers earning a fixed percentage return on operations.

Another approach, one that does not play favorites, is to establish a hyper marketplace:

  1. Create a government clearinghouse for all requests for Internet services
  2. Service providers submit bids to the government covering their capital costs to serve the location or area and the recurring costs for the service
  3. The government awards the bid to the service provider with the lowest evaluated lifecycle cost and subsidizes the capital cost (a sketch of this evaluation follows the list)
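
As a sketch of step 3, the evaluation might discount each bid's recurring costs to present value and add them to the capital to be subsidized.  The discount rate, evaluation period, and bid figures below are purely illustrative assumptions.

```python
# Hypothetical bid evaluation for the clearinghouse (all figures assumed).

def lifecycle_cost(capital, monthly_recurring, years=10, annual_rate=0.05):
    """Capital plus the present value of the recurring monthly costs."""
    r = annual_rate / 12
    n = years * 12
    # Present value of a fixed monthly payment stream (annuity formula).
    return capital + monthly_recurring * (1 - (1 + r) ** -n) / r

bids = {
    "Provider A": lifecycle_cost(capital=800_000, monthly_recurring=5_000),
    "Provider B": lifecycle_cost(capital=400_000, monthly_recurring=9_000),
}
winner = min(bids, key=bids.get)
print(winner, f"${bids[winner]:,.0f}")  # lowest evaluated lifecycle cost wins
```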

So, what works best?  Do we trust that a regulated duopoly will work, or that a government-run marketplace will do the trick?  In either case, the government now becomes the decider of what gets done or not, who wins and who loses.  This type of intervention raises many questions, including how to create mechanisms that ensure ongoing technology upgrades.  Simply picking winners and losers does not mean that the country is well served in the long term.  Unfortunately, the break-up of the Bell System that enabled the creation of new companies that brought new services, and yes, the very Internet, to the world was long enough ago (1984) that many of us do not remember the previous technology stagnation.

Finally, there is an impact on service providers when we are able to provide high-quality Internet services that enable streaming of High Definition video and Voice over IP services.  What is happening, and will continue to happen, is the decoupling of the physical service provider from the content itself.  A reasonable analogy has already played out in the music industry, where the effect of Apple’s iTunes and similar services has been the essential destruction of the previous distribution chain of music stores.  This could happen to the video game industry as well, but that industry has worked for years to make its software extremely difficult to duplicate by keeping the creation and duplication of console software tightly controlled.

For the cable providers and ILECs providing cable and on-demand television services, this decoupling could be very frightening and may force a re-evaluation of the business models behind services such as FiOS and U-verse.  The reason is that these service providers make assumptions about Average Revenue Per User (a.k.a. ARPU) that include traditional cable TV subscriptions and premium and on-demand channel purchases.  With the service provider decoupled from the content, and with adequate bandwidth for streaming and downloadable content, consumers will be able to go closer to the source of the content and pick and choose what they want.  Services like Hulu, and offerings from the content providers themselves (e.g., USA Network), are starting to make large inroads into how people watch television, especially as a new generation of TVs comes Internet-enabled out of the box.

Oops, I think I stumbled back onto the real question: what is the value chain in Internet services?  More on this in another post.

Monday, November 10, 2008

Elegance and Brute Force: Who Wins?

As computer applications become more demanding on network and computing resources, what is the best way to ensure that performance requirements are met and that other objectives, such as greening, are also achieved?  I contend that being green cannot stop at equipment: it must extend to network data transport services, and it must now include improving applications that use wasteful network protocols as well.

Over the past decade, I have witnessed the continual evolution of optical networking technology, from transport through to end-user services.  At each level in the protocol stack, and in the applications above it, what is being done to improve the efficiency, or elegance, of the environment?  Clearly, much of the technology press is alive with issues related to data center efficiency.  The old days, not too long ago, of adding a new server every time a new application was introduced into an enterprise Information Technology (IT) infrastructure are rapidly disappearing; that was the brute-force approach.  In fact, there are companies whose entire business model is structured around using technology such as Virtual Machine Managers (VMMs) to reduce the number of discrete servers in an enterprise’s data center.  This reduction in servers, and the increase in utilization of the remaining servers, can often reduce data center costs, and the associated power and cooling, by 50% or more.
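
A back-of-the-envelope sketch of why consolidation pays off; the server counts, utilization levels, and power draw below are assumptions for illustration, not measurements.

```python
import math

# Consolidation arithmetic (all figures assumed for illustration).
physical_servers = 100
avg_utilization = 0.10      # typical one-application-per-server load
target_utilization = 0.60   # conservative ceiling for a VM host
watts_per_server = 400      # assumed draw per physical server

hosts_needed = math.ceil(physical_servers * avg_utilization / target_utilization)
power_saved_kw = (physical_servers - hosts_needed) * watts_per_server / 1000
print(hosts_needed, f"{power_saved_kw:.1f} kW")  # 17 hosts, 33.2 kW saved
```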

So, these data center changes have a real impact on IT costs, but why stop there?  Why are we not always looking for, or demanding, efficiency in all aspects of the end-to-end service?  For example, at businesses and at home, lighting is being changed over to Compact Fluorescent Lamps (CFLs), which reduce lighting costs by 60% or more; hybrid cars of various types are being introduced to improve fuel efficiency for transportation; and more people than ever are taking mass transit and teleworking – all to reduce energy costs.

However, this is not the place to stop.  As I have written previously in my entry, Earth Hour for the Internet, telecommunications providers are under significant pressure to improve their transport and data infrastructures to reduce the energy cost per bit transported.  In the optical transport space, this means that service providers are choosing equipment that crams more data into each optical channel (e.g., going from 10 Gbps to 40 Gbps) and more channels onto each optical line system (e.g., from 32 to over 100).  In the data space, it means moving to Ethernet interfaces and switches to reduce the cost and power needed to provide data services.  Service providers are starting to consider replacing the full-blown core routers (i.e., the Provider or “P” routers) in their core Multi-Protocol Label Switching (MPLS) backbones with MPLS-capable switches.  These switches are approximately 33% of the cost and 50% of the weight of the status quo, and they use 50% less power.
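
To put rough numbers on the transport side, using the channel counts and rates above (a sketch; the per-bit energy conclusion assumes total system power grows far more slowly than capacity):

```python
# Line-system capacity before and after the upgrades described above.
old_gbps = 32 * 10    # 32 channels at 10 Gbps -> 320 Gbps
new_gbps = 100 * 40   # 100 channels at 40 Gbps -> 4,000 Gbps
print(f"{new_gbps / old_gbps:.1f}x capacity")  # 12.5x on the same fiber
# If total system power grows far more slowly than capacity,
# the energy cost per bit transported falls accordingly.
```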

So the data center is being worked, and service provider transport and data networks are being addressed; what else is there?  Well, it’s the application, stupid!  And, as we all know, “Stupid is as stupid does,” and many applications use network resources inefficiently.  Two dimensions of application performance have traditionally been how fast the servers provide the data for the applications and how fast the network provides connectivity to the end-user locations.  The power and space efficiency of the servers is being addressed in data center improvements, and data network connectivity is being improved by service providers, as discussed above.  But is that all there is?  Absolutely not.

As applications have grown in richness (e.g., from text, to graphics, to pictures, to sound, and to video), the bandwidth required for satisfactory performance, from the user’s point of view, has continued to increase.  In addition, Internet Protocol (IP) applications such as Voice over IP (VoIP) and video conferencing are increasing bandwidth demands, along with applications that never end, like Internet radio, other streaming services, and even what used to be regarded as the humble email application.

The brute-force approach, and essentially the approach used perhaps 95% of the time, is to continue to increase the raw bandwidth provided to each end-user location.  Of course, this is exactly what network service providers want customers to do.  However, from the customer’s perspective, the cost of increasing the capacity of enterprise-class dedicated access and services is extreme.  For example, going from T1 access speed (approximately 1.5 Mbps raw) to the next step, 2xT1, generally means an increase in access costs of 100% and in data services costs of around 80%.  This means that the next application that goes on the network may be the one that breaks the proverbial camel’s back: the introduction of a new medical imaging or collaboration application could increase your monthly network costs by around 90%.  This may be untenable, as the cost of deploying the application plus the cost of maintaining the network for it may break an organization’s budget.
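
The arithmetic behind that 90% figure, assuming (purely for illustration) that the monthly bill splits evenly between access and data service:

```python
# Illustrative monthly bill for a T1-connected site (split assumed 50/50).
access_cost = 500.0    # dedicated access portion, dollars per month
service_cost = 500.0   # data service portion, dollars per month

# Moving from 1xT1 to 2xT1: access roughly doubles (+100%),
# while data service costs rise around 80%.
new_total = access_cost * 2.0 + service_cost * 1.8
increase = new_total / (access_cost + service_cost) - 1
print(f"{increase:.0%}")  # -> 90% overall monthly increase
```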

But there is another area where brute force is in play.  For example, every IP packet that goes from a customer location to the server, even within the same file transfer, carries the full source and destination addresses of the customer’s computer and the server over and over again – and the move to IPv6 will make matters even worse.  What’s more, many applications are “chatty”: they send many packets between the client application and the server, establishing new communication sessions each time, which brings a whole other set of inefficiencies.  One can say that applications are not developed with green in mind, and neither were the basic data communication protocols (e.g., TCP and IP) created to be green.
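
A worked example of that per-packet overhead, using the protocols' minimum header sizes and a small VoIP payload (a 20-byte G.729 frame every 20 ms):

```python
# Header overhead for a small VoIP packet: IPv4 vs. IPv6.
payload = 20          # bytes of actual voice data (G.729)
udp, rtp = 8, 12      # UDP and RTP header sizes, bytes

for name, ip_hdr in [("IPv4", 20), ("IPv6", 40)]:
    headers = ip_hdr + udp + rtp
    overhead = headers / (headers + payload)
    print(f"{name}: {overhead:.0%} of every packet is repeated header")
# IPv4: 67%, IPv6: 75% -- the same addresses resent with each packet
```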

However, all is not lost; we can return to elegance as a way to improve this sad state of affairs.  Most applications, and the protocols they use to communicate over a network, are vastly inefficient, but just as VMM approaches have improved the efficiency of server usage, this too can be addressed.

There are two areas where this can be attacked, and quite frankly should be attacked.  The first is to demand that developers, especially of client-server applications, develop against a new green standard for information flow.  Just as there are usability, security, and performance requirements for applications, a green standard for information flow between client applications and their supporting servers needs to be put in place.  These standards would limit the number of extraneous network flows created and limit the amount of redundant information that flows between the client and the server.

Barring the unlikely scenario that green network standards make an impact on application development any time soon, other approaches, specifically WAN acceleration appliances, may be a valuable addition to a growing network.  These systems understand how to improve the performance of TCP/IP using well-known techniques such as header compression, as well as novel techniques that reduce application-specific protocol chatter over the WAN by confining wasteful behavior to the high-performance local LAN.  In addition, some employ sophisticated caching techniques that recognize data that has been sent over the WAN before, so it does not have to be sent in its entirety again (sketched below).
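
That caching idea can be sketched as simple chunk-level deduplication.  Real appliances are far more sophisticated; the fixed 4 KB chunk size and in-memory dictionary here are arbitrary assumptions for illustration.

```python
import hashlib

CHUNK = 4096       # illustrative chunk size
seen = {}          # digest -> chunk, mirrored on both ends of the WAN

def encode(data: bytes) -> list:
    """Replace chunks the far side has already seen with short digests."""
    out = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).digest()
        if digest in seen:
            out.append(("ref", digest))   # 32 bytes instead of 4 KB
        else:
            seen[digest] = chunk
            out.append(("raw", chunk))
    return out

data = b"x" * 16384
encode(data)                  # first transfer sends the raw chunk(s)
repeat = encode(data)         # repeat transfer is references only
print(all(kind == "ref" for kind, _ in repeat))  # True
```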

WAN acceleration systems have been field-proven to reduce WAN requirements by 50% or more while at the same time improving the end-user experience.  This means that with an investment of around 25% of one year’s network costs in WAN acceleration equipment, enterprises may be able to postpone the 80%+ increase in monthly network service costs well into the future, saving big bucks.
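
Sketching that payback with assumed figures (the $10,000 monthly network bill is purely illustrative):

```python
# Payback on WAN acceleration (all figures assumed).
monthly_network_cost = 10_000
appliance_cost = 0.25 * monthly_network_cost * 12   # 25% of a year: $30,000
avoided_increase = 0.80 * monthly_network_cost      # deferred upgrade: $8,000/mo

print(f"{appliance_cost / avoided_increase:.1f} months")  # ~3.8 to break even
```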

Reduce, Re-use, and Re-cycle is the mantra of waste reduction in the physical world.  So, to get green:

  • Reduce:
      ◦ Use virtual machine techniques to reduce the number of physical servers, tailored to ensure good resource utilization and therefore efficiency
      ◦ Use WAN acceleration to reduce unnecessary traffic flows between end-users and applications, reducing the need for WAN bandwidth and therefore improving efficiency
      ◦ Whether you buy or develop applications, demand that bandwidth and protocol efficiency be part of the software’s requirements
  • Re-use:
      ◦ Find hardware that has multiple functions so that it can be used or re-used for new applications.  Buy a firewall that can also be used for additional services (e.g., IDPS, anti-virus, etc.)
      ◦ Use software or appliances that re-use information that has already been sent from a client to a server (or in the other direction).  Stop sending the same information back and forth; it only clogs WAN pipes
  • Re-cycle:
      ◦ Stop re-doing all your applications to improve performance.  Find a solution that improves WAN performance, eliminating or delaying the purchase of additional expensive bandwidth without having to re-write, re-engineer, and re-buy equipment and applications.