As computer applications become more demanding of network and computing resources, what is the best way to ensure that performance requirements are met and that other objectives, such as greening, are also achieved? I contend that not only can equipment be green, but that green thinking must be extended to network data transport services and must now include improving applications that use resource-wasteful network protocols as well.
Over the past decade, I have witnessed the continual evolution of optical networking technology, from raw transport up through user services. At each level in the protocol stack, along with the applications themselves, what is being done to improve the efficiency, or elegance, of the environment? Clearly, much of the technology press is alive with issues related to data center efficiency. The not-so-old practice of adding a new server every time a new application was introduced into an enterprise Information Technology (IT) infrastructure - the old brute-force approach - is rapidly disappearing. In fact, there are companies whose entire business model is structured around using technology such as Virtual Machine Managers (VMMs) to reduce the number of discrete servers in an enterprise's data center. This reduction in servers, and the increase in utilization of the remaining servers, can often reduce data center costs, and the associated power and cooling, by 50% or more.
So, these data center changes have a real impact on IT costs, but why stop there? Why are we not always looking for, or demanding, efficiency built into all aspects of the end-to-end service? For example, at business and at home, lighting is being changed over to Compact Fluorescent Lamps (CFLs), which reduce lighting costs by 60% or more; hybrid cars of various types are being introduced to improve fuel efficiency for transportation; and more people than ever are taking mass transit and teleworking – all to reduce energy costs.
However, this is no place to stop. As I have written previously in my entry, Earth Hour for the Internet, telecommunications providers are under significant pressure to improve their transport and data infrastructures to lower the energy cost per bit transported. In the optical transport space, this means that service providers are choosing equipment that crams more data into each optical channel (e.g., going from 10 Gbps to 40 Gbps) and more channels onto each optical line system (e.g., from 32 to over 100). In the data space, this means moving to Ethernet interfaces and switches to reduce the cost and power needed to provide data services. Service providers are starting to consider replacing the full-blown core routers (i.e., the Provider or “P” routers) in their core Multi-Protocol Label Switching (MPLS) backbones with MPLS-capable switches. These switches are approximately 33% of the cost and 50% of the weight of the status quo, and use 50% less power.
So the data center is being worked on, and service provider transport and data networks are being addressed - what else is there? Well, it’s the application, stupid! And, as we all know, “Stupid is as stupid does,” and many applications use network resources inefficiently. Two dimensions of application performance have traditionally been how fast the servers provide the data for the applications and how fast the network provides connectivity to the end-user locations. The power and space efficiency of the servers is being addressed in data center improvements, and data network connectivity is being improved by service providers as discussed above. However, is this all there is? Absolutely not.
As applications have grown in richness (e.g., from text, to graphics, to pictures, to sound, and to video), the bandwidth required for satisfactory performance, from the user’s point of view, has continued to increase. In addition, a growing number of Internet Protocol (IP) applications, such as Voice over IP (VoIP) and video conferencing, are adding to bandwidth demands, along with applications that never end, like Internet radio, other streaming services, and even what used to be regarded as the humble email application.
The brute-force approach, and essentially the approach used in the vast majority of cases, is to continue to increase the raw bandwidth provided to each end-user location. Of course, this is exactly what network service providers want customers to do. However, from the customer’s perspective, the cost of increasing the capacity of enterprise-class dedicated access and services is extreme. For example, going from a T1 access speed (approximately 1.5 Mbps raw) to the next step of 2xT1 access generally means an increase in access costs of 100% and in data services costs of around 80%. This means that the next application that goes on the network may be the one that breaks the proverbial camel’s back: the introduction of a new medical imaging or collaboration application could increase your monthly network costs by roughly 90%, a blend of those access and service increases. This may be untenable, as the cost of deploying the application plus the cost of maintaining the network for it may break an organization’s budget.
But there is another area where brute force is in play. For example, every IP packet that goes from a customer location to a server as part of the same data file transfer carries the full source and destination addresses of the customer’s computer and the server’s computer, over and over again – and in fact the move to IPv6, with addresses four times as large, will make matters even worse. What’s more, many applications are “chatty”: they send many packets between the client application and the server, establishing new communication sessions each time, which brings a whole other set of inefficiencies. One can say that applications are not developed with green in mind, and neither were the basic data communication protocols (e.g., TCP and IP) created to be green.
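To make that repeated-header overhead concrete, here is a rough back-of-the-envelope sketch; the transfer size and per-packet payload are assumptions for illustration, not figures from any particular network:

```python
# Rough per-packet overhead for a bulk transfer: every packet re-sends the full
# source and destination addresses (plus the rest of the TCP/IP headers).
FILE_SIZE = 100 * 1024 * 1024   # assumed 100 MB file transfer
PAYLOAD   = 1460                # typical TCP payload per Ethernet frame (bytes)
IPV4_HDRS = 20 + 20             # minimal IPv4 header + TCP header
IPV6_HDRS = 40 + 20             # fixed IPv6 header + TCP header

packets = FILE_SIZE // PAYLOAD + 1
for label, hdrs in (("IPv4", IPV4_HDRS), ("IPv6", IPV6_HDRS)):
    overhead = packets * hdrs
    print(f"{label}: {packets} packets, {overhead / 1e6:.1f} MB of repeated headers "
          f"({100 * overhead / FILE_SIZE:.1f}% on top of the data)")
```

Even in this simple case, a few megabytes of identical addressing information cross the WAN for a single file, and the IPv6 figure is roughly half again as large.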
However, all is not lost, as we can return to elegance as a way to improve this sad state of affairs. Just as VMM approaches have improved the efficiency of server usage, the applications, and the protocols they use to communicate over the network, can also be made far less wasteful than they are today.
There are two areas where this can, and quite frankly should, be attacked. The first is to demand that developers, especially of client-server based applications, develop against a new green standard for information flow. Just as there are usability, security, and performance requirements for applications, a green standard for information flow between client applications and their supporting servers needs to be put in place. Such a standard would limit the number of extraneous network flows created and limit the amount of redundant information that flows between the client and the server.
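As a simple illustration of the kind of behavior such a standard would discourage, compare a client that opens a fresh connection for every request with one that reuses a single session. This is only a sketch using Python's standard http.client, and the host name is hypothetical:

```python
import http.client

HOST = "app.example.internal"  # hypothetical application server

# Chatty pattern: a new TCP session (and, in practice, a new TLS handshake)
# for every single request.
def fetch_chatty(paths):
    results = []
    for path in paths:
        conn = http.client.HTTPConnection(HOST)   # new connection each time
        conn.request("GET", path)
        results.append(conn.getresponse().read())
        conn.close()
    return results

# Greener pattern: one persistent connection, reused for every request
# via HTTP keep-alive.
def fetch_reused(paths):
    results = []
    conn = http.client.HTTPConnection(HOST)
    for path in paths:
        conn.request("GET", path)
        results.append(conn.getresponse().read())  # read fully before reusing
    conn.close()
    return results
```

The data returned is identical; the second version simply avoids the repeated setup and teardown traffic.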
Barring the unlikely scenario that green network use and implementation makes an impact on application development any time soon, other approaches, specifically WAN Acceleration appliances, may be a valuable addition to a growing network. These systems understand how to improve the performance of TCP/IP using well-known techniques such as header compression, as well as novel, application-aware techniques that reduce protocol chatter over the WAN by confining wasteful behavior to the high-performance local LAN. In addition, some employ sophisticated caching techniques that recognize data that has been sent over the WAN before, so it does not have to be sent in its entirety again.
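A minimal sketch of that caching idea, assuming a simple fixed-size chunking scheme and a shared fingerprint table (real appliances use far more sophisticated, vendor-specific methods):

```python
import hashlib

CHUNK = 8 * 1024      # illustrative chunk size
seen = set()          # fingerprints the remote appliance is assumed to already hold

def wan_encode(data: bytes):
    """Yield ('ref', digest) for chunks seen before, ('raw', chunk) otherwise."""
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).digest()
        if digest in seen:
            yield ("ref", digest)   # 32 bytes cross the WAN instead of 8 KB
        else:
            seen.add(digest)
            yield ("raw", chunk)    # first sighting: send the data itself
```

When users repeatedly pull the same documents, email attachments, or software updates, most chunks resolve to short references rather than full payloads.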
WAN Acceleration systems have been field-proven to reduce WAN requirements by 50% or more while at the same time improving the end-user experience. This means that with an investment of around 25% of one year’s network costs in WAN Acceleration equipment, enterprises may be able to postpone the 80%+ increase in monthly network service costs well into the future, saving big bucks.
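Using the figures quoted above with a hypothetical monthly WAN spend, the back-of-the-envelope payback looks something like this:

```python
# Assumed numbers for illustration only, based on the percentages discussed above.
monthly_wan_cost = 10_000                             # hypothetical current monthly WAN spend ($)
appliance_invest = 0.25 * 12 * monthly_wan_cost       # ~25% of one year's network costs
avoided_increase = 0.80 * monthly_wan_cost            # the 80%+ monthly increase being postponed

payback_months = appliance_invest / avoided_increase
print(f"Payback in about {payback_months:.1f} months")  # roughly 3.8 months under these assumptions
```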
Reduce, Re-use, and Re-cycle is the mantra of waste reduction in the physical world. So, to get green:
· Reduce:
o Use Virtual Machine techniques to reduce the number of physical servers and to ensure good resource utilization, and therefore efficiency
o Use WAN acceleration to reduce unnecessary traffic flows between end-users and applications, reducing the need for WAN bandwidth and therefore improving efficiency
o Whether you buy or develop applications, demand that bandwidth and protocol efficiency be part of the requirements for the software
· Re-use:
o Find hardware that has multiple functions so that it can be used or re-used for new applications. Don’t buy a firewall that cannot be used for additional services (e.g., IDPS, anti-virus, etc.)
o Use software or appliances that re-use information that has already been sent from a client to a server (or in the other direction). Stop sending the same information back and forth; it only clogs WAN pipes
· Re-cycle:
o Stop re-doing all your applications to improve performance. Find a solution that improves WAN performance, eliminating or delaying the addition of expensive extra bandwidth, without having to re-write, re-engineer, or re-buy equipment and applications.